Wednesday, December 24, 2014

Can't Save Data for a Member in a Planning Data Form

With Essbase implied sharing, some members are shared even if you do not explicitly set them as shared. These members are implied shared members. When an implied share relationship is created, each implied member assumes the other member’s value. Essbase assumes (or implies) a shared member relationship in these two situations:

1. When a parent has a single child

2. When a parent has multiple children, but only one child consolidates to the parent

In a Planning form that contains members with an implied sharing relationship, when a value is added for the parent, the child assumes the same value after the form is saved. Likewise, if a value is added for the child, the parent usually assumes the same value after the form is saved. For example, when a calculation script or load rule populates an implied share member, the other implied share member assumes the value of the member that was populated. The last value calculated or imported takes precedence, and the result is the same whether you refer to the parent or the child as a variable in a calculation script.
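This equivalence can be illustrated with a small calculation script fragment. The member names here are hypothetical, purely for illustration:

```
/* Illustrative calc script fragment (hypothetical members "Total"/"Detail",
   where "Total" is a parent whose only stored child is "Detail").
   With implied sharing both names resolve to the same stored cell,
   so these assignments are interchangeable; the last one executed wins. */
"Detail" = 100;
"Total" = 200;   /* the shared cell now holds 200 */
```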


The issue we are going to discuss here is that we lose data on save even when the parent is a Dynamic Calc member and has a single child.

A Dynamic Calc parent with a single child:






If we design the form with the selection shown in the screenshot below:











In the Planning data form, the parent appears below the child member. This is by design whenever you make a selection using commands that select all members below a parent.










Let's enter the data in the data form:










Save the data













Our data is wiped out.

Now we will change the member selection while creating the data form:











Here we go, data saved.











Now, the question again: why this behavior?

Data from a Planning data form passes to Essbase row by row. Because in the form the child member appears before the parent, data goes to Essbase first for the child (the single stored child). Then, when Planning passes the row for the parent, which holds #Missing (no data was entered there), it overwrites the child's data with #Missing.

Note: As we know, Dynamic Calc members are calculated on the fly and are not allocated any storage in Essbase. Here the parent was Dynamic Calc and, because of implied sharing, it pointed to the same storage as the child in the background; when Planning passed data to Essbase for the second row, it updated the child with missing data.
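The row-by-row overwrite described above can be sketched with a tiny simulation. This is only a sketch of the behavior, not Planning's actual code, and the member names are made up:

```python
# Sketch of the save behavior described above (not Planning's actual code).
# With implied sharing, parent and child point at the same stored cell.
storage = {}                                    # the single stored cell
alias = {"Parent": "Child", "Child": "Child"}   # both names map to the child

def save_row(member, value):
    """Planning pushes the form to Essbase row by row."""
    storage[alias[member]] = value

# The form lists the child first, then the Dynamic Calc parent.
save_row("Child", 500)     # the user-entered value lands first
save_row("Parent", None)   # the parent row holds #Missing (None here)

print(storage)             # the child's 500 has been wiped out
```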

For more information, refer to the Oracle documentation on implied sharing.


Hope this helps!

Greetings
SST!

Monday, December 22, 2014

Business Rule Migration from One Server to Another using Export Import in EAS.



When we export Business Rules (BRs), EAS writes them to an XML file. This XML file contains all the BRs, their security (unless you export for Calc Manager), their locations, and so on. If we are migrating BRs from, say, Production to Development, we have to update the BR locations in the XML file: the exported file holds the Production server locations, so we need to edit the XML and replace those locations with the Development ones.

Here are the steps to export and import rules from one server to another:

1. Log into EAS console of Source Environment using an admin id.
2. Right Click on Business Rules Node.
3. Click on Export.
4. In the right hand pane you will get all the BR's listed. Click on Select All.
5. Uncheck For Calc Manager.
6. Click on Dependents.
7. Export the rules.
8. You will be prompted to save the BRs in XML format; save the XML file.
9. Save a copy of the XML file under a different name (Save As).
10. Edit the XML file and replace all old BR locations with the new ones. The location is case-sensitive, and each rule has two location entries: replace the mixed-case value with the new mixed-case location and the uppercase value with the new uppercase location.
Ex: A part of exported .xml
Source .xml:
<Location>
<property class="int" method="setLocID" value="41"/>
<property class="int" method="setLocationID" value="41"/>
<property class="java.lang.String" method="setLocation" value="ProdClusterMXYZ"/>
<property class="java.lang.String" method="setUpperLocation" value="PRODCLUSTERMXYZ"/>
<property class="int" method="setCluster" value="-1"/>
</Location>

Say if our Destination Location is: "Planning/WDEPMR0543/PLAN/XYZ"

In this case, replace all occurrences in your source file:

ProdClusterMXYZ with Planning/WDEPMR0543/PLAN/XYZ

PRODCLUSTERMXYZ with PLANNING/WDEPMR0543/PLAN/XYZ

11. Save the file.
12. Log in to the destination EAS console.
13. Right-click on the Business Rules node.
14. Import the BRs using the modified .xml file.
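The case-sensitive location replacement in the edit step can be sketched as a small helper. This is only an illustrative sketch; the sample XML fragment and file handling are up to your environment:

```python
# Sketch of the XML edit step: replace both case variants of the BR
# location. The location value is case-sensitive, so the mixed-case and
# upper-case entries are replaced separately.
def relocate(xml_text, old_loc, new_loc):
    """Replace old_loc and OLD_LOC with new_loc and NEW_LOC respectively."""
    xml_text = xml_text.replace(old_loc, new_loc)
    return xml_text.replace(old_loc.upper(), new_loc.upper())

# A trimmed-down fragment like the one in the exported .xml above:
sample = ('<property method="setLocation" value="ProdClusterMXYZ"/>\n'
          '<property method="setUpperLocation" value="PRODCLUSTERMXYZ"/>')
fixed = relocate(sample, "ProdClusterMXYZ", "Planning/WDEPMR0543/PLAN/XYZ")
print(fixed)
```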

Hope this Helps.

Greetings
SST!

Monday, October 27, 2014

How to avoid implied sharing in the config file.

When does Implicit sharing happen in Essbase? And what are the ways we can define them in Essbase config file?

Cause & Solution

1) A parent has only one child. In this situation, the parent and the child contain the same data.
Essbase ignores the consolidation property on the child and stores the data only once — thus the parent has an implied shared relationship with the child.

2) A parent has only one child that consolidates to the parent. If the parent has four children, but three are marked as no consolidation, the parent and child that consolidates contain the same data. Essbase ignores the consolidation property on the child and stores the data only once — thus the parent has an implied shared relationship with the child.

3) The following settings can be added to the Essbase.cfg file to control implied sharing:
IMPLIED_SHARE FALSE|TRUE (turns implied sharing OFF or ON for all applications on that Essbase server)
IMPLIED_SHARE [app_name] FALSE|TRUE (turns implied sharing OFF or ON for the specified application on that Essbase server)
Tagging outline members as "Never Share" turns implied sharing OFF for those members; leaving them untagged leaves implied sharing ON.

Order of precedence:
Member-level implied sharing setting – prevails over the application and server settings
Application-level implied sharing setting – prevails over the server setting
Server-level implied sharing setting – lowest precedence
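Putting these together, an Essbase.cfg fragment might look like the following. This is illustrative only; the application name PLANAPP is hypothetical:

```
; Essbase.cfg fragment - illustrative only (PLANAPP is a hypothetical app)
IMPLIED_SHARE FALSE            ; turn implied sharing OFF server-wide
IMPLIED_SHARE PLANAPP TRUE     ; but leave it ON for this one application
```

Per the precedence order above, the application-level line overrides the server-wide line for PLANAPP, and a "Never Share" tag on a member would override both.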


4) An ImpliedShare.txt file exists in E:\Oracle\Middleware\user_projects\epmsystem1\EssbaseServer\essbaseserver1\bin (the path varies by installation).


Hope this Helps.

Greetings
SST!

Application Performance optimization


Application performance optimization can be achieved using the techniques below:
Designing of the Outline using Hour Glass Model
Defragmentation
Restructuring
Compression techniques
Cache Settings
Intelligent calculation
Uncommitted Access
Data Load Optimization

Designing of the Outline using Hour Glass Model:
The outline should be designed so that dimensions are placed in the following order: first the largest dense dimension (by number of members), then the next largest dense dimension, and so on down to the smallest dense dimension; then the smallest sparse dimension, the next smallest, and so on up to the largest sparse dimension, followed by the attribute dimensions.
The hourglass model can improve cube calculation performance by roughly 10%.

Defragmentation:
Fragmentation is caused due to the following:
Frequent Data load
Frequent Retrieval
Frequent Calculation
We can check whether the cube is fragmented by looking at its average clustering ratio in the database properties. The optimal clustering value is 1; if the average clustering ratio is less than 1, the cube is fragmented, which degrades performance.

There are 3 ways of doing defragmentation:
Export the application's data to text files, clear the data, and reload the export without using rule files.
Using a MaxL command: alter database appname.dbname force restructure;
Add and then delete one dummy member in a dense dimension.
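The first two methods can be sketched in MaxL. The application/database names and the file name below are placeholders, not fixed values:

```
/* Illustrative MaxL - names and paths are placeholders */
export database sample.basic all data to data_file 'exp.txt';
alter database sample.basic reset data;
import database sample.basic data from data_file 'exp.txt'
    on error abort;

/* or simply force a restructure: */
alter database sample.basic force restructure;
```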

Restructuring:
There are 3 types of restructure:
Outline Restructure
Sparse Restructure
Dense Restructure / Full Restructure

Outline Restructure: When we rename a member or add an alias to a member, an outline restructure takes place.
The .OTL file is converted to a .OTN file, which in turn is converted back to .OTL.
The .OTN file is a temporary file that is deleted by default after the restructure.
Dense Restructure: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the data blocks and regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring is the most time-consuming and can take a long time to complete for large databases.
Sparse Restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.

Compression technique:
When Essbase stores blocks to disk, it can compress the data blocks using one of the following compression methods, this is based on the type of data that is being loaded into the Essbase database.

No Compression:  It is what it says, no compression is occurring on the database.
zLib Compression:  This is a good choice if your database has very sparse data.
Bitmap compression:  This is the default compression type and is good for non-repeating data.
RLE (Run Length Encoding) compression:  This type of compression is best used for data with many zeroes or repeating values.
Index Value Pair: Essbase applies this compression if the block density is less than 3%. Index Value Pair addresses compression on databases with larger block sizes, where the blocks are highly sparse.
In most cases, bitmap compression is the best choice, giving the database a good combination of performance and small data files. That said, much depends on the shape of the data being loaded into the database; the best way to determine the right method is to try each type and evaluate the results.

Caches: There are 5 types of caches.
Index Cache: A buffer in memory that holds index pages from the index files (.ind). The index cache should be set equal to the size of the index files.
Note: Restart the database for a new cache setting to take effect.
Data Cache: A buffer in memory that holds uncompressed data blocks. The data cache should be about 12.5% of the .pag file size; by default it is set to 3 MB.
Data File Cache: A buffer in memory that holds compressed data blocks. The data file cache should be the size of the .pag files; it is set to 32 MB by default, with a maximum of 2 GB.
Calculator Cache: Used to improve calculation performance. We set the calculator cache in a calculation script with SET CACHE HIGH|LOW|OFF, and we can also set calculator cache values in the Essbase.cfg file. The server must be restarted for calculator cache changes made in the config file to take effect.
Dynamic Calculator Cache: A buffer in memory that Essbase uses to store all of the blocks needed to calculate a Dynamic Calc member in a dense dimension.
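The sizing rules of thumb above can be collected into a small helper. This is a sketch of the guidelines stated here, not an official Oracle formula:

```python
# Rough cache sizing from the rules of thumb above (not an official formula).
def suggest_caches(ind_bytes, pag_bytes):
    """Suggest cache sizes in bytes from the .ind and .pag file sizes."""
    two_gb = 2 * 1024**3
    return {
        "index_cache": ind_bytes,                   # ~ size of the index files
        "data_cache": int(pag_bytes * 0.125),       # ~ 12.5% of the .pag files
        "data_file_cache": min(pag_bytes, two_gb),  # capped at 2 GB
    }

mb = 1024**2
print(suggest_caches(40 * mb, 800 * mb))
```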

Intelligent Calculation:
When a block is created for the first time, Essbase treats it as a dirty block. When we run CALC ALL or CALC DIM, Essbase calculates and marks all blocks as clean. Subsequently, when we change a value in any block, that block is marked dirty, and when we run the script again only the dirty blocks are recalculated; this is known as intelligent calculation.
By default, intelligent calculation is ON. To turn it off, use the command SET UPDATECALC OFF; in calculation scripts.
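In a calculation script this looks like the following fragment (the FIX member and dimension are hypothetical):

```
/* Illustrative calc script fragment: bypass intelligent calculation
   so every block in the FIX is calculated, clean or dirty. */
SET UPDATECALC OFF;
FIX ("Budget")              /* hypothetical member */
    CALC DIM ("Accounts");  /* hypothetical dimension */
ENDFIX
```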

Uncommitted Access:
Under uncommitted access, Essbase locks blocks for write access only until it finishes updating the block, whereas under committed access, Essbase holds locks until the transaction completes. With uncommitted access, blocks are therefore released more frequently, and Essbase performance is generally better. Besides, parallel calculation works only with uncommitted access.

Data load Optimization: Data load optimization can be achieved by the following:
Always load data from the server rather than the file system.
The data values should come last, after the member combinations, in the data load file.
Use #Mi instead of zero (0); a stored zero consumes 8 bytes of memory per cell.
Restrict values to a maximum of 3 decimal places, e.g. 1.234.
Data should be loaded in inverted hourglass order.
Always pre-aggregate data before loading it into the database.
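Two of these points (replacing zeros with #Mi and limiting decimals) can be sketched as a pre-processing step over a load file row. The column layout here is hypothetical:

```python
# Sketch: pre-process a data load row so zeros become #Mi and values are
# limited to 3 decimal places. The column layout is hypothetical.
def clean_value(raw):
    try:
        v = float(raw)
    except ValueError:
        return raw           # a member name, not a data value
    if v == 0:
        return "#Mi"         # a stored zero would cost 8 bytes per cell
    return f"{round(v, 3):g}"

row = ["East", "Sales", "Jan", "0", "1.23456"]
print([clean_value(x) for x in row])   # data values come last in the row
```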

These are just the initial, general optimization points; they can yield large performance improvements without much effort and should generally handle about 70% of optimization issues.


 Hope this Helps.

Greetings
SST!

Thursday, October 23, 2014

Execute the Batch File in Workspace

Follow the steps given below, and you can run bat files in Workspace.

Step 1: Create a Job Application

Create a generic job application as shown below. (Navigate -> Administer -> Reporting and Analysis -> Generic Job Applications)

- Provide a Product Name (this is the name that shows up as the Job Factory Application when importing a job).
- Select the Product Host (this is the Workspace server).
- In Command Template, type $PROGRAM $PARAMS; you can also click the buttons to insert the command template. (This is where it differs from a MaxL job application.)
- Provide the Executable as %WINDIR%\System32\cmd.exe

Step 2: Import a bat file

Import a bat file into Workspace as a job. (Import file as job)


Check "Import as Generic Job" check box.


Select the newly created Job Application and click Finish.


Step 3: Run the job

You can now execute the bat file by double-clicking it, or by right-click -> Run as Job.



Provide a path for output and click "Run"

I keep the MaxL files on the Workspace server; they are executed by the bat job.
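For reference, a minimal bat file of the kind imported above might look like this. The directory, script name, and argument usage are placeholders for my environment, not fixed values:

```
@echo off
rem Illustrative only - paths and file names are placeholders.
set MAXL_DIR=D:\Hyperion\scripts
call essmsh %MAXL_DIR%\nightly_load.mxl %1 %2
```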


Greetings
SST!

Last Logged in Time by User

How can I monitor the last logged in time of a user?

In EAS
We can generate log charts for the server (or an application), select "Logged Users" as the filter, and refresh the chart. This uploads the log entries into a table called SERVERLOGDETAIL in the EAS schema (relational database).



We can then run a SELECT statement like the one below against that table, and it will give you the last login time of each user.

SELECT username as "User Name", MAX(entrydate) AS "Last Login Date" 
FROM serverlogdetail where username is not null
GROUP BY username;

If we are looking for an Application specific record then we can use the below given SELECT statement.

SELECT username as "User Name", MAX(entrydate) AS "Last Login Date" 
FROM serverlogdetail where msgtext like 'Setting application ASOsamp %'
GROUP BY username;

Planning Login details
There is no straightforward way to check the last login time of a user in Planning.

We can make use of auditing options in Planning. We can enable Auditing for Data and Business Rules.
You can read more about the same in this link http://www.oracle.com/technetwork/middleware/planning/tutorials/index-091248.html

If you enable auditing for Data, you can run a query like the one below. (This returns the last data entry made by each user.)

SELECT USER_NAME AS "User", MAX(time_posted) AS "Last Login Time" FROM HSP_AUDIT_RECORDS WHERE TYPE = 'Data' GROUP BY USER_NAME;

Similarly, you can extend the SQL to check whether a user ran a business rule and derive the last login details from that.


Greetings
SST!

Wednesday, October 22, 2014

Oracle EPM Auto Log out Time or Hyperion Workspace time out sessions

When working with Hyperion Planning, the following question is common:

How to change the Oracle EPM Workspace settings to extend the amount of time before a user is automatically logged out due to inactivity.

Follow the below steps to Increase Session Timeout and Keep Alive Interval.

1. In the Shared Service navigate to Application Groups -> Foundation -> Deployment Metadata.
2. Open Shared Services Registry -> Foundation Services -> Workspace -> WebServer ->WorkSpace WebApp@_45000.
3. Right-click "Properties" and select "Export for Edit".
4. Save the file locally and open it in a text editor.
5. Search for SessionTimeout and edit the value as required (e.g. SessionTimeout=30). Save the file.
6. Search for KeepAliveInterval and edit the value as required (e.g. KeepAliveInterval=30). Save the file.
7. Right-click "Properties" and select "Import after Edit".
8. Browse for the file that was edited and click "Finish".

9. Restart the Workspace Web Application Services.

Greetings
SST!!

Data Block & Index System

Data Block (Building Block of Essbase) & Index System

Data values are defined as the intersection of one member from one dimension with one member from another dimension. A data value is stored in one cell in the database.  In order to refer to that specific data value in a multidimensional database you need to specify its members from each dimension.
  
Essbase stores and accesses data through the use of data blocks and the index system. A data block is created for each unique combination of sparse dimension members.  Essentially, the data block represents all the dense dimension members for its combination of sparse dimension members.

Every time a data block is created, Essbase then creates an index entry.  The index is comprised of combinations of sparse dimension members.  There is one entry for each combination of sparse dimension members where a data value exists. The data block is a fixed format data structure the existence of which is driven by data-relevant sparse member combinations in the index. By data-relevant we mean that only where business data actually exists across sparse member combinations will a data block be generated.

In the example below, Entity, Scenario, and Year are sparse dimensions, while Accounts and Periods are dense dimensions.  When Essbase searches for a data value, it is using the index to locate the block containing that data value.  Once the data block is located, it targets the exact cell containing the data value.  Essbase is able to handle sparse data so well because the index provides a pointer to the correct data block.  Once the data block is located, Essbase can quickly retrieve the data value.  In the example below, let’s say you are searching for Budgeted Expense for NY in February 2013.   The index provides a pointer using the sparse dimensions to locate the data block containing the data for NY, Budget, 2013. The data value in question is then contained at the intersection of Expense and Feb.
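The index-and-block lookup described above can be simulated with a tiny sketch. The member names come from the example; the data structures are an illustration only, not Essbase internals:

```python
# Toy model of the index/block scheme described above (not Essbase internals).
# One block exists per sparse combination; a block holds all dense cells.
index = {
    ("NY", "Budget", "2013"): {        # index entry -> data block
        ("Expense", "Feb"): 1500.0,    # dense intersections inside the block
        ("Expense", "Jan"): 1400.0,
    }
}

def lookup(entity, scenario, year, account, period):
    block = index.get((entity, scenario, year))  # index points to the block
    if block is None:
        return None                              # no block: no data exists
    return block.get((account, period))          # locate the exact cell

print(lookup("NY", "Budget", "2013", "Expense", "Feb"))
```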



Introduction about Essbase

Essbase stands for Extended SpreadSheet dataBASE.

Essbase is multi-threaded OLAP database software that takes advantage of symmetric multiprocessing hardware platforms and is based on a Web-deployable, thin-client architecture. The server acts as a shared resource, handling all data storage, caching, calculations, and data security.

Essbase Architecture

The Essbase product family features a middle-tier architecture to handle a wide range of analytic applications across large multi-user environments.
The database tier consists of Essbase Server (where Essbase databases are stored) and any relational databases used to support the Essbase environment.
The client tier includes locally installed client applications, such as EAS and Smart View.
The middle tier includes application services that facilitate communication and data transfer between the database tier and the client tier.



Essbase Storage Models

Essbase supports two storage types: Block Storage (BSO) and Aggregate Storage (ASO). Essbase can support many different business models using these two storage types.
The screenshot below provides a partial list of business analyses that you can model in Essbase, with a suggestion for the storage type that best meets the challenge of each business model.




Block Storage
Block storage databases are optimized for data sets that are partially dense. Data is stored in dense data blocks, which are indexed along the sparse dimensions for retrieval.

This storage paradigm enables you to perform:
  • Top-down budgeting and planning
  • Sophisticated pre-aggregation calculations


Aggregate Storage
Aggregate storage databases are optimized for sparse data sets that primarily require simple aggregation. Any non-aggregation calculations are performed dynamically when requested in reports. Incremental loading and fast aggregation can provide near real-time analysis of transactional data.

Aggregate storage databases enable dramatic improvements in both:
  • Database aggregation time
  • Dimensional scalability


