Application Performance Optimization
Application performance can be optimized using the following techniques:
Designing the outline using the hourglass model
Defragmentation
Restructuring
Compression techniques
Cache settings
Intelligent calculation
Uncommitted access
Data load optimization
Designing the Outline Using the Hourglass Model:
The outline should be designed so that dimensions are placed in the following order: first the largest dense dimension (in number of members), then the next largest dense dimension, and so on down to the smallest dense dimension. Then place the smallest sparse dimension, then the next smallest, continuing up to the largest sparse dimension, followed by the attribute dimensions.
The hourglass model can improve cube calculation performance by around 10%.
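As an illustration, a hypothetical outline ordered per the hourglass model might look like this (all dimension names and member counts are invented for the example):

    Measures      Dense    (40 members)    - largest dense dimension first
    Time          Dense    (17 members)    - smallest dense dimension last
    Scenario      Sparse   (4 members)     - smallest sparse dimension first
    Market        Sparse   (25 members)
    Product       Sparse   (180 members)   - largest sparse dimension last
    Population    Attribute                - attribute dimensions at the end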
Defragmentation:
Fragmentation is caused by the following:
Frequent data loads
Frequent retrievals
Frequent calculations
We can check whether the cube is fragmented by looking at its average clustering ratio in the database properties. The optimum clustering value is 1; if the average clustering ratio is less than 1, the cube is fragmented, which degrades its performance.
There are 3 ways of doing defragmentation (see the MaxL sketch after this list):
Export the application's data to text files, then clear the data and reload it without using rule files.
Use the MaxL command: alter database Appname.Dbname force restructure;
Add and then delete one dummy member in a dense dimension.
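A hedged MaxL sketch of the first method, export/clear/reload (Sample.Basic and the file names are placeholders; note that no rules file is used on the import):

    export database Sample.Basic all data to data_file 'sampleexport.txt';
    alter database Sample.Basic reset data;
    import database Sample.Basic data from server data_file 'sampleexport.txt'
        on error write to 'sampleexport.err';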
Restructuring:
There are 3 types of restructure:
Outline Restructure
Sparse Restructure
Dense Restructure / Full Restructure
Outline Restructure: When we rename a member or add an alias to a member, an outline restructure takes place. The .OTL file is converted to a .OTN file, which in turn is converted back to .OTL. The .OTN file is a temporary file that is deleted by default after the restructure.
Dense Restructure: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the data blocks and regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring is the most time consuming and can take a long time to complete for large databases.
Sparse Restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.
Compression Techniques:
When Essbase stores blocks to disk, it can compress the data blocks using one of the following compression methods, depending on the type of data being loaded into the Essbase database.
No Compression: Exactly what it says; no compression is applied to the database.
zLib Compression: This is a good choice if your database has very sparse data.
Bitmap compression: This is the default compression type and is good for non-repeating data.
RLE (Run Length Encoding) compression: This type of compression is best used for data with many zeroes or repeating values.
Index Value Pair: Essbase applies this compression automatically when the block density is less than 3%. Index value pair addresses compression on databases with larger block sizes, where the blocks are highly sparse.
In most cases, bitmap is the best choice, giving your database the best combination of good performance and small data files. That said, much depends on the shape of the data being placed into the database; the best way to determine the right compression method is to try each type and evaluate the results.
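As a hedged sketch, the compression method can be changed with MaxL (Sample.Basic is a placeholder, and the exact keywords may vary by release, so verify against the MaxL reference for your version); the new method applies to blocks written to disk after the change:

    alter database Sample.Basic set compression rle;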
Caches: There are 5 types of caches (a settings sketch follows this section):
Index Cache: It is a buffer in memory that holds index files (.ind). The index cache should be set equal to the size of the index file.
Note: Restart the database in order to make a new cache setting come into effect.
Data Cache: It is a buffer in memory that holds uncompressed data blocks. The data cache should be 12.5% of the .pag file size; by default it is set to 3MB.
Data File Cache: It is a buffer in memory that holds compressed data blocks. The data file cache should be the size of the .pag files. It is set to 32MB by default, with a maximum size of 2GB.
Calculator Cache: It is used to improve calculation performance. We set the calculator cache in the calculation script with SET CACHE HIGH | LOW | OFF;. We also set the values for the calculator cache levels in the Essbase.cfg file, and the server must be restarted for changes made in the config file to take effect.
Dynamic Calculator Cache: The dynamic calculator cache is a buffer in memory that Essbase uses to store all of the blocks needed to calculate a Dynamic Calc member in a dense dimension.
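A hedged sketch of the settings described above, using MaxL for the storage caches and a calculation script for the calculator cache (Sample.Basic and all sizes are placeholders, not recommendations):

    alter database Sample.Basic set index_cache_size 100mb;      /* ~ index (.ind) file size */
    alter database Sample.Basic set data_cache_size 300mb;       /* ~ 12.5% of .pag size */
    alter database Sample.Basic set data_file_cache_size 600mb;  /* ~ .pag size, max 2GB */

In the calculation script:

    SET CACHE HIGH;

In Essbase.cfg (illustrative byte values; the server must be restarted):

    CALCCACHEHIGH 50000000
    CALCCACHEDEFAULT 300000
    CALCCACHELOW 200000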
Intelligent Calculation:
Whenever a block is created for the first time, Essbase treats it as a dirty block. When we run CALC ALL / CALC DIM, Essbase calculates all blocks and marks them as clean. Subsequently, when we change the value in any block, it is marked as a dirty block, and when we run the script again only the dirty blocks are calculated; this is known as intelligent calculation.
By default, intelligent calculation is ON. To turn it off, use the command SET UPDATECALC OFF in calculation scripts.
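A minimal calculation script sketch of the behaviour described above (SET UPDATECALC and CALC ALL are standard calc script commands):

    SET UPDATECALC ON;   /* intelligent calculation: touch only dirty blocks */
    CALC ALL;            /* dirty blocks are calculated and marked clean */

With SET UPDATECALC OFF; the same CALC ALL recalculates every block regardless of its dirty or clean status.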
Uncommitted Access:
Under uncommitted access, Essbase locks blocks for write access only until it finishes updating the block, whereas under committed access, Essbase holds locks until the transaction completes. With uncommitted access, blocks are therefore released more frequently than with committed access, and Essbase performance is better. Besides, parallel calculation works only with uncommitted access.
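A hedged MaxL sketch of switching a database to uncommitted access (Sample.Basic is a placeholder):

    alter database Sample.Basic disable committed_mode;

With committed mode disabled, a calculation script can also request parallel calculation, e.g. SET CALCPARALLEL 3;.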
Data Load Optimization:
Data load optimization can be achieved by the following (a sample record layout is sketched after this list):
Always load the data from the server rather than from the file system.
In the data load file, the data values should come last, after the member combinations.
Use #MI instead of zero (0); if we use zero, it uses 8 bytes of memory for each cell.
Restrict values to a maximum of 3 decimal places, e.g. 1.234.
Data should be loaded in the form of an inverted hourglass model.
Always pre-aggregate data before loading it into the database.
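As an illustration of the layout points above, a hypothetical load record puts the member combination first and a value with at most 3 decimals (or #MI for missing) last; all names are invented:

    "Actual" "East" "Cola" "Jan" "Sales" 123.456
    "Actual" "East" "Cola" "Feb" "Sales" #MI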
These are just the initial, general optimization points, which can bring large performance improvements without too much effort; generally, they should handle about 70% of our optimization issues.
Hope this helps.
Greetings
SST!