Reverse-Engineering Block Density Statistics

Block density – that is, the percentage of non-missing dense intersections at a non-missing sparse intersection – is an important performance parameter for BSO databases, but the Essbase documentation does not provide details of the algorithm used to derive the ‘Average Block Density’ statistic visible in EAS or via MaxL. I have heard it said many times that the statistic is based on only a sample of actual blocks, and while this seems plausible I have never seen any empirical evidence. I have also heard it suggested that the statistic takes into account (only? also?) blocks which have passed through the data cache, which would presumably weight the statistic toward blocks which have been queried or ‘touched’ for calculation purposes over time.

With no authoritative answer available, I thought it would be interesting to try to reverse-engineer the algorithm. The following post includes both the methodology and the calculation scripts used, so others can check my working or validate the results on their own installations. (If you have no interest in the methodology but still want the results, skip to the conclusion!)

An initial clue is that viewing database statistics immediately after loading a BSO database – before performing any queries or calculations – always shows that exactly 100 blocks have been read from the .pag file(s) [1]. This is true not only for a tiny (475 possible blocks) database such as Sample.Basic, but also for much larger ‘real-world’ databases. Perhaps these 100 blocks are the sample from which the block density statistic is derived?

Statistics dialog showing 100 blocks read from page files

To test that hypothesis it would be helpful to know which 100 blocks have been read.  The first 100?  The last?  A random selection?  Unfortunately, the only method I could come up with to figure this out is painfully indirect:  Fire a query against one block at a time, refreshing the database statistics and watching for the “number of blocks read” statistic to increment.  If the queried block is one of the 100 already read, it will be found in the data cache and no increase in the number of blocks read from the .pag file will be observed.  If the block is not among those initial 100, Essbase will have to retrieve it from disk, and the number of blocks read from the .pag file will increment by one.
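To make the approach concrete, here is a minimal sketch of that probe loop. The two helper functions are hypothetical placeholders for whatever mechanism you use to retrieve a single block and to read the ‘blocks read from .pag’ statistic (a report script or MDX retrieve plus a statistics refresh, for example); they are not real Essbase APIs.

    # Sketch of the block-by-block probe described above. Both helpers are
    # hypothetical stand-ins, not real Essbase API calls.
    def retrieve_single_block(block_members):
        """Hypothetical: run a retrieval that touches exactly one block."""
        raise NotImplementedError

    def blocks_read_from_pag():
        """Hypothetical: return the current 'blocks read from .pag' statistic."""
        raise NotImplementedError

    def find_preloaded_blocks(blocks_in_number_order):
        """Identify the blocks already sitting in the data cache at startup."""
        preloaded = []
        previous = blocks_read_from_pag()
        for block in blocks_in_number_order:
            retrieve_single_block(block)
            current = blocks_read_from_pag()
            if current == previous:
                # No physical read occurred, so this block was already cached -
                # i.e. it is one of the 100 blocks read when the database started.
                preloaded.append(block)
            previous = current
        return preloaded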

I wrote a simple calculation script [2] to create every potential block in Sample.Basic, and set the size of the data cache comfortably high enough to hold all 475 blocks in uncompressed form [3]. I then restarted the application – to clear out its data cache – and queried each block in turn, in ‘block number’ order [4].
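As a sanity check on the cache sizing, a quick back-of-the-envelope calculation (assuming the standard Sample.Basic outline, and the usual 8 bytes per stored cell for an expanded block held in the data cache) shows that all 475 blocks fit in well under a megabyte:

    # Rough data cache requirement for holding every Sample.Basic block
    # in expanded (uncompressed) form at once.
    STORED_CELLS_PER_BLOCK = 192   # 12 months x 8 measures x 2 scenarios
    BYTES_PER_CELL = 8             # expanded blocks use 8 bytes per cell
    POSSIBLE_BLOCKS = 475          # 19 stored Products x 25 stored Markets

    expanded_block_bytes = STORED_CELLS_PER_BLOCK * BYTES_PER_CELL
    total_kb = expanded_block_bytes * POSSIBLE_BLOCKS / 1024
    print(f"{expanded_block_bytes} bytes per block, {total_kb:.1f} KB in total")
    # -> 1536 bytes per block, 712.5 KB in total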

Here are the first 30 results:

First 30 Block Results

Happily, the pattern doesn’t seem too hard to make out:  Every fourth block has been read.

So far, so good – but are these, in fact, the blocks that drive the density statistic?  If so, it should be possible to control the density statistic by manipulating the data in just these 100 blocks.  Conversely, the density of any other block in the database should make no difference to the density statistic.

After running my original script (which populated just one cell for every potential block) the block density statistic returned 0.52%, which is as expected (there are 192 stored cells in each Sample.Basic block, and 1/192 * 100 = 0.52). I created another calculation script [5] to force the density of only every fourth block to 50% and ran it. Immediately, the block density jumped to exactly 50.00%. How about setting the density of only every fourth block to zero? I created yet another calculation script to set the density of all the blocks to 50%, and then reset only every fourth block to #Missing [6] (without actually removing the block). Sure enough, the reported density statistic fell to 0%, despite the database containing 375 blocks with a density of 50%, and only 100 with a density of 0%!
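The arithmetic is easy to simulate. The sketch below assumes the sample really is the 100 ‘every fourth’ blocks identified earlier, and shows how far the reported figure can sit from the true average density:

    # Simulated densities (as fractions of the 192 stored cells per block)
    # for the 50%-everywhere-except-the-sampled-blocks experiment.
    CELLS_PER_BLOCK = 192
    densities = [0.5] * 475                 # every block populated to 50%
    sample_positions = range(0, 400, 4)     # assumed 100 sampled positions (0-based)
    for i in sample_positions:
        densities[i] = 0.0                  # sampled blocks reset to #Missing

    sampled = [densities[i] for i in sample_positions]
    print(f"one cell per block : {1 / CELLS_PER_BLOCK:.2%}")              # 0.52%
    print(f"reported (sampled) : {sum(sampled) / len(sampled):.2%}")      # 0.00%
    print(f"true average       : {sum(densities) / len(densities):.2%}")  # 39.47%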

Lastly, I ran a calculation script to set just the 400th block to have a density of 100% [7]. No prizes to be awarded for guessing that the average block density statistic jumped to 1%, since the 100-block sample now comprised 99 empty blocks and one with 100% density [8].

The above tests, however, leave several questions unanswered:

  • Does calculating or retrieving other blocks change the density statistic (in other words, does touching blocks cause them to be included in the density sample)?
  • What happens when not all blocks exist?
  • Are the 100 sampled blocks always four blocks apart? If so, what happens when there are fewer than 400 blocks?

The first question was easily addressed. Querying every Product for Market “Market” (the last 19 blocks in the database according to standard block number) resulted in no change in the density statistic. The same was true of running another calculation script that touched just the “Market” blocks [9]. It seems reasonable to conclude that the same 100 blocks are used for deriving the density statistic regardless of database activity.

Exploring what happens when not all blocks exist is a little more complicated. I ran a calculation script [10] to completely remove the “200-10”->“New York” block (the second sampled block in the table above), then repeated the block-by-block query test:

First 10 Block Results after CLEARBLOCK on 200-10->New York

Notice that “200-10”->“New York”, which falls between “100”->“New York” and “200-20”->“New York” in block-number order, no longer exists. On finding that a block to be sampled is missing, Essbase has stepped forward to the next existing block (“200-20”->“New York”), and then continued sampling every fourth block from “200-20”->“New York” onward (i.e. all subsequent samples have been shifted forward by one position compared to the original results above).

What about deleting one of the blocks that is not in the original sample? I ‘reset’ the database by creating every block, and ran another calculation script [11] to delete only “100-20”->“New York”:

First 10 Block Results after CLEARBLOCK on 100-20->New York

In this case, the removal of a block makes no difference. Essbase looks for every fourth block, finds that they all exist, and the same set of blocks as in the original example is sampled.

Finally, a 100-block sample of every fourth block is clearly not possible if only 300 blocks exist. To determine what happens in this case I ran a CLEARBLOCK calculation script fixed on Market “Central” and its descendants [12], reducing the Sample.Basic block count to just 342 blocks. This time the results look different – though the sample size is still 100 blocks, every third block is selected, which gives a heavy hint as to the underlying general rule!

First 10 Block Results with Fewer than 400 Existing Blocks

In conclusion, the algorithm behind the block density statistic appears to be as follows (a code sketch of the rule appears after the list):

  • Sample the first existing block
  • Sample every nth block according to standard Essbase block numbering, where n is the total number of existing blocks divided by 100, ignoring any remainder
  • If the nth block does not exist, sample the next higher-numbered existing block, and resume working forward from that position in increments of n
  • Sample exactly 100 blocks
  • Average the density of the 100 block sample
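Expressed as code, my reading of that behaviour looks something like the sketch below. This is an inference from the tests above, not a documented algorithm; the existing_blocks parameter stands in for the set of block numbers that actually exist, in standard block-number order (0-based here).

    def sampled_block_positions(existing_blocks, sample_size=100):
        """Return the 0-based block numbers whose densities appear to feed
        the 'Average Block Density' statistic (inferred behaviour only)."""
        stride = max(1, len(existing_blocks) // sample_size)  # ignore remainder
        last_block = max(existing_blocks)
        sample, position = [], 0
        while len(sample) < sample_size and position <= last_block:
            # If the target block does not exist, step forward to the next
            # existing block and resume counting the stride from there.
            while position <= last_block and position not in existing_blocks:
                position += 1
            if position > last_block:
                break
            sample.append(position)
            position += stride
        return sample

    # With all 475 Sample.Basic blocks present the stride is 475 // 100 = 4,
    # giving positions 0, 4, 8, ... 396; with 342 blocks remaining after the
    # CLEARBLOCK on "Central", the stride falls to 342 // 100 = 3.
    print(sampled_block_positions(set(range(475)))[:5])         # [0, 4, 8, 12, 16]
    print(sampled_block_positions(set(range(475)) - {4})[:5])   # [0, 5, 9, 13, 17]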

One further point is worth considering. In the examples above I was able to manipulate the block density statistic by modifying specific blocks; I did wonder if there might be real-world cases where misleading results are produced by (for example) all 100 sampled blocks falling under a single, unrepresentative member of a particular sparse dimension. While this is theoretically possible it seems extremely unlikely in practice. It would require both an ‘unlucky’ relationship between the sizes of the sparse dimensions and the total block count, and a consistent distribution of existing blocks across possible block numbers. In short, despite sampling what may be only a very small fraction of the blocks in a real-world database, I believe that this is a pretty robust algorithm.

Footnotes

  1. It seems this statistic is not available in MaxL, or perhaps not without enabling statistics gathering
  2. CrtAll.csc
  3. This ensures that a single block is never read from .pag file twice, due to having been cycled out of a full cache
  4. To understand block order, see the DBAG entry for Block Calculation Order (11.1.2.3)
  5. 100Blk50.csc
  6. 100Blk0.csc
  7. 1Blk100.csc
  8. This is strong evidence that all 100 blocks sampled are feeding the density statistic
  9. MktOnly.csc
  10. RmNY2001.csc
  11. RmNY1002.csc
  12. ClrCntrl.csc

