Add a Dimension to ASO without Breaking Aggregate Views

This post is a quick follow-on to my last, inspired by the same piece of client work. Fair warning: it’s only going to make sense if you are already somewhat familiar with aggregate views and view definition scripts (.csc). If you’re not already familiar with the concepts but want to read this anyway, I’d refer you to a presentation given at Kscope11 as an excellent (ahem) primer on the topic (free associate membership of ODTUG required).

But in summary: many people maintaining large or complex ASO cubes have developed very carefully crafted sets of aggregate views to optimize query performance. They also know that, unfortunately, some structural changes can invalidate those view definitions; adding levels to stored dimensions and adding new stored dimensions are two examples. This can necessitate a lot of painstaking, trial-and-error optimization to generate a new set of aggregate views that provides performance equivalent to the original set.

In the course of adding a new dimension to an existing cube, I realized that there was a straightforward way to preserve the validity of my existing set of aggregate views.

As I said, I’m going to assume a certain degree of familiarity with the concepts. Documenting the derivation of aggregate view IDs is definitely on my list (I’ve been promising to finish a fully-functional aggregate view calculator since 2011, and every year at Kscope someone embarrasses me by asking if it’s finished yet). But I will take a quick look at what’s inside an aggregate view definition script. Here’s an ASOsamp.Sample example:

3
4142187940
0
1
17
0.980087482822801
139
0.561948262991848

There are three components to these scripts. First, we have a count of the views described by the script; the input data itself counts as one of those views:

3

Second, an “outline ID”. Essentially, the outline ID is a number that will change if the number of stored levels in the cube changes. This is the snitch that tells Essbase “hey, your view definition script is no longer valid” if you make one of the relevant structural changes and attempt to re-run the existing script:

4142187940

Third, we have the aggregate views themselves, each represented by a pair of numbers – the view “ID” followed by the estimated view size as a proportion of the input data size.

0
1
17
0.980087482822801
139
0.561948262991848

View IDs are integer values that encode the selection of a specific level in each stored dimension. Without providing a full mathematical explanation of view ID derivation, I’ll make the following observations:

  • The level selection in each stored dimension is encoded as a number
  • A level selection of zero is always encoded as a zero, for any dimension
  • The numbers representing the level selections for each stored dimension are added up to calculate the final view ID
  • The possible encoded values for the earlier (higher, in the Outline Editor view) dimensions are smaller numbers than the possible encoded values for later stored dimensions, so the final view ID is always unambiguous. This isn’t precisely how it’s done, but imagine the first dimension’s selection being encoded in the least significant bits of a binary number, with later dimensions encoded in successively more significant bits (there’s a sketch of this idea just after this list)
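
To make those bullet points a bit more concrete, here’s a small Python sketch of that style of encoding. To be clear, this is my own illustration of the general idea, not the actual Essbase formula; the function name and the place-value weighting are purely assumptions for demonstration.

# Conceptual sketch only - NOT the real Essbase view ID calculation.
# It mimics the properties listed above: a level 0 selection encodes as 0,
# each dimension's encoded value is added in, and later dimensions occupy
# bigger place values so the final sum is unambiguous.
def encode_view_id(level_selection, stored_levels_per_dim):
    view_id = 0
    weight = 1
    for level, level_count in zip(level_selection, stored_levels_per_dim):
        view_id += level * weight       # level 0 contributes nothing
        weight *= level_count + 1       # later dimensions get bigger place values
    return view_id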

There’s a really handy if not-totally-intuitive consequence of all this:

If you add a new stored dimension to a cube as the last dimension, any existing view IDs are still valid, and represent the same set of level selections from existing dimensions plus level zero in the new dimension.

This set of aggregate views is probably a really good starting point for further optimization, and likely a whole lot better than starting from scratch if you have spent a bunch of time crafting those view selections. It may even already be sufficient for good query performance.
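
Continuing the hypothetical sketch from above (it reuses encode_view_id, and the level counts are made up), you can see why appending a new dimension selected at level zero leaves every existing ID untouched:

# Continuing the sketch above - hypothetical stored level counts only.
dims = [3, 2, 4]
old_id = encode_view_id([1, 0, 2], dims)
# Append a new last dimension (5 stored levels here) selected at level 0...
new_id = encode_view_id([1, 0, 2, 0], dims + [5])
# ...and the encoded view ID comes out identical.
assert old_id == new_id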

There is one small wrinkle: the above doesn’t quite mean that the existing aggregate view definition script can be used as-is, because the outline ID will still change when a new dimension is added, and Essbase will refuse to process a script with a non-matching outline ID. The solution is to get the new outline ID with MaxL. It’s included in the output of the following command (and it doesn’t matter whether the cube actually has any aggregate views or not):

query database APP.DB list existing_views;

Then manually update the existing script to replace the old outline ID with the new one.
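
If you’d rather not hand-edit the file, here’s a minimal Python sketch that swaps out the second line of the .csc (which, per the layout shown above, holds the outline ID). The file name and the new ID are just placeholders; substitute the values from your own environment.

# Minimal sketch: replace the outline ID (line 2 of the .csc, per the
# layout shown above) with the value reported by the MaxL query.
# The path and ID below are placeholders.
from pathlib import Path

def patch_outline_id(csc_path, new_outline_id):
    lines = Path(csc_path).read_text().splitlines()
    lines[1] = str(new_outline_id)      # the second line holds the outline ID
    Path(csc_path).write_text("\n".join(lines) + "\n")

# patch_outline_id("my_views.csc", 1234567890)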


3 thoughts on “Add a Dimension to ASO without Breaking Aggregate Views”

  1. Genius!

    I never thought of this – guess we should speak more often. To wit, we really should finish the spreadsheet to create .csc files. All we really needed was a better understanding of how alternate hierarchies were handled (and, of course, which hierarchy was primary, secondary, etc.). I think KR said he might be able to help with that.

    But of course you know I will not let you entirely off the hook without saying: why do your cubes require carefully generated or query-based aggregation? Why don’t you or anyone else reading this take me up on my ASO Challenge? http://ntuple.net/theasochallenge.html

  2. Thanks for the post and the good idea for working around this problem.
    Slightly different issue…
    What happens when there is an additional level in the structure? Say the deepest generation is 4 and that level is now detailed further into generation 5.
    The users’ reports will not change that much, but I think the aggregates no longer fit that dimension. Is there a way to fix that?
    A converter back and forth?

  3. Hi Philip – the short answer is “yes, but not so easily”.

    The most effective but difficult option is to take the algorithm for generating the view ID and generate new view IDs that apply the old selections to the new structure. A co-worker of mine has written PL/SQL code that actually analyzes the metadata, detects when this situation has arisen, and converts the views automatically.

    I’m not sure whether that code can tell the difference between an additional “lower” level (i.e. more detail) and a new “higher” level (i.e. a new consolidation) – you might want to choose a different level for your new view in those two situations.

    A simpler but obviously less comprehensive solution is to use query tracking with a set of queries at appropriate levels, and run those to drive view selection each time.
