At long last, Oracle Analytics Cloud – the Oracle Cloud offering that includes Essbase Cloud along with BICS and DV – is on sale. I’m going to have a lot to say in more detail about this product over the coming weeks, but as a long-time “Essbase guy”, my first blog post will be aimed at people who already use Essbase on-premises, and some of the questions I think they’ll have: Why would you want to consider Essbase Cloud? How does it differ from the on-premises product? If Essbase Cloud is compelling, how would you move existing applications to test it out?
Why Essbase Cloud?
Essbase Cloud offers all of the “standard” cloud benefits:
- Have someone else manage infrastructure, updates and backups
- Get up and running quickly and cheaply on a subscription model
- Scale infrastructure as needed, and pay accordingly
- If Essbase follows the PBCS model, get new features first and faster
However, Essbase Cloud also provides a different set of functionality to the current on-premises version.
What Are The Differences? On-Prem Plus…
One thing to be clear about is that Essbase Cloud is absolutely not simply on-premises Essbase hosted on someone else’s server. Whilst the core engine might be very similar, there is a completely different interface and a set of features not (currently, at least) found in the on-premises product.
The basic interface will look somewhat familiar to anyone who has used other Oracle EPM/BI cloud products!
Outline editing does not use a conventional treeview control, and the replacement frankly feels a little clunky. While navigation up / down is intuitive, only a member and its siblings are visible at any one time. I can’t imagine enjoying navigating a very large or very complex / deep outline like this.
The exciting new features have been trailed for a while at conferences (Cameron Lackpour and I spoke about some of them at Kscope16) and by Oracle in various fora. First up is “unstructured” data import. The idea here is that analysts can take an Excel spreadsheet, and Essbase will be smart enough to identify the dimensions, levels, attributes, measures and so on. There’s a nice sample file that can be used to demonstrate this (I think Opal Alapat’s blog post linked below shows it in action).
Essbase looks at a spreadsheet like the sample and, via some combination of the column headers and content analysis, comes up with its best guess at how to turn it into a cube. The theory here is that analysts will be able to spin off cubes very quickly like this, either to prototype models or for short-term requirements (the fact that Data Visualization is bundled with OAC means that a user could spin off a quick cube from an Excel sheet and produce some visualizations without development help). But one thing to note is that this isn’t completely unstructured data in the sense that the Big Data people use the term. While it is much easier to feed this to Essbase than it is to generate a bunch of individual dimension files, build load rules, and so on, it’s really still very “structured”. It doesn’t (yet?) quite get to the point of allowing an analyst to be completely ignorant of how Essbase works and yet still be able to build a cube from an analytic spreadsheet (which I think is the “dream”).
Second is a new Excel-based cube template format. This is much more structured than the format shown above, but it has the advantage of also being much more flexible. You definitely need to know something about Essbase to work this format (for example – it uses member property codes), but I quite like being able to work in Excel and then import into the Essbase Cloud server in one step. Easier than writing a MaxL script or running a bunch of load rules, and I suspect some of the power users who can work with EAS will still prefer staying in Excel.
A nice new addition to this functionality since the previews shown last year is a Smart View extension called “Cube Designer”. Cube Designer (amongst other things) works with spreadsheets in this special template format, pushes them to the cloud on demand, provides a nice treeview based on the content of each sheet, and offers various other “helpful” functions. Between these spreadsheets, Cube Designer and Smart View, there is truly an “all-in-Excel” environment for developing, loading and reporting from Essbase cubes. Cube Designer is something I’ll definitely be returning to in future posts.
Third, we have “Scenario Management”, which provides sandboxing and lightweight approvals. Scenario management has been built intelligently so that only changed data is captured, rather than a full data copy being made for every scenario. This is very efficient, and permits very large numbers of scenarios.
Fourth, run-time substitution variables actually work from Smart View (i.e. users can launch a calculation script and be prompted for member selections). This feature is taking a while to appear in the on-premises version! These last two taken together hint at some “Planning Lite” type applications that could be built in Essbase Cloud.
What Are The Differences? On-Prem Minus…
So having briefly recapped important new / added functionality in Essbase Cloud, I have to note some “missing pieces”.
- When cubes are built using the new methodologies, the system creates load rules. However, there is currently no GUI load rule editor in the Cloud interface.
- There is no facility for partitioning, at least in the interface (I haven’t attempted to import partitions from LCM)
- There is no SQL connectivity
- Some more advanced features aren’t visible / available in the GUI – to give one example, there is no option to load incrementally / merge slices in an ASO cube
- There is no MaxL (right now)
Taken together, these limitations will definitely restrict the types of systems that can be built, and the set of systems that can be migrated successfully from on-premises installations.
How Would I Get There?
Suppose you have existing applications that you’re interested in migrating over to the cloud, perhaps to “kick the tires”, perhaps because they would benefit from sandboxing, or perhaps because they are smaller applications cluttering an overloaded server.
Well, step one is to talk to your friendly Oracle sales representatives. They are a reticent bunch, but they may be willing to sell you a cloud product if you twist their arms. 🙂 I’m not going to get into the licensing or pricing model here for reasons of space. Once you start talking about shelving on-premises license capacity in exchange for cloud license credit things get complicated fast.
But with a working Oracle Analytics Cloud instance, there are several ways to get an existing cube up to the cloud. This is not intended to be an exhaustive, step-by-step walkthrough, but a summary of the options with some thoughts on their relative merits.
The dbxtool is one of a number of utilities that can be downloaded directly from the Essbase Cloud service’s Utilities screen. The purpose of the tool is to connect to an existing on-premises cube, and generate an Excel spreadsheet in the structured template format for upload to Essbase Cloud.
After downloading the utility, it is run from the command line with parameters to point it at the on-premises “source” cube and a name for the Excel file to be created.
The utility creates an .xlsx file that looks just like the example shown above. This is then imported to the Essbase Cloud server, using the Import option on the home page view.
There is a “special” version of LCM as another downloadable utility in Essbase Cloud. This utility, like dbxtool, is intended to connect to on-premises instances and then produce the required artifacts for import to Essbase Cloud. The export syntax is fairly self-explanatory:
```
./EssbaseLCM.sh export -server myserver:myport -user myuser -password mypassword -application myapp -zipfile myzip
```
In theory, at least, this option should produce a more comprehensive set of artifacts (e.g. calc scripts, and, I have heard, perhaps even partitioning – which does not currently have a UI). LCM exports from Essbase Cloud can be triggered with the command line interface (see below in the “Automation” section).
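With more than one application to move, the export step is easy to script. Here is a minimal sketch, assuming the utility has been downloaded to the working directory – the server name, credentials and application names are placeholders:

```shell
#!/bin/sh
# Sketch: batch-export several on-premises applications with the downloadable
# LCM utility, producing one zip per application. Server, credentials and the
# application names below are placeholders.
LCM=${LCM:-./EssbaseLCM.sh}

export_app() {
    # $1 = application name; writes $1.zip in the current directory
    "$LCM" export -server myserver:myport -user myuser -password mypassword \
        -application "$1" -zipfile "$1.zip"
}

# Run only where the utility has actually been downloaded:
if [ -x "$LCM" ]; then
    for app in Sample Demo; do
        export_app "$app"
    done
fi
```

One zip per application keeps the subsequent imports independent of each other.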
If you have transferred a cube using a template spreadsheet (created by the dbxtool, for example), you can still upload files – data files, rules files, calculation scripts and so on – manually. This is practical for small numbers of files. Incidentally, the Essbase Cloud menus are very “context sensitive”. For example, the Files option only “ungreys” when a database is selected. It’s easy to get lost and I find myself clicking “Home” a lot!
Security migration is going to be interesting, because role definitions are very different in Essbase Cloud. Currently there are only three roles:
- Service Administrator – the top-level administration role
- Power User – can create applications, share access to these applications
- User – access granted to specific applications only (including filters and calc scripts), no create privileges
I have to say I am a big fan (given the overall philosophy of Essbase Cloud of re-empowering analysts) of having a role that permits the creation of new “personal” applications without being an “overall” system administrator. This doesn’t exist in on-premises Essbase.
Automation of existing apps shipped over from on-premises is going to be very interesting. Essbase Cloud has a concept of “Jobs” (data loads, clears, dimension builds, script executions) but no built-in scheduler.
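Until a scheduler appears, the pragmatic workaround is an external one – for example, cron on whichever machine hosts your automation scripts. A sketch of a crontab entry (the script path and timing are my placeholders, not anything the service provides):

```
# Hypothetical crontab entry: run a nightly load script at 02:00 every day.
# The script path is a placeholder for whatever wraps your cloud utility calls.
0 2 * * * /home/oracle/scripts/nightly_load.sh >> /home/oracle/logs/nightly_load.log 2>&1
```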
The only current option for automation (excluding the Java API, which I’m led to believe can also connect to Essbase in the cloud) is another utility called the “Command Line Tool” / EssCLI – the actual script is called “esscs”. This communicates with Essbase Cloud via a REST API. Because of this, each command is a separate invocation of the utility. For example, to log in, upload a flat file, trigger a dataload and logout you would make four calls to invoke esscs:
- log in
- upload file
- run data load
- log out
This isn’t like e.g. MaxL, where everything can run within a single essmsh process. So the above process looks like this in esscs:
```
./esscs.sh login -user user -password password -url server:port/essbase
Details: user "user" logged in

./esscs upload -application Sample -db Basic -file Data1.txt
File "Data1.txt" Uploaded

./esscs dataload -application Sample -db Basic -file Data1.txt -abortOnError true
..Status: 200
Details: Completed

./esscs logout
Details: user "user" logged out
```
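Strung together as a batch script, the same sequence might look like the following sketch – the server, credentials and file name are placeholders copied from the session above, with a bail-out (and logout) on the first failure:

```shell
#!/bin/sh
# Minimal sketch of a batch wrapper around the esscs session above: log in,
# upload, load, log out, stopping if any step fails. Server, credentials and
# the data file name are placeholders.
ESSCS=${ESSCS:-./esscs.sh}

load_data() {
    # $1 = application, $2 = database, $3 = data file
    "$ESSCS" login -user user -password password -url server:port/essbase || return 1
    "$ESSCS" upload -application "$1" -db "$2" -file "$3" || { "$ESSCS" logout; return 1; }
    "$ESSCS" dataload -application "$1" -db "$2" -file "$3" -abortOnError true || { "$ESSCS" logout; return 1; }
    "$ESSCS" logout
}

# Run only where the downloaded CLI is actually present:
if [ -x "$ESSCS" ]; then
    load_data Sample Basic Data1.txt
fi
```

Because each esscs command is a separate process, the script has to check each exit status itself – there is no session-level error handling of the kind essmsh gives you.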
What doesn’t currently work (although the command is present in esscs and documented in its internal help) is passing in a MaxL script. This is not inconsequential, because the other esscs commands are limited to very basic features (push files around, trigger loads and calcs).
There is a lot that MaxL can do that these commands alone do not enable – to give just a few examples, triggering a slice merge, loading via buffers, or defining aggregations, which are vital capabilities for large ASO cubes. At present I don’t see any way that applications depending on these features could be migrated to the cloud.
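For comparison, here is what one of those operations looks like on-premises today: an ASO slice merge is a one-line MaxL statement, typically fired through essmsh. This is only a sketch of the on-premises approach (server name and credentials are placeholders), shown to illustrate what esscs cannot yet trigger:

```shell
#!/bin/sh
# Sketch of the on-premises equivalent: pipe a MaxL login and slice-merge
# statement into the essmsh shell. Server name and credentials are
# placeholders; there is currently no esscs command that does this.
ESSMSH=${ESSMSH:-essmsh}

merge_slices() {
    # $1 = App.Db whose incremental slices should be merged
    echo "login myuser mypassword on myserver; alter database $1 merge all data; logout;" | "$ESSMSH"
}

# Run only where an on-premises MaxL shell is actually installed:
if command -v "$ESSMSH" >/dev/null 2>&1; then
    merge_slices ASOsamp.Sample
fi
```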
Incidentally, I do enjoy the fact that I can run this natively on my Mac, rather than having to fire up the Windows VM. There are .bat and .sh versions of all utilities mentioned above.
Phew. This was a very brief introductory post, but I’m looking forward to sharing more detailed information over the next few weeks. Oracle should be getting the documentation on stream shortly, which will help answer some outstanding questions.
If you want to learn more about OAC, check out the blog hop participant posts below! What is a blog hop? A blog hop is a group of bloggers who all get together to blog on a particular topic. We share each other’s blog posts in an attempt to share a lot of great information in one place. Enjoy!
- Opal Alapat, interRel Consulting
- Stewart Bryson, RedPill Analytics
- Brian Dandeneau, interRel Consulting
- Tim German, Qubix
- Cameron Lackpour, ARC EPM Consulting
- Matt Milella, Oracle
- Glenn Schwartzberg, interRel Consulting
- Summer Watson, interRel Consulting
- Sarah Zumbrum, Oracle
Additionally, Kscope17 in June will include a bunch of sessions on OAC in general, and Essbase Cloud in particular, from such luminaries as Kumar Ramaiyer, John Maloney, Cameron Lackpour, Ronnie Tafoya, and, um, me. See kscope17.com for more details!