The EPM Week In Review: Week Ending May 28, 2016

Clearly people are finishing up their KScope content, because this was a pretty busy week.

Patches and Updates:

FCM has been released.

The June update to PBCS appears to have been delayed until July.  <Insert Angry Face Here>  More on this as it becomes available.

New Blog Posts:

I finally got a blog post in!  Not only that, but I’ll probably have another one next week!  This week began the months-in-the-making Essbase Performance Series!  You might be able to tell by the number of exclamation points used that I am excited about this series…

Jason shows us how to create drillable columns in Drillbridge.  Great post…that was totally on my list to blog about!  Granted, that list is a mile long and growing.

Amit shows us how to use FDMEE, Oracle EBS, and multiple Accounting Entity Groups.  This makes for a cool combination.

Robert has a great post on FDMEE and Hybrid Cloud.  I’ll be referencing this on occasion.

Celvin has a post about his new OTN article on configuring AD security with PBCS.  This is a great article for those considering SSO with PBCS.

KScope16 symposiums have been announced.  You can find the full EPM list here.  I won’t be there Sunday, so I’ll be missing some great content.

Danny (some DBA…sorry, this is part of a blog hop, so I have no idea who this person is) posts about EPM Kscope content of interest.  Danny is actually the Kscope conference chair.

Speaking of a blog hop…Cameron is posting about APEX.  This is another of the many topics I wish I had more time to spend playing with.

KScope must be soon or something, because Gary has a blog post about it too!  This one is specific to the previously mentioned symposiums.

Other News:

Kscope presentations must be uploaded by May 28th (TODAY!!!) or you will lose your free registration and your speaking slot.  They mean it!  Handy links:

Kscope16 Slide Template (PPT)

Instructions on Uploading Your Presentation (PDF)

Required Opening and Closing Slides (PNG)

As of this blog post, I have one uploaded…and one that I should totally consider starting (kidding…I think).

Essbase Performance Series: Part 1 – Introduction

Welcome to the first post in a likely never-ending series about Essbase performance.  To be specific, this series is designed to help us understand how the choices we make in Essbase hardware selection affect Essbase performance.  We will attempt to answer questions like: Hyper-Threading or not?  SSDs or SAN?  Physical or virtual?  Some of these things we can control, some of them we can’t.  The other benefit of this series will be the ability to justify changes in your organization and environment.  If you have performance issues and IT wants to know how to fix them, you will have hard facts to give them.

As I started down the path of preparing this series, I wondered why there was so little information on the internet in the way of Essbase benchmarks.  I knew that part of the reason is that every application is different and has significantly different performance characteristics.  But as I began to build out my supporting environment, I realized something else: this is a time-consuming and very expensive process.  For instance, comparing physical to virtual requires hardware dedicated to the purpose of benchmarking.  That isn’t something you find at many, if any, clients.

As luck would have it, I have been able to put together a lab that allows me to do all of these things.  I have a server dedicated to the purpose of Essbase benchmarking.  This server will go back and forth between physical and virtual and various combinations of the two.  Before we get into the specifics of the hardware we’ll be using, let’s talk about what we hope to accomplish from a benchmarking perspective.

There are two main areas that we care about when it comes to Essbase performance.  First, we have the back-end performance of Essbase calculations: when I run an agg or a complex calculation, how long does it take?  Second, we have the front-end performance of Essbase retrieves and calculations.  This is a combination of how long end-user queries take to execute and how long user-executed calculations take to complete.  So what will we be testing?

Storage Impact on Back-End Essbase Calculations

We’ll take a look at the impact our storage options have on Essbase calculation performance.  Storage is our biggest bottleneck, so we’ll start here to find the fastest solution we can use for the next set of benchmarks.  We’ll compare each of our available storage types three ways: a physical Essbase server, a virtual Essbase server using VT-d and direct-attached storage, and a virtual Essbase server using data stores.  Here are the storage options we’ll have to work with:

  • Samsung 850 EVO SSD (250GB) on an LSI 9210-8i
  • Four (4) Samsung 850 EVO SSDs in RAID 0 on an LSI 9265-8i (250GB x 4)
  • Intel 750 NVMe SSD (400GB)
  • Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDDs (300GB x 12) in RAID 1, RAID 1+0, and RAID 5
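As a quick sanity check before the real benchmarks, a crude sequential-write test can confirm that a volume is behaving the way we expect.  Here’s a rough Python sketch (the target path and sizes are just placeholders, and this is no substitute for a proper tool like CrystalDiskMark):

```python
import os
import tempfile
import time

def sequential_write_mb_s(path, total_mb=256, block_kb=1024):
    """Crude sequential-write test: write total_mb of data in block_kb
    chunks, fsync to force it to disk, and return throughput in MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # don't let the OS cache flatter the number
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# Point this at the volume under test (placeholder path shown)
target = os.path.join(tempfile.gettempdir(), "essbase_io_test.bin")
print(f"{sequential_write_mb_s(target, total_mb=64):.1f} MB/s")
```

Run it once per storage option (SATA SSD, RAID 0 SSD, NVMe, SAS array) with the path pointed at each volume to get a rough ordering before the real benchmarks.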

CPU Impact on Back-End Essbase Calculations

Once we have determined our fastest storage option, we can turn our attention to our processors.  The main thing that we can change as application owners is the Hyper-Threading setting.  The modern Intel processors found at virtually all Essbase clients support it, but conventional wisdom tells us that it doesn’t work out very well for Essbase.  I would like to know what the cost of this setting is and how we can best work around it.  ESXi (by far the most common hypervisor) even gives us some flexibility with this setting.
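As a starting point, it’s handy to detect whether Hyper-Threading even appears to be on for a given box by comparing logical CPUs to physical cores.  A small best-effort sketch (the /proc/cpuinfo parsing is Linux-only and degrades gracefully elsewhere):

```python
import os

def logical_cpus():
    """Logical CPU count as the OS reports it (includes HT siblings)."""
    return os.cpu_count() or 1

def physical_cores_linux():
    """Best-effort physical core count from /proc/cpuinfo (Linux only);
    returns None where that file doesn't exist."""
    try:
        cores = set()
        phys = core = None
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":", 1)[1].strip()
                elif not line.strip() and phys is not None and core is not None:
                    cores.add((phys, core))  # one entry per (socket, core) pair
                    phys = core = None
        if phys is not None and core is not None:
            cores.add((phys, core))
        return len(cores) or None
    except OSError:
        return None

logical = logical_cpus()
physical = physical_cores_linux()
print(f"logical CPUs: {logical}, physical cores: {physical}")
if physical and logical == 2 * physical:
    print("Hyper-Threading appears to be enabled.")
```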

Storage Impact on Front-End Essbase Query Performance

This one is a little more difficult.  Back-end calculations are easy to benchmark: you make a change, you run the calculation, you check the time it took to execute.  Easy.  Front-end performance requires user interaction, and consistent user interaction at that.  So how will we do this?  I can neither afford LoadRunner nor spare the time to learn such a complex tool.  Again, as luck would have it, we have another option.  Our good friends at Accelatis have graciously offered to let us use their software to perform consistent front-end benchmarks.
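For the back-end side, at least, the stopwatch really can be that simple: wrap the command in a timer.  A minimal Python sketch — the essmsh call in the comment is a placeholder for whatever MaxL script runs your calc:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command and return (elapsed_seconds, return_code)."""
    start = time.perf_counter()
    rc = subprocess.run(cmd).returncode
    return time.perf_counter() - start, rc

# For Essbase this would be something like
#   time_command(["essmsh", "run_agg.msh"])
# where run_agg.msh is a MaxL script that executes the calc under test.
# Demonstrated here with a trivial command:
elapsed, rc = time_command([sys.executable, "-c", "pass"])
print(f"completed in {elapsed:.2f}s (rc={rc})")
```

Run the same calc several times per configuration and keep all the timings — the variance between runs is often as interesting as the average.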

Accelatis has an impressive suite of performance testing products that will allow us to test specific user counts and get query response times so that we can really understand the impact of end-user performance.  I’m very excited to be working with Accelatis.

CPU Impact on Front-End Essbase Query Performance

This is where we can start to learn more about our processors.  Beyond just Hyper-Threading, which we will still test, we can look at how Essbase threads across processors and what impact we can have on that.  Again, Accelatis will be key here as we start to understand how we really scale Essbase.

So what does the physical server look like that we are using to do all of this?  Here are the specs:

  • Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
  • Motherboard: ASRock EP2C602-4L/D16
  • Memory: 128 GB - (16) Crucial 8 GB ECC Registered DDR3 @ 1600 MHz
  • Chassis: Supermicro CSE-846E16-R1200B
  • RAID Controller: LSI MegaRAID Internal SAS 9265-8i
  • Solid State Storage: (4) Samsung 850 EVO 250 GB on LSI SAS in RAID 0
  • Solid State Storage: (2) Samsung 850 EVO 250 GB on Intel SATA
  • Solid State Storage: (1) Samsung 850 EVO 250 GB on LSI SAS
  • NVMe Storage: (1) Intel P3605 1.6TB AIC
  • Hard Drive Storage: (12) Fujitsu MBA3300RC 300GB 15,000 RPM on LSI SAS in RAID 10
  • Network Adapter: Intel X520-DA2 Dual Port 10 Gbps Network Adapter

You can see specs of the full lab supporting all of the testing here.  And now, because I promised benchmarks, here are a few to start with:

Physical Server, Samsung 850 EVO x 4 in RAID 0 on an LSI 9265-8i



Physical Server, Intel 750 NVMe SSD



Well…that’s fast.  In the next post in the series, we’ll benchmark all of our baseline storage performance for physical, virtual with VT-d, and virtual with data stores.  That will serve as the baseline for the post after that about actual Essbase performance.  In the meantime, I’ll also be working towards getting access to some fast network storage to test against all of our direct and virtual options.  For now, let’s try out a graph and see how it looks:

CrystalDiskMark 5.0.2 Read Comparison:


CrystalDiskMark 5.0.2 Write Comparison:


The EPM Week In Review: Week Ending May 21, 2016

This week was a little slow, much like last week.  Unfortunately, Oracle didn’t have any new product releases to keep things interesting.

Patches and Updates:


New Blog Posts:

This week I STILL didn’t manage to get a post in.  I promise that will change next week…I have one ready to go, I just want to add one more thing to it.

Vijay shows us how to use Dynamic SQL in ODI.  I don’t know that anything in ODI is simple, as he suggests.

Opal has a quick tip for Oracle cloud users.  Everything you ever wanted to know about changing your password.

Cameron gives us the last in his 15-part series on PBCS.  This one covers batch automation and compares PBCS to on-prem.

Jason shows off a feature in Dodeca that we all hope one day makes it to Planning.  Dependent Selectors!  Come on Oracle, catch up to Tim Tow!

Gary shows us some screenshots of what he thinks might be the next iteration of EAS.  I’ve heard rumors that we will see EAS in Essbase Cloud Service…and that we won’t.  Maybe we will…kind of?  Who knows?  I also hear it has been delayed yet again!

Other News:

Kscope presentations must be uploaded by May 28th or you will lose your free registration and your speaking slot.  They mean it!  Handy links:

Kscope16 Slide Template (PPT)

Instructions on Uploading Your Presentation (PDF)

Required Opening and Closing Slides (PNG)

The EPM Week In Review: Week Ending May 14, 2016

This week we are a bit more timely!  It does, however, seem that everyone is preparing their first drafts for Kscope, as it was a really slow blogging week.  Luckily, Oracle offset that slowness with the release of two new products in the cloud.

Patches and Updates:

Oracle has finally released Enterprise Planning Cloud (formerly ePBCS).  This is PBCS with all of the modules that we’ve had on-prem for a long time.  Except…it isn’t.  They have taken all of our modules and attempted to wizardize them.  More on this soon.

Oracle also released Financial Consolidation and Close Cloud (formerly FCCS).  This is not HFM in the cloud, but an entirely new tool based on Essbase that can be used for consolidations.

New Blog Posts:

I didn’t manage to get a post in this week as I’ve been heads down on both a metric ton of client work, and laying the foundation for a new blog series.  More on this next week…

Gary has a pair of posts this week.  First, a post on using VBA to get a member formula.  Second, a look at the new Financial Reports patch that just released (.701).

Christian gives us some code to get rid of an orphaned EPMA application.  I may use the code to delete EPMA applications out of principle, orphaned or not.

Speaking of new cloud releases from Oracle, Opal has some great thoughts about these releases.

Cameron has two posts: one of his own about LCM in PBCS and one by guest blogger Chris Rothermel about data loads in PBCS.

Like I said…slow week.  Shoot me a note if I missed a post!

Other News:

This week, we had another North Texas Oracle EPM Lunch.  We had nine people attend and we had a great time.  We are going to try to make this a bi-monthly meet-up.  Join the meet-up group to make sure that you hear about the next one!

The EPM Week In Review: Week Ending May 7, 2016

Welcome to a very late edition of the EPM Week in Review.  Plenty of great stuff this week…sorry for the delay!

Patches and Updates:

FR has been released!  It brings with it features from the cloud like the Web Reporting Studio and charts that don’t look like they came from Lotus 1-2-3.  I’m excited to get this one installed next week.

HFM has been released.  While not as exciting as the FR patch, it does fix a lot of bugs and the security issues…if you follow the instructions very, very carefully.

Neither of these updates has made it to the Proactive Support blog.  This is why everyone in our space should have a Twitter account.

New Blog Posts:

This week I posted about Drill-Through in PBCS and Hyperion Planning Without FDMEE.

Gary covers the topic of limiting users in Smart View.  He covers the different products and their own governors beyond just the APS governors for Smart View.

Amit gives us his summary of the upcoming June PBCS update.  This monster of an update looks like it removes many of the final technical challenges associated with going to the cloud.

Opal also has a write-up on the new PBCS update.  Is it June yet?

Harry has updated his cubeSavvy beta release.  He has included double-click drilling and the ability to cut people off (license file!).  And another piece of software I haven’t had a chance to install yet.  I need more hours in my day.

Query limits are popular this week.  Tim has not one, but two blog posts on this topic as it relates specifically to Essbase.  His second post is more of a follow-up set of questions to himself.

Keith tells us about working with temporary folders in ODI.

DEVEPM has a 2-minute video giving us a sneak peek at their upcoming Kscope presentation on ODI and the EPM stack.

Henri shows off the new FR Web Studio…On-Prem!  Many of us have used this new tool on the cloud, but it is great to see the cloud functionality trickle down to on-prem.

Christian shows us a shortcut to importing financial reports in bulk.  I’ll be trying this out on my next big FR move.

Cameron has a pair of posts as he rounds out his series on PBCS.  First he covers data loads and then he covers calculations.  Good stuff as always.

Other News:

ODTUG has a new award: the ODTUG Innovation Award.  Good luck to all you innovators out there!

There is a North Texas EPM Lunch scheduled for Friday, May 16, 2016 at Seasons 52 in Plano.  We’ll be there from 11AM to 1PM.  Be sure to RSVP if you can make it out:


Drill-Through in PBCS and Hyperion Planning Without FDMEE

While recently debugging an issue with FDMEE, I needed to test drill-through in Hyperion Planning without using FDMEE.  But wait…can you even do that?  I had always planned on showing how to use Drillbridge with Hyperion Planning, but as I was talking with Jason, he mentioned we could even get it working in PBCS.  So how does this work?

Let’s start easy with Hyperion Planning.  If you happen to read Francisco’s blog, you may have already read this post about FDMEE drill-through.  It actually tells us that FDMEE just uses the Essbase drill-through definitions.  This happens to be the exact functionality that Drillbridge uses.  As it happens, if we set up drill-through on an Essbase cube that supports Planning, it just works.  See…easy.  But let’s try it with the Vision cube:

I’ll spare you all of the details of the Drillbridge setup, but we’ll cover a few specifics.  First we’ll set up a test deployment specification:


So you can use all of your regular functions here, but I wanted to keep it super simple for testing purposes.  Next, we need to set up a connection to the Essbase database:


Once we deploy the report, we can take a peek at what it actually produces in Essbase:


So what happens in Planning?

Here we can see that the cell is enabled for drill-through:


When we right-click, we have to click Drill-through to see our choices:

Once we click Drill Through, we should see a list of all of the valid reports for that intersection.  Just like in Essbase, we can see multiple reports if multiple reports are defined:


Finally, we can click on the link and we are redirected to our report in Drillbridge:


So there we have it…drill-through without the use of FDMEE.  The coolest part is that this works everywhere in Planning.  Planning Web Forms, Planning Ad Hoc Grids, Planning Web Forms in Smart View, Planning Ad Hoc in Smart View, and Financial Reports.

But what about PBCS?  As it happens, PBCS works basically the same way as Planning.  The difference is, we can’t directly deploy the drill-through definition through Drillbridge.  So how do we do this?  If we look at the drill-through region defined, all we really need to do is create one manually that will point back to Drillbridge.  We’ll fire up PBCS and find a way…

Without EAS, Oracle has moved many of the features we would normally find there to Calculation Manager.  Open Calculation Manager:


Click on the small Essbase Properties icon:


Find the application to which you wish to add drill-through and click on Drill Through Definitions:

If the database isn’t started, you will get a dialog like this:


Once the Drill Through Definitions dialog is displayed, click the plus sign:


We’ll start by entering a name and the XML contents.  I copied and pasted my on-prem XML from EAS.  Then click Add Region:


Next we add our region (copied and pasted from EAS, changing the year to FY14 for PBCS Vision) and click Save:


Now let’s go see what happened:

Enabled for drill-through!  Now let’s right-click and take it for a spin:


And let’s click Drill Through:


And just like on-prem we see our drill-through report name.  Now let’s click on it and…oh no!


Okay, that might be a little dramatic.  The one downside to this approach is that we are leaving PBCS to come back to an on-prem server.  So it lets us know that this might not be secure.  Let’s just click continue and see our data!


And there we have it…working drill-through to on-prem using something other than FDMEE.  Our very own Hybrid approach.  The one thing to note from above is that the circled area will not function properly.  I’m re-using a report I created for my on-prem version of Vision, so it does show numbers, but in PBCS you will need to turn off anything that references the API.  This also means that upper level drill-through won’t work…yet.  The REST API does give us what we need to enable upper-level drill-through, so I expect this feature to be added in the future.
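For the curious, the REST call I have in mind here is the data-slice export.  The sketch below only builds the JSON grid definition for it; the endpoint (`.../rest/v3/applications/{app}/plantypes/{plantype}/exportdataslice`) and the field names are my reading of Oracle’s EPM REST API documentation, so treat them as assumptions and verify against your release:

```python
import json

def export_slice_payload(pov, columns, rows):
    """Build a grid definition body for the PBCS exportdataslice REST call.
    pov: {dimension: member}; columns/rows: lists of (dimension, [members]).
    Field names assumed from the EPM REST API docs -- verify on your release."""
    return {
        "exportPlanningData": False,
        "gridDefinition": {
            "suppressMissingBlocks": True,
            "pov": {
                "dimensions": list(pov),
                "members": [[m] for m in pov.values()],
            },
            "columns": [{"dimensions": [d], "members": [ms]} for d, ms in columns],
            "rows": [{"dimensions": [d], "members": [ms]} for d, ms in rows],
        },
    }

# Placeholder POV and members from the Vision sample app
body = export_slice_payload(
    pov={"Scenario": "Actual", "Version": "Final", "Year": "FY14"},
    columns=[("Period", ["Jan", "Feb", "Mar"])],
    rows=[("Account", ["4110", "4120"])],
)
print(json.dumps(body, indent=2))
```

A drill-through tool could POST this for the children of the drilled member and stitch the level-0 detail together, which is exactly the kind of upper-level support I expect to see.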

At the end of the day, we have basically three options for Planning drill-through:

FDMEE

Pros:
  • 100% Oracle Product with Oracle Support
  • Standard integration tool for the EPM stack
  • Loads data, audits loads, and provides drill-through

Cons:
  • Requires an additional license from Oracle
  • Does not support drill-through above level 0
  • Can’t bolt it onto an existing application without reloading data
  • The drillable content must exist in FDMEE
  • No ability to change the way the drill-through looks

Drillbridge

Pros:
  • Bolts onto any existing application
  • Insanely fast time to implement
  • Allows for full customization of the drill-through report
  • Data does not technically have to live in Essbase
  • Drill at any level, not just level 0

Cons:
  • Not an Oracle product; though supported by Applied OLAP, this can be a deal-breaker for some companies

Custom Drill-through

Pros:
  • You get total control over how you enable drill-through

Cons:
  • You get to do a mega-ton more work

For Planning, I think Drillbridge is a great alternative to FDMEE.  This is especially true for companies that don’t actually own FDMEE.  And for those of you that need upper-level drill-through, it really is the only choice short of hiring a developer to build you a custom solution.

PBCS is a little bit trickier.  There are still three options available:

FDMEE

Pros:
  • 100% Oracle Product with Oracle Support
  • Standard integration tool for the EPM stack and you likely loaded data to PBCS using it
  • Loads data, audits loads, and provides drill-through
  • You can use on-premise now, or the built-in version

Cons:
  • Does not support drill-through above level 0
  • The drillable content must exist in FDMEE
  • FDMEE on PBCS has a limited number of fields, and those fields have a character limit
  • No ability to change the way the drill-through looks

Drillbridge

Pros:
  • Bolts onto any existing application
  • Insanely fast time to implement
  • Allows for full customization of the drill-through report
  • Data does not technically have to live in PBCS

Cons:
  • Not an Oracle product; though supported by Applied OLAP, this can be a deal-breaker for some companies
  • Does not yet support upper-level drill-through (more on this later)

Custom Drill-through

Pros:
  • You get total control over how you enable drill-through

Cons:
  • You get to do a mega-ton more work

For PBCS, right now, I would generally stick with FDMEE.  Most of us are using it to load data into Planning anyway, so adding additional detail to the import format for drill-through isn’t much in the way of additional work.  However…if you need upper-level drill-through, you are out of luck…for now.  I fully expect that we will see a future release of Drillbridge that includes REST API integration.  This means that at a minimum, it should allow for upper-level drill-through.  When that happens…Drillbridge becomes a more powerful tool for drill-through for PBCS than even FDMEE.