I’ve spent a lot of time lately on homelab topics, so I thought I would take a break and put together a post on an EPM topic! Today we’ll be talking about PBCS backups.
Why Do We Need PBCS Backups?
You might be thinking, “Why do I need PBCS backups when Oracle does that for me?” That’s an excellent question. The problem is that while Oracle does perform nightly backups of PBCS, it overwrites that backup each night, so at any given time you only have a single backup. To make things even worse, Oracle places a 150GB size limit on your PBCS instance, so even if we kept multiple backups on our pod, we would eventually start losing them to the size limit and data retention policies.
So what do we do? We generate a new backup every night and download it to a local server. The good news is that you almost certainly already have a local server running EPM Automate, the automation tool for Oracle’s EPM Cloud suite. You can use EPM Automate to load data, execute calculations, update metadata, and…perform backups. So, we’ve established that we likely need more than a single night of backups, but how many do we need? That depends on a few things, like the size of your applications and the frequency of change. For our example, we will keep 30 days of daily backups.
Batch vs. PowerShell
Now that we have determined what we are backing up and how many backups we need to keep, we can move on to actually performing the backups. With EPM Automate, we have two commonly used options. First, we have the old-school method of a batch file. Batch files are great because they just work, and you can find a ton of information on the web about how to do things. Batch files are, however, very limited in their ability to do things like e-mail notifications and remote calls without external tools. That brings us to PowerShell. PowerShell is essentially a batch environment with the full set of .NET programming capability along with other goodies not directly from .NET. What does that mean exactly? It means there is very little I can’t do in PowerShell.
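As a quick illustration of that gap, here is a minimal sketch of the kind of e-mail notification PowerShell can send with nothing but a built-in cmdlet; the mail server and addresses are placeholders, not values from any real environment:
#Hypothetical notification example: the SMTP server and addresses below are placeholders
Send-MailMessage -SmtpServer "smtp.yourcompany.com" -From "pbcsbackup@yourcompany.com" -To "epmadmin@yourcompany.com" -Subject "PBCS Backup Complete" -Body "The nightly PBCS backup finished."
We’ll save the actual notification work for the next part of the series, but this gives you a sense of why PowerShell wins this comparison.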
Directory Configuration
Before we configure anything, we need to get a folder structure put together to support scripting, logging, and the actual backup files. You may already have a structure for your automation processes, but for our example, it will look something like this:
- C:\Oracle
- C:\Oracle\Automation
- C:\Oracle\Automation\Backup
- C:\Oracle\Automation\Log
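If you’re starting from scratch, a couple of lines of PowerShell will create that structure for you (just a sketch; adjust the drive and folder names to your own standards):
#Create the automation folder structure (-Force also creates any missing parent folders)
New-Item -ItemType Directory -Path "C:\Oracle\Automation\Backup" -Force
New-Item -ItemType Directory -Path "C:\Oracle\Automation\Log" -Force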
EPM Automate Configuration
EPM Automate is a great tool, but we do need to perform a little bit of setup to get going. For instance, while EPM Automate supports plain text passwords, that wouldn’t pass muster with most IT security groups. So before we get into PowerShell, let’s encrypt our password. This is a fairly easy process. We’ll start up a command prompt and change directory to our EPM Automate bin directory:
cd\
cd Oracle\EPM_Automate\bin
Once we are in the right directory, we can encrypt our password:
epmautomate.bat encrypt YourPasswordGoesHere PickYourKey c:\Oracle\Automation\password.epw
Here are the parameters:
- Command – the command EPM Automate will execute
- Password – the password of the account you plan to use
- Key – you specify anything you want to use to encrypt the password
- Password File – the full path and file name of the password file that will be generated (in our case, c:\Oracle\Automation\password.epw)
Once we execute the command, we should have our password file so that we can continue. It should look something like this:

Backing Up PBCS with PowerShell
For the first part of this mini-series, we’ll stick with just a basic backup that deletes older backups. In the next part of the series, we’ll go deeper into error handling and notifications. Here’s the code…
Path Variables
#Path Variables
$EpmAutomatePath = "C:\Oracle\EPM_Automate\bin\epmautomate.bat"
$AutomationPath = "C:\Oracle\Automation"
$LogPath = "C:\Oracle\Automation\Log"
$BackupPath = "C:\Oracle\Automation\Backup"
We’ll start by defining our path variables. This will include paths to EPM Automate, our main automation directory, our log path, and our backup path.
Date Variables
#Date Variables
$DaysToKeep = -30
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysToKeep)
$TimeStamp = Get-Date -format "yyyyMMddHHmm"
$LogFileName = "Backup" + $TimeStamp + ".log"
Next we’ll define all of our date-related variables. This includes our days to keep (negative on purpose, since we are going back in time), our current date, the math that gets us back to our deletion cutoff, a timestamp that will be used for various things, and finally our log file name based on that timestamp.
PBCS Variables
#PBCS Variables
$PBCSdomain = "yourdomain"
$PBCSurl = "https://usaadmin-test-yourdomain.pbcs.us2.oraclecloud.com"
$PBCSuser = "yourusername"
$PBCSpass = "c:\Oracle\Automation\password.epw"
Now we need to set our PBCS variables. This will include our domain, the URL to our instance of PBCS, the username we’ll use to log in, and the path to the password file that we just finished generating.
Snapshot Variables
#Snapshot Variables
$PBCSExportName = "Artifact Snapshot"
$PBCSExportDownloadName = $PBCSExportName + ".zip"
$PBCSExportRename = $PBCSExportName + $TimeStamp + ".zip"
We’re nearing the end of variables as we define our snapshot specific variables. These variables will tell us the name of our export, the name of the file that we are downloading based on that name, and the new name of our snapshot that will include our timestamp.
Start Logging
#Start Logging
Start-Transcript -path $LogPath\$LogFileName
I like to log everything so that if something does go wrong, we have a chance to figure it out after the fact. This uses the combination of our log path and log file name variables.
Log Into PBCS
#Log into PBCS
Write-Host ([System.String]::Format("Login to source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "login" $PBCSuser $PBCSpass $PBCSurl $PBCSdomain
We can finally log into PBCS! We’ll start by displaying our action and the current system time. This way we can see how long things take when we look at the log file. We’ll then issue the login command using all of our variables.
Create the Snapshot
#Create PBCS snapshot
Write-Host ([System.String]::Format("Export snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath exportsnapshot $PBCSExportName
Again we’ll display our action and current system time. We then kick off the snapshot process. We do this because we want to ensure that we have the most recent snapshot for our archiving purposes.
Download the Snapshot
#Download PBCS snapshot
Write-Host ([System.String]::Format("Download snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath downloadfile $PBCSExportName
Once the snapshot has been created, we’ll move on to downloading the snapshot after we display our action and current system time.
Archive the Snapshot
#Rename the file using the timestamp and move the file to the backup path
Write-Host ([System.String]::Format("Rename downloaded file: {0}", [System.DateTime]::Now))
Move-Item $AutomationPath\$PBCSExportDownloadName $BackupPath\$PBCSExportRename
Once the file has been downloaded, we can then archive the snapshot to our backup folder as we rename the file.
Delete Old Snapshots
#Delete snapshots older than $DaysToKeep
Write-Host ([System.String]::Format("Delete old snapshots: {0}", [System.DateTime]::Now))
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item
Now that we have everything archived, we just need to delete anything older than our DateToDelete variable.
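If you’d like to verify what the cleanup will remove before letting it actually delete anything, you can tack -WhatIf onto the Remove-Item call for a dry run and take it back off once you’re comfortable with the results:
#Dry run: lists the snapshots that would be deleted without actually removing them
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item -WhatIf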
Log Out of PBCS
#Log out of PBCS
Write-Host ([System.String]::Format("Logout of source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "logout"
We’re almost done and we can now log out of PBCS.
Stop Logging
#Stop Logging
Stop-Transcript
Now that we have completed our process, we’ll stop logging.
The Whole Shebang
#Path Variables
$EpmAutomatePath = "C:\Oracle\EPM_Automate\bin\epmautomate.bat"
$AutomationPath = "C:\Oracle\Automation"
$LogPath = "C:\Oracle\Automation\Log"
$BackupPath = "C:\Oracle\Automation\Backup"
#Date Variables
$DaysToKeep = -30
$CurrentDate = Get-Date
$DatetoDelete = $CurrentDate.AddDays($DaysToKeep)
$TimeStamp = Get-Date -format "yyyyMMddHHmm"
$LogFileName = "Backup" + $TimeStamp + ".log"
#PBCS Variables
$PBCSdomain = "yourdomain"
$PBCSurl = "https://usaadmin-test-yourdomain.pbcs.us2.oraclecloud.com"
$PBCSuser = "yourusername"
$PBCSpass = "c:\Oracle\Automation\password.epw"
#Snapshot Variables
$PBCSExportName = "Artifact Snapshot"
$PBCSExportDownloadName = $PBCSExportName + ".zip"
$PBCSExportRename = $PBCSExportName + $TimeStamp + ".zip"
#Start Logging
Start-Transcript -path $LogPath\$LogFileName
#Log into PBCS
Write-Host ([System.String]::Format("Login to source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "login" $PBCSuser $PBCSpass $PBCSurl $PBCSdomain
#Create PBCS snapshot
Write-Host ([System.String]::Format("Export snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath exportsnapshot $PBCSExportName
#Download PBCS snapshot
Write-Host ([System.String]::Format("Download snapshot from source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath downloadfile $PBCSExportName
#Rename the file using the timestamp and move the file to the backup path
Write-Host ([System.String]::Format("Rename downloaded file: {0}", [System.DateTime]::Now))
Move-Item $AutomationPath\$PBCSExportDownloadName $BackupPath\$PBCSExportRename
#Delete snapshots older than $DaysToKeep
Write-Host ([System.String]::Format("Delete old snapshots: {0}", [System.DateTime]::Now))
Get-ChildItem $BackupPath -Recurse | Where-Object { $_.LastWriteTime -lt $DatetoDelete } | Remove-Item
#Log out of PBCS
Write-Host ([System.String]::Format("Logout of source: {0}", [System.DateTime]::Now))
&$EpmAutomatePath "logout"
#Stop Logging
Stop-Transcript
The Results
Once you execute the PowerShell script, you should see something like this:

Conclusion
There we have it…a full process for backing up your PBCS instance. The last step would be to set up a scheduled task to execute once a day, avoiding your maintenance window.
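If you’d rather build that scheduled task in PowerShell as well, here is a rough sketch using the built-in ScheduledTasks cmdlets (available on Windows Server 2012 and later); the script name, run time, and service account are placeholders to adjust around your own maintenance window:
#Sketch: run the backup script daily at 3:00 AM (PBCSBackup.ps1 is a placeholder name for the script above)
$Action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Oracle\Automation\PBCSBackup.ps1"
$Trigger = New-ScheduledTaskTrigger -Daily -At 3am
#In practice, supply the service account password securely rather than hard-coding it here
Register-ScheduledTask -TaskName "PBCS Nightly Backup" -Action $Action -Trigger $Trigger -User "YOURDOMAIN\svcaccount" -Password "YourServiceAccountPassword"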
Brian Marshall
August 29, 2018
There are no fewer than three blog posts about running a batch script from Workspace floating around the internet. I believe the first originated from Celvin here. While this method works great for executing a batch, you are still stuck with a batch. Not only that, but if you update that batch, you have to go through the process of replacing your existing batch. This sounds easy, but if you want to keep your execution history, it isn’t. Today we’ll use a slightly modified version of what Celvin put together all those years ago. Instead of stopping with a batch file, we’ll execute PowerShell from Workspace.
Introduction to PowerShell
In short, PowerShell is a powerful shell built into most modern versions of Windows (both desktop and server) meant to provide functionality far beyond your standard batch script. Imagine a world where you can combine all of the VBScript that you’ve linked together with your batch scripts. PowerShell is that world. PowerShell is packed full of scripting capabilities that mean things like sending e-mail no longer require anything external (except a mail server, of course). Basically, you have the power of .NET in batch form.
First an Upgrade
We’ll start out with a basic batch, but if you look around at all of the posts available, none of them seem to be for 11.1.2.4. So, let’s take his steps and at least give them an upgrade to 11.1.2.4. Next, we’ll extend the functionality beyond basic batch files and into PowerShell. First…the upgrade.
Generic Job Applications
I’ll try to provide a little context along with my step-by-step instructions. You are probably thinking…what is a Generic Job Application? Well, that’s the first thing we create. Essentially we are telling Workspace how to execute a batch file. To execute a batch file, we’ll use cmd.exe…just like we would in Windows. Start by clicking Administer, then Reporting Settings, and finally Generic Job Applications:

This will bring up a relatively empty screen. Mine just has BrioQuery (for those of you that remember what that means…I got a laugh). To create a new Generic Job Application, we have to right-click pretty much anywhere and click Create new Generic Application:

For product name, we’ll enter Run_Batch (or a name of your choosing). Next we select a product host, which will be your R&A server. Command template tells Workspace how to call the program in question. In our case, we want to call the program ($PROGRAM) followed by any parameters we wish to define ($PARAMS). All combined, our command template should read $PROGRAM $PARAMS. Finally we have our Executable. This is what Workspace uses to execute our future job. In our case, as previously mentioned, this will be the full path to cmd.exe (%WINDIR%\System32\cmd.exe). We’ll click OK and then we can move on to our actual batch file:

The Batch
Now that we have something to execute our job, we need…our job. In this case we’ll use a very simple batch script with just one line. We’ll start by creating this batch script. The code I used is very simple…call the PowerShell script:
%WINDIR%\system32\WindowsPowerShell\v1.0\powershell.exe e:\data\PowerShellTest.ps1
So, why don’t I just use my batch file and perform all of my tasks? Simple…PowerShell is unquestionably superior to a batch file. And if that simple reason isn’t enough, this method also lets us separate the job we are about to create from the actual code we have to maintain in PowerShell. So rather than making changes and having to figure out how to swap out the updated batch, we have this simple batch that calls something else on the file system of the reporting server. I’ve saved my code as BatchTest.bat and now I’m ready to create my job.
The Job
We’ll now import our batch file as a job. To do this we’ll go to Explore, find a folder (or create a folder) that we will secure for only people that should be allowed to execute our batch process. Open that folder, right-click, and click Import and then File As Job…:

We’ll now select our file (BatchTest.bat) and then give our job a name (PowerShellTest). Be sure to check Import as Generic Job and click Next:

Now we come full circle as we select Run_Batch for our Job Factory Application. Finally, we’ll click finish and we’re basically done:

Simple PowerShell from Workspace
Wait! We’re not actually done! But we are done in Workspace, with the exception of actually testing it out. But before we test it out, we have to go create our PowerShell file. I’m going to start with a very simple script that simply writes the username currently executing PowerShell to the screen. This lets us do a few things. First, it lets you validate the account used to run PowerShell. This is always handy to know for permissions issues. Second, it lets you make sure that we still get the output of our PowerShell script inside of Workspace. Here’s the code:
$User = [System.Security.Principal.WindowsIdentity]::GetCurrent().Name
Write-Output $User
Now we need to make sure we put this file in the right place. If we go back up to the very first step in this entire process, we select our server. This is the server that we need to place this file on. The reference in our batch file above will be to a path on that system. In my case, I need to place the file into e:\data on my HyperionRP24 server:

Give it a Shot
With that, we should be able to test our batch which will execute PowerShell from Workspace. We’ll go to Explore and find our uploaded job, right-click, and click Run Job:

Now we have the single option of output directory. This is essentially where the user selects where to place the log file of our activities. I chose the logs directory that I created:

If all goes according to plan, we should see a username:

As we can see, my PowerShell script was executed by Hyperion\hypservice, which makes sense as that’s the service account used to run all of the Hyperion services.
Now the Fun
We have successfully recreated Celvin’s process in 11.1.2.4. Now we are ready to extend his process further with PowerShell. We already have our job referencing our PowerShell script stored on the server, so anything we choose to do from here on out can be independent of Hyperion. And again, running PowerShell from Workspace gives us so much more functionality, we may as well try some of it out.
One Server or Many?
In most Hyperion environments, you have more than one server. If you have Essbase, you probably still have a foundation server. If you have Planning, you might have Planning, Essbase, and Foundation on three separate machines. The list of servers goes on and on in some environments. In my homelab, I have separate virtual machines for all of the major components. I did this to try to reflect what I see at most clients. The downside is that I don’t have everything installed on every server. For instance, I don’t have MaxL on my Reporting Server. I also don’t have the Outline Load Utility on my Reporting Server. So rather than trying to install all of those things on my Reporting Server, some of which aren’t even supported, why not take advantage of PowerShell? PowerShell has the built-in capability to execute commands on remote servers.
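To give you the basic shape before we layer in credentials, here is a minimal remote call, just as a sketch (the computer name is a placeholder); the full version we build below adds a proper credential and session handling:
#Minimal remote execution sketch: runs a simple command on the remote server and returns the result
Invoke-Command -ComputerName "YourEssbaseServer" -ScriptBlock { $env:COMPUTERNAME }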
Security First
Let’s get started by putting our security hat on. We need to execute a command remotely. To do so, we need to provide login credentials for that server. We generally don’t want to do this in plain text as somebody in IT will throw a flag on the play. So let’s fire up PowerShell on our reporting server and encrypt our password into a file using this command:
read-host -prompt "Password?" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "PasswordFile.pass"
This command requires that you type in your password which is then converted to a SecureString and written to a file. It’s important to note that this encrypted password will only work on the server that you use to perform the encryption. Here’s what this should look like:

If we look at the results, we should have an encrypted password:

Now let’s build our PowerShell script and see how we use this password.
Executing Remotely
I’ll start with my code which executes another PowerShell command on our remote Essbase Windows Server:
###############################################################################
#Created By: Brian Marshall
#Created Date: 7/19/2018
#Purpose: Sample PowerShell Script for EPMMarshall.com
###############################################################################
###############################################################################
#Variable Assignment
###############################################################################
#Define the username that we will log into the remote server
$PowerShellUsername = "Hyperion\hypservice"
#Define the password file that we just created
$PowerShellPasswordFile = "E:\Data\PasswordFile.pass"
#Define the server name of the Essbase server that we will be logging into remotely
$EssbaseComputerName = "HyperionES24V"
#Define the command we will be remotely executing (we'll create this shortly)
$EssbaseCommand = {E:\Data\RemoteSample\RemoteSample.ps1}
###############################################################################
#Create Credential for Remote Session
###############################################################################
$PowerShellCredential=New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $PowerShellUsername, (Get-Content $PowerShellPasswordFile | ConvertTo-SecureString)
###############################################################################
#Create Remote Session Using Credential
###############################################################################
$EssbaseSession = New-PSSession -ComputerName $EssbaseComputerName -credential $PowerShellCredential
###############################################################################
#Invoke the Remote Job
###############################################################################
$EssbaseJob = Invoke-Command -Session $EssbaseSession -Scriptblock $EssbaseCommand 4>&1
echo $EssbaseJob
###############################################################################
#Close the Remote Session
###############################################################################
Remove-PSSession -Session $EssbaseSession
Basically we assign all of our variables, including the use of our encrypted password. Then we create a credential using those variables. We then use that credential to create a remote session on our target Essbase Windows Server. Next we can execute our remote command and write out the results to the screen. Finally we close out our remote connection. But wait…what about our remote command?
Get Our Remote Server Ready
Before we can actually execute anything remotely on a server, we need to start up PowerShell on that remote server and enable remote connectivity in PowerShell. So…log into your remote server, start PowerShell, and execute this command:
Enable-PSRemoting -Force
If all goes well, it should look like this:

If all doesn’t go well, make sure that you started PowerShell as an Administrator. Now we need to create our MaxL script and our PowerShell script that will be remotely executed.
The MaxL
First we need to build a simple MaxL script to test things out. I will simply log in and out of my Essbase server:
login $1 identified by $2 on $3;
logout;
The PowerShell
Now we need a PowerShell script to execute the MaxL script:
###############################################################################
#Created By: Brian Marshall
#Created Date: 7/19/2018
#Purpose: Sample PowerShell Script for EPMMarshall.com
###############################################################################
###############################################################################
#Variable Assignment
###############################################################################
$MaxLPath = "E:\Oracle\Middleware\user_projects\Essbase1\EssbaseServer\essbaseserver1\bin"
$MaxLUsername = "admin"
$MaxLPassword = "myadminpassword"
$MaxLServer = "hyperiones24v"
###############################################################################
#MaxL Execution
###############################################################################
& $MaxLPath\StartMaxL.bat E:\Data\RemoteSample\RemoteSample.msh $MaxLUsername $MaxLPassword $MaxLServer
This is as basic as we can make our script. We define our variables around usernames and servers and then we execute our MaxL file that logs in and out.
Test It First
Now that we have that built, let’s test it from the Essbase Windows Server first. Just fire up PowerShell, go to the directory where your file exists, and execute it:

Assuming that works, now let’s test the remote execution from our reporting server:

Looking good so far. Now let’s head back to Workspace to see if we are done:

Conclusion
That’s it! We have officially executed a PowerShell script which remotely executes a PowerShell script which executes a MaxL script…from Workspace. And the best part is that we get to see all of the results from Workspace and the logs are stored there until we delete them. We can further extend this to do things like load dimensions using the Outline Load Utility or using PowerShell to send e-mail confirmations. The sky is the limit with PowerShell!
Brian Marshall
July 19, 2018
If you attended my recent presentation at Kscope18, I covered this topic and provided a live demonstration of MDXDataCopy. MDXDataCopy provides an excellent method for creating functionality similar to that of Smart Push in PBCS. While my presentation has all of the code that you need to get started, not everyone likes getting things like this out of a PowerPoint and the PowerPoint doesn’t provide 100% of the context that delivering the presentation provides.
Smart Push
In case you have no idea what I’m talking about, Smart Push provides the ability to push data from one cube to another upon form save. This means that I can do a push from BSO to an ASO reporting cube AND map the data at the same time. You can find more information here in the Oracle PBCS docs. This is one of the features we’ve been waiting for in On-Premise for a long time. I’ve been fortunate enough to implement this functionality at a couple of clients that can’t go to the cloud yet. Let’s see how this is done.
MDXDataCopy
MDXDataCopy is one of the many, many functions included with Calculation Manager. These are essentially CDF’s that are registered with Essbase. As the name implies, it simply uses MDX queries to pull data from the source cube and then map it into the target cube. The cool part is that it works perfectly with ASO. But, as with many things Oracle, especially on-premise, the documentation isn’t very good. Before we can use MDXDataCopy, we first have some setup to do:
- Generate a CalcMgr encryption key
- Encrypt your username using that key
- Encrypt your password using that key
Please note that the encryption process we are going through is similar to what we do in MaxL, yet completely different and separate. Why would we want all of our encryption to be consistent anyway? Let’s get started with our encrypting.
Generate Encryption Key
As I mentioned earlier, this is not the same process that we use to encrypt usernames and passwords with MaxL, so go ahead and set your encrypted MaxL processes and ideas to the side before we get started. Next, log into the server where Calculation Manager is installed. For most of us, this will be where Foundation Services happens to also be installed. First we’ll make sure that the Java bin folder is in the path, then we’ll change to our lib directory that contains calcmgrCmdLine.jar, and finally we’ll generate our key:
path e:\Oracle\Middleware\jdk160_35\bin
cd Oracle\Middleware\EPMSystem11R1\common\calcmgr\11.1.2.0\lib
java -jar calcmgrCmdLine.jar -gk
This should generate a key:

We’ll copy and paste that key so that we have a copy. We’ll also need it for our next two commands.
Encrypt Your Username and Password
Now that we have our key, we should be ready to encrypt our username and then our password. Here’s the command to encrypt using the key we just generated (obviously your key will be different):
java -jar calcmgrCmdLine.jar -encrypt -key HQMvim5GrSYox7S9bR8jSx admin
java -jar calcmgrCmdLine.jar -encrypt -key HQMvim5GrSYox7S9bR8jSx GetYourOwnPassword
This will produce two keys for us to again copy and paste somewhere so that we can reference them in our calculation script or business rule:

Now that we have everything we need from our calculation manager server, we can log out and continue on.
Vision
While not as popular as Sample Basic, the demo application that Hyperion Planning (and PBCS) comes with is great. The application is named Vision and it comes with three BSO Plan Types ready to go. What it doesn’t come with is an ASO Plan Type. I won’t go through the steps here, but I basically created a new ASO Plan Type and added enough members to make my demonstration work. Here are the important parts that we care about (the source and target cubes):

Now we need a form so that we have something to attach to. I created two forms, one for the source data entry and one to test and verify that the data successfully copied to the target cube. Our source BSO cube form looks like this:

Could it get more basic? I think not. And then for good measure, we have a matching form for the ASO target cube:

Still basic…exactly the same as our BSO form. That’s it for changes to our Planning application for now.
Calculation Script
Now that we have our application ready, we can start by building a basic (I’m big on basic today) calculation script to get MDXDataCopy working. Before we get to building the script, let’s take a look at the parameters for our function:
- Key that we just generated
- Username that we just encrypted
- Password that we just encrypted
- Source Essbase Application
- Source Essbase Database
- Target Essbase Application
- Target Essbase Database
- MDX column definition
- MDX row definition
- Source mapping
- Target mapping
- POV for any dimensions in the target, but not the source
- Number of rows to commit
- Log file path
Somewhere buried in that many parameters you might be able to find the meaning of life. Let’s put this to practical use in our calculation script:
RUNJAVA com.hyperion.calcmgr.common.cdf.MDXDataCopy
"HQMvim5GrSYox7S9bR8jSx"
"PnfoEFzjH4P37KrZiNCgd0TMRGSxWoFhbGFJLaP0K72mSoZMCz2ajF9TePp751Dv"
"D44Yplx+Mlj6P2XhGfwvIw4GWHQ5tWOytksR5bToq126xNoPYxWGe3KGlPd56oZ8"
"VisionM"
"Plan1"
"VMASO"
"VMASO"
"{[Jul]}"
"CrossJoin({[No Account]},CrossJoin({[FY16]},CrossJoin({[Forecast]},CrossJoin({[Working]},CrossJoin({[No Entity]},{[No Product]})))))"
""
""
""
"-1"
"e:\\mdxdatacopy.log";
Let’s run down the values used for our parameters:
- HQMvim5GrSYox7S9bR8jSx (Key that we just generated)
- PnfoEFzjH4P37KrZiNCgd0TMRGSxWoFhbGFJLaP0K72mSoZMCz2ajF9TePp751Dv (Username that we just encrypted)
- D44Yplx+Mlj6P2XhGfwvIw4GWHQ5tWOytksR5bToq126xNoPYxWGe3KGlPd56oZ8 (Password that we just encrypted)
- VisionM (Source Essbase Application)
- Plan1 (Source Essbase Database)
- VMASO (Target Essbase Application)
- VMASO (Target Essbase Database)
- {[Jul]} (MDX column definition, in this case just the single member from our form)
- CrossJoin({[No Account]},CrossJoin({[FY16]},CrossJoin({[Forecast]},CrossJoin({[Working]},CrossJoin({[No Entity]},{[No Product]}))))) (MDX row definition, in this case it requires a series of nested crossjoin functions to ensure that all dimensions are represented in either the rows or the columns)
- Blank (Source mapping which is left blank as the two cubes are exactly the same)
- Also Blank (Target mapping which is left blank as the two cubes are exactly the same)
- Also Blank (POV for any dimensions in the target, but not the source which is left blank as the two cubes are exactly the same)
- -1 (Number of rows to commit which is this case is essentially set to commit everything all at once)
- e:\\mdxdatacopy.log (Log file path where we will verify that the data copy actually executed)
The log file is of particular importance because the calculation will report success regardless of the actual result of the copy. This means that, especially for testing purposes, we need to check the file to verify that the copy actually occurred. We’ll have to log into our Essbase server and open the file that we specified (a scripted way to check it is sketched after the list below). If everything went according to plan, it should look like this:

This gives us quite a bit of information:
- The query that was generated based on our row and column specifications
- The user that was used to execute the query
- The source and target applications and databases
- The rows to commit
- The query and copy execution times
- And the actual data that was copied
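Since the calculation itself reports success either way, one option is to have your automation scan that log after the copy runs. Here is a minimal PowerShell sketch; it assumes the log path used above and that problem lines contain the word “Error”:
#Sketch: scan the MDXDataCopy log for problems (assumes error lines contain the word "Error")
$CopyErrors = Select-String -Path "e:\mdxdatacopy.log" -Pattern "Error"
if ($CopyErrors) { Write-Host "MDXDataCopy reported errors; review the log." } else { Write-Host "No errors found in the MDXDataCopy log." }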
If you have an error, it will show up in this file as well. We can see that our copy was successful. For my demo at Kscope18, I just attached this calculation script to the form. This works and shows us the data movement using the two forms. Let’s go back to Vision and give it a go.
Back to Vision
The last step to making this fully functional is to attach our newly created calculation script to our form. Notice that we’ve added the calculation script and set it to run on save:

Now we can test it out. Let’s change our data:

Once we save the data, we should see it execute the script:

Now we can open our ASO form and we should see the same data:

The numbers match! Let’s check the log file just to be safe:

The copy looks good here, as expected. Our numbers did match after all.
Conclusion
Obviously this is a proof of concept. To make this production ready, you would likely want to use a business rule so that you can get context from the form for your data copy. There are, however, some limitations compared to PBCS. For instance, I can get context for anything that is a variable or a form selection in the page, but I can’t get context from the grid itself. So I need to know what my rows and columns are and hard-code them. You could use some variables for some of this, but at the end of the day, you may just need a script or rule for each form that you wish to enable Smart Push on. Not exactly the most elegant solution, but not terrible either. After all, how often do your forms really change?
Brian Marshall
July 8, 2018
So…I posted my presentations here, but then forgot to update them on the Kscope18 site. So, if you prefer to get them from the Kscope18 site, they should now be updated to the same versions as below. Also, there was another Kscope18 recap, so I thought I would mention it here as well.
Kscope18 Recaps
Kscope18 Presentations
And just in case you would rather get them from the US-Analytics site…here they are:
I’ll say that this presentation left me a little unnerved. I’ve never had a presentation with so few questions. We had questions at the end, but very few if any during the presentation.
This presentation was actually cited as a “featured presentation” by ODTUG. So…no pressure. I actually felt much better about this presentation simply because there were a ton of questions. And they were really, really great questions.
Wrap Up
Overall, Kscope18 was a great experience, as always. This was my 9th Kscope in a row. I am really hoping to have a presentation selected next year to make it an even 10 years! So…hopefully I’ll see everyone next year in Seattle!

Brian Marshall
June 30, 2018
As it does every year, Kscope18 has come and gone in a blur. It’s been quite some time since my last post, something I hope to change now that Kscope18 has concluded. This year, as has become the unfortunate trend for me, was another quick trip. I flew in on the day of my first presentation and flew out the day after my last presentation. I was fortunate enough to spend some time with old friends, and people did show up to my presentations, so I’m calling this year a success! While I didn’t get to spend much time at the conference, many of you did, and several of you were kind enough to put together a real recap.
Kscope18 Recaps
Kscope18 Presentations
This year I was honored to have two presentations. You can download them from either the US-Analytics website or the Kscope website:
I’ll say that this presentation left me a little unnerved. I’ve never had a presentation with so few questions. We had questions at the end, but very few if any during the presentation.
This presentation was actually cited as a “featured presentation” by ODTUG. So…no pressure. I actually felt much better about this presentation simply because there were a ton of questions. And they were really, really great questions.
Wrap Up
Overall, Kscope18 was a great experience, as always. This was my 9th Kscope in a row. I am really hoping to have a presentation selected next year to make it an even 10 years! So…hopefully I’ll see everyone next year in Seattle!

Brian Marshall
June 25, 2018

As my family and I wrapped up our first-ever family trip to Disney World, I received four e-mails from ODTUG. The first two that I read let me know that two of my abstracts had been declined. The second two let me know that two of my abstracts had been accepted! So…back to Disney World I go! I’m extremely honored and excited to have been selected to speak at Kscope18. Thank you to everyone on the Kscope18 selection committee for the countless hours required to review over 1000 abstracts and narrow them down to around 300. So what will I be presenting?
Teaching On-Prem Planning some PBCS Tricks
Who says you can’t teach an old dog new tricks? PBCS has continued to add features since its initial release at a very good pace. Unfortunately, the exact opposite is true for on-premise Hyperion Planning. With little more than basic patches having been made available, PBCS is well over 3 years ahead of its on-premise sibling. If you are stuck with on-premise Hyperion Planning and would like to add some of the cool new features from PBCS to your application, this session is for you. Learn how to harness the power of Smart Push…on-premise. Learn how to create and use Smart Lists dynamically connected to dimensions…on-premise. This session will come complete with various demos and sample code to get you up and running with your shiny new PBCS tricks on your old-school on-premise application.
ASO and BSO and Hybrid! Oh My!
Essbase started off with BSO, and everyone loved it. BSO is still loved by many in the Essbase community, but many applications work much better in ASO. Now Oracle has given us Hybrid mode…which is also great. But which technology is the best? The answer, as many consultants like to tell clients is…it depends. Each choice has strengths and weaknesses…some unexpected. We’ll discuss tuning Hybrid applications using non-traditional block sizes. We’ll discuss why ASO can’t always be replaced by Hybrid applications. We’ll even talk about Smart Push and why Hybrid may not matter for many applications. If you are considering which storage option to use for your application, this session should help you make a decision. We’ll even throw in a live demo to show some of the massive differences between each storage technology.
See you at the Dolphin and the Swan!

Brian Marshall
March 13, 2018
As we all know, Oracle has put virtually all of their development efforts into the cloud. This is especially true for the EPM Suite of products (PBCS, FCCS, ARCS, etc.). As a result, PBCS keeps getting great new features that we may never see in on-premise Hyperion Planning. I was talking to Jake Turrell today and we were comparing notes on the new functionality that we have used in PBCS on projects recently. That conversation devolved into us making a rather long list of new features. Special thanks to Jake for helping me make this list, as I wouldn’t have thought of a good portion of the things on it without his help. So what new functionality has been added to PBCS that will likely never make it to on-premise Hyperion Planning?
Forms
Hyperion Planning has existed for over 15 years now, so you might think that the form design capabilities would be fully-baked by now. For the most part, this is a true statement. But, there have been some pretty big holes that PBCS has finally filled. Two new additions in particular make for a better form design experience for developers: Exclusions and Ranges.
Exclusions
In Planning, when we attempt to select members, that’s the only option…select members. In PBCS, they have added the ability to edit the selection (our old select members option) and the ability to add exclusions. Exclusions give us an easy way to take, for example, inclusive descendants of our entity dimension while excluding a specific list. This is particularly useful when we are referencing a substitution variable or a user variable. We don’t know the full extent of what could be returned, but we do know what we definitely don’t want.
Ranges
When you do monthly forecasting, nothing has been more annoying in form design than the inability to easily specify a range of members. In Planning, I can’t just ask the form to give me Jan through &CurrentMonth in one column and &CurrentMonth through Dec in another column. This means to really make my forms dynamic, I need more substitution variables than I’m comfortable with and a form that has a ton of columns with the combinations. In PBCS, I now have four new member selection functions that allow me to put together a range:
- Left Siblings
- Left Sibling (inc)
- Right Siblings
- Right Siblings (inc)
Finally! I can do a range of members with just two columns and a single substitution variable!
Formatting
We can now format our forms! You can change colors, font styles, add lines, along with other formatting options. These options will show up in Excel and in the Simplified Interface. This does not work in Workspace…but who cares, it’s officially dead in PBCS anyway as of the February release coming out shortly.
Smart Forms
Not to be confused with regular forms…we have Smart Forms. This is an exciting new feature that allows you to take an ad hoc form, add formulas, and save them to the actual form! While this is cool for a demo, I’m not necessarily a fan in practice. While it is much better than building formulas in an actual form, which is painful, it still presents a problem. Why are you doing form math? In general I try to put math back in the Essbase model rather than having formulas on multiple forms.
Periods
In Planning, if I want to add periods to just a single plan type, I’m totally out of luck. The boxes are all grey and there’s no way around it. In PBCS, I can now simply un-check the plan types from which I would like to exclude the member. This is a simple feature, but makes a massive difference in the flexibility in our designs.
Years
For literally years I’ve helped companies add and delete years from Planning applications. There are a few ways to do this, but none of them are supported or in the interface. In PBCS, if I want to delete a year, I simply select the year and click the delete button. Again, this is super-simple, but so very nice to have. Additionally, if I want to add years in the past, I can now do this in the interface! Simply add the number of years you wish to add, and when PBCS asks if you would like to add them to the end, click no. Now you have years in the past. This feature is a little more obfuscated, but still pretty simple.
Data Maps
On-Premise planning does have the idea of a reporting cube and it does give you the ability to create some level of mapping. But it definitely doesn’t do what PBCS does. PBCS has the ability to map and move data on the fly and then it takes it a step further: Smart Push. Smart Push is one of the most amazing features that they have added to PBCS. For many applications, it gives us the ability to have an ASO cube with live data from our BSO cube with no crazy partitions or really any work at all beyond the mapping. So as long as we input to our BSO cube and report from our ASO cube, I may never need to aggregate my BSO cube again.
It is fair to mention that while this functionality is not baked into Planning, if you really need it, you can build it from the ground up with some fancy scripting on the back end. Even still, it doesn’t hold a candle to the ease of use and stability of Smart Push.
Valid Intersections
I’ve been demoing Planning and Essbase for a very long time. When people ask what benefit Essbase might have over Planning, there are very few good answers. One of those answers however has always been that Essbase can support what we call matrix security. This is essentially the ability to allow a user to have write access to a cross dimensional set of intersections. For instance, for Entity A I can modify Account 1000 while for Entity B I can modify Account 2000. Planning simply doesn’t support that. I have to give a user Entity A, Entity B, Account 1000 and Account 2000. That user will be able to modify all combinations.
PBCS fixes this. With valid intersections, I can create a set of intersections as defined above and limit the user’s ability to write back to invalid intersections. From a security perspective, they still have access, but with valid intersections, they lose it. Many people wanted valid intersections to give us the ability to cascade member selections across dimensions, which would be cool, but this functionality is just as useful.
SmartLists
I know what you’re thinking, Planning has SmartLists. But PBCS has SmartLists that can be dynamically created directly from a dimension. This means that I can provide the user with a list of accounts. Big deal…who cares, right? I care if I add an account. With this new functionality, when an account is added, the SmartList is updated automagically. Ok…that is a big deal. Not content with this already amazing feature, Oracle took it a step further. You can also reference the value of a SmartList in a calculation. This means that I can use the selection in a SmartList to truly manipulate data. Basically a new alias is created that references the OBJECT_ID. That OBJECT_ID is also used as the value stored in Essbase for the SmartList selection. Combined, I can easily reference the member that the SmartList is linked to. Like I said…big deal. Huuuuuge even.
Attribute Dimensions
This is another item that has some support in Planning, but missed the point. I can technically add attribute dimensions to a Planning application and I can use them in a variety of ways. But the two ways I need to be able to use them are missing. They can’t be used in a form. They can’t be used in Smart View. I can technically use an Essbase connection directly and use them for analysis, but that only works on BSO and doesn’t work at all on ASO Plan Types.
PBCS fixes both of these issues. I can layer in attribute dimensions easily on forms. It also fixes the Smart View issue by allowing for attribute dimension selection in the Planning Ad Hoc connector. We’ve only been asking for this in Planning for a decade. The chances seem very slim that we’ll ever actually see it there, given the last ten years.
Navigation Flows
Technically speaking, the simplified interface is available in 11.1.2.4. But I don’t think it could possibly be any worse than it is. It’s essentially there for dashboards and everything partially works. The simplified interface in PBCS on the other hand is pretty great. It may require 100 extra clicks for a variety of administrative functions, but for end-users, I would consider it an upgrade.
One of the reasons I believe this is the addition of navigation flows. I can create my own customized tile interface for my application and assign it to a user. This means I can really create a user-specific interface tailored for a specific set of business processes. This helps me put together a pretty awesome demo and makes end-users feel like it is a more truly customized application.
But wait, there is a downside. I love navigation flows. And if your users are primarily in the web-based interface, they are amazing. If the majority of your users are in Excel however…they will totally be out of luck. Navigation flows haven’t made it over there yet. I’m not even sure if they can without a major interface overhaul.
Dashboards
While we are on the topic of the simplified interface, let’s discuss dashboards. They do exist, like the simplified interface, in 11.1.2.4. But, much like the entire simplified interface in 11.1.2.4, they aren’t great. PBCS has also added a variety of new visualization types:
- Combination Graphs (seriously, how is this not in on-premise)
- Funnel
- Radar
- Tile
While I believe PBCS dashboards are fantastic, they do have at least one major downside. Again, they don’t work in Smart View. But, it’s a dashboard, so I’ll give Oracle a free pass.
Browser Support and Mobile Support
For a very long time, Internet Explorer was it with Hyperion. Eventually, Oracle brought Firefox into the fold. Now, with PBCS, it really doesn’t matter what platform you work on. The simplified interface is fully compatible with Internet Explorer, Firefox, Chrome, and Safari. This is of particular importance given how easily I can access PBCS from my phone or tablet. The interface is great on mobile devices. This is an area where dashboards can really shine. To get mobile access in Planning, I have to bribe somebody in IT to open ports on the firewall. And frankly, I don’t think any of us have enough money to afford the bribe necessary for that to happen.
Localization
If you haven’t done a lot of international applications, you probably don’t care about this at all. But for companies with users all over the world, PBCS has made life much, much better. First is the ability for PBCS to automatically detect the language settings in your browser and automatically translate everything that’s built in. Oracle has taken this a giant leap further and added something called Artifact Labels. Essentially I can add languages and labels to all of my objects now. Instead of a form being Revenue Input for all of my languages, I can now label that form in any language. This is pretty impressive compared to Planning.
Application Reporting
No, not financial reports, but reports about the application. Planning essentially provides nothing in the way of reporting. You can get a variety of information out of the repository, but that’s just painful. PBCS has added a wealth of reporting options. Here’s a quick list:
- User Login Report – When and how often are users in the system?
- Form Definition Report – Great for documentation, this produces a PDF of selected forms with the entire definition in a nice set of tables. Rows, columns, POV, page, business rules, etc.
- Approval Status Report – How can I tell where everyone is on their approvals? This will produce a report providing just that in a variety of formats including XLSX and HTML.
- Access Control Report – See how everyone is provisioned. It will show either explicitly assigned rights or effective rights. Pretty convenient.
- Activity Reports – Check out what your users are up to.
- Access Logs – Get the full picture of everything that happened.
- Audit Report – Finally, I don’t have to query the HSP_AUDIT_RECORDS table. I also don’t have to go to the specific cell. I can run a quick export to Excel. Not perfect, but I’ll take it.
Groovy Business Rules
With EPBCS, I can now write business rules in Groovy. These rules can go far beyond the simple bounds of Essbase data. They can pull context from the application itself. I am sad that this feature has not yet and will likely not ever make it into regular PBCS. Here’s hoping.
LCM Maturity
I’ve been using LCM for a long, long time. I can’t point to specific things in LCM that are better, but I can describe LCM in PBCS as more “mature.” It just feels more stable and seems to work better. This could just be in my head (and Jake’s)…
Academy
I know, on-premise applications have a ton of documentation. But there’s something to be said for easy access to what I’m looking for. There is a ton of content on the Academy and much of it is especially useful for new users. New Planning users are basically on their own.
No Infrastructure Needs
For those of you that do infrastructure, this is not a plus. But for the rest of us, not needing to install and configure the system is just easy. I don’t have to worry about something in IT getting messed up. I don’t have to worry about applying patches. Having said that, you do lose control of your infrastructure. But hey, it’s the cloud.
No VPN Necessary
I mentioned earlier that I can finally access my PBCS application with my mobile devices. The cloud makes this so much easier. Not only that, but if you need to give your consultant access to the system, it takes 5 minutes and doesn’t require hours of paperwork and begging of IT. I love not needing yet another VPN connection just to modify a form.
Free FDMEE!
Okay, so it isn’t FDMEE. But for most clients, it does more than enough. And again…it is free. So stop complaining that it only loads text files.
Conclusion
Having said all of that, and it was a lot, PBCS still isn’t for everyone. But as time passes and development continues for PBCS while it stands still for Planning, it is becoming more and more difficult to ask the question why PBCS? Instead we really have to ask why NOT PBCS?
Brian Marshall
January 29, 2018

In case you haven’t heard, there is now a South Texas Hyperion User Group! Their first event is happening this Thursday, October 26, 2017 and like the NTxHUG, things will start up around 3:00 PM. That’s the when…here’s the where:
Noble Energy
1001 Noble Energy Way
Houston, TX 77070
Once the event ends, there will be a social hour here:
Baker Street Pub & Grill
17278 Tomball Pkwy
Houston, TX 77064
Things will start off with a Meet and Greet along with a welcome from the Organization Committee. Jake Turrell will give everyone an introduction to ODTUG and then get into his presentation comparing PBCS to On-Prem Planning. Once he wraps up, Annie Hoang will provide a customer story about the Evolution of EPM at Parker Drilling. It sounds like the Houston crew has a great event planned for Thursday!
Brian Marshall
October 24, 2017

It has been far too long since my last post and it will be a little longer before we get to another technical post. In just one week, the next North Texas Hyperion User Group will be meeting in Dallas. The event is on October 19, 2017 and things will start up around 3:00 PM. That’s the when…here’s the where:
Balfour Beatty Construction
3100 McKinnon
Dallas, TX 75201
Balfour has been kind enough to host in the past and it has always made for a great place to meet. Once the event ends, there will be a social hour here:
Katy Trail Ice House
3127 Routh St
Dallas, TX 75201
Trey Daniel will kick things off with his presentation: That’s Not in the Documentation: Gotchas From Implementing PBCS at GameStop. After Trey, there will be an Oracle representative getting us all up to speed on the on-premise roadmap. It should be a great event and we hope to see everyone there! You can find more information here.
Brian Marshall
October 12, 2017
Introduction to Weekly Offenders
This week was so light, I thought I would wait a few days to see if we might get a few more contributions. We finally had a few more posts, so here goes!

Cameron has a post about his guest post on Mark Rittman’s site. Check out the actual post here.
Keith has a very interesting post on installing the EPM stack on Linux with SQL Server…for Linux. What?!
Peter continues where he left off with the REST API for PBCS.
Glenn takes on the question of commentary in Essbase. He uses Linked Reporting Objects in OAC. I find it extremely interesting that OAC supports LRO’s as I thought they were being deprecated.
Jason gives us a post about time period conversion using Drillbridge. I need to get back to doing some Drillbridge blog posts…
John Goodwin shows us how to create a custom scheduler for FDMEE. This should help with some of the shortcomings of the stock solution.
The Proactive Support Blog has posted a variety of patches, mostly for cloud services:
Vijay has a post showing us how to restart the ODI Standalone Agent with a Groovy script. Cool stuff. Pete also contributes to the same site and has a post on JavaScript and DRM.
Be On the Lookout!
That’s it for this week! Be on the lookout for more great EPM Blog content next week!
Brian Marshall
August 3, 2017