Hyperion EPM Week In Review: September 30, 2016

You know you are running late on your Hyperion EPM Week in Review when someone asks you about it on a phone call…  It’s been about 10 days since my last week in review, so welcome to this week’s “Hyperion EPM 10 Days in Review.”
Hyperion EPM Week In Review

Hyperion EPM Patches and Updates:

Hyperion Financial Reports has been released.  Looks like mostly bug fixes.

The October updates for the cloud will start flowing out next Friday!  We have PBCS updates-a-plenty.  You can read more about those here.  We also have some ARCS updates that you can read about here.  Finally, we have FCCS updates that you can read about here.

Hyperion EPM Blog Posts:

We’ll start off with a post from Robert on the death of on-prem!  Well, it may be more of a question, but he has a great headline.

Summer has a post about the cloud and how to support your applications.  Managed services are becoming far more popular as it becomes more difficult and expensive to find an admin.

Dayalan has the third part of his Essbase Web Services series.  He covers the admin services portion of the web service.  He then follows that up with part four covering the Datasource Service.

Christian has a post regarding some settings in the BIOS to improve performance in Essbase.  Definitely an interesting post.  Apparently hyper-threading is bad for Essbase, and he also found that tuning memory interleaving can help.

Glenn has a quick post on some Excel Add-ins that cause odd behavior with Smart View.  I’m excited to hear more.  He also has a post on migrating from EIS to Studio.  I would just convert things back to a normal build and use DrillBridge personally.

I covered the next part in my series on FreeNAS.  I also spent some time with network storage getting ready for my Essbase benchmarks.

Sibin has several posts again this week.  First he covers Category Mappings in FDMEE.  Next he moves on to the Open Interface in FDMEE.  After that he covers the DATAEXPORT command in Essbase.  Finally he covers more information on the Open Interface Adapter.

Tim has some more information about Essbase Cloud Services.  This should be an interesting read if you haven’t heard much yet.

John Goodwin has a part one of a series covering Oracle’s REST Data Services and how to integrate that with the EPM stack.  Good read…very in-depth.  Not to be outdone by himself, he gives us part two as well.

Jason has a funny post about TBC going bankrupt.  Give it a minute and it will make sense.

Will has a post about some new features in DRG: Workflow Task Calculate Name and Calculate Parent.

Henri has a great post on the HFM Java API.  More parts are coming and I can’t wait to read them.

Opal talks a lot about FCCS and where she thinks development is heading.  This is a good read and FCCS is definitely going to open up a lot of new opportunities in our EPM market.

Gary has a collection of highlights from the Oracle docs to help everyone out with Smart View.  As more of us are forced to give up the Excel add-in, this will become more and more important.

Cameron really wants you to join ODTUG.   I’m a member…are you?  ODTUG brings us Kscope and a whole host of other important activities, so go become a paying member!

Oracle OpenWorld Coverage

OpenWorld was last week.  I’ve recapped all of the blogging here by person, including prior posts, to have a complete list of coverage.




Kscope 17 Abstracts

Friendly reminder to those of you interested in presenting at Kscope17 next year…the deadline is rapidly approaching with just two weeks left.  That’s right, the deadline is October 14!  Go submit some abstracts!

Essbase Performance: Part 4 – Network Storage (CDM)


Welcome to part four of a series that will have a lot of parts.  In our last two parts we took a look at our test results using the CrystalDiskMark and Anvil synthetic benchmarks.  As we said in parts two and three, the idea here is to see first how everything measures up in synthetic benchmarks before we get into the actual benchmarking of Essbase performance.

Before we get into our network options, here’s a re-cap of the series so far:

Network Storage Options

Today we’ll be changing gears away from local storage and moving into network storage options.  As I started putting together this part of the series, I struggled with the sheer number of options available for configuration and testing.  I’ve finally boiled it down to the options that make the most sense.  At the end of the day, if you are on local physical hardware, you probably have local physical drives.  If you are on a virtualized platform, you probably have limited control over your drive configuration.

So with that, I decided to limit my testing to the configuration of the physical data store on the ESXi platform.  Now, this doesn’t mean that there aren’t options of course.  For the purposes of this series, we will focus on the two most common network storage types:  NFS and iSCSI.


NFS, or Network File System, is an ancient (1984!) means of connecting storage to a network device.  This technology has been built into *nix for a very long time and is very stable and widely available.  The challenge with NFS for Essbase performance relates to how ESXi handles writes.

ESXi basically treats every single write to an NFS store as a synchronous write.  Synchronous writes require a response back from the device confirming that they have completed properly.  This is great for the safety of your data, but terrible for write performance, and traditional hard drives are very bad at it.  You can read more about this here, but basically this leaves us with a pair of options.

Add a SLOG

So what on earth is a SLOG?  In ZFS, there is the concept of the ZFS Intent Log.  This is a temporary location where things go before they are sent to their final place on the storage volume.  The ZIL exists on the storage volume, so if you have spinning disks, that’s where the ZIL will be.

The Separate ZFS Intent Log (SLOG for short) allows for a secondary location to be defined for the Intent Log.  This means that we can use something fast, like an SSD, to perform this function that spinning disks are quite terrible at.  You can read a much better description here.
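If you’re running ZFS yourself, attaching a SLOG is a one-line operation.  Here’s a minimal sketch; the pool name (tank) and SSD device name (ada1) are placeholders for your own:

```shell
# Attach a fast SSD as a Separate ZFS Intent Log (SLOG) for the pool "tank".
zpool add tank log ada1

# Confirm the log vdev now appears alongside the data vdevs.
zpool status tank
```

Because the SLOG only has to absorb synchronous writes on their way to the pool, even a small SSD works fine; what matters is low write latency.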

Turn Sync Off

The second option is far less desirable.  You can turn synchronous writes off on the volume altogether.  This will make performance very, very fast.  The huge downside is that no synchronous writes will ever happen on that volume.  This is basically the opposite of how ESXi treats an NFS volume.


iSCSI, or Internet Small Computer System Interface, is a network-based implementation of the classic SCSI interface.  You just trade in your physical interface for an IP-based interface.  iSCSI is very common and does things a little differently than NFS.  First, it doesn’t actually implement synchronous writes.  This means that data is always written asynchronously.  This is great for performance, but opens up some risk of data loss.  ZFS makes life better by making sure the file system is safe, but there is always some risk.  Again we have a pair of options.

Turn Sync On

You can force synchronization, but then you will be back to where NFS is from a performance perspective.  For an NVMe device, this will perform well, but with spinning disks, we will need to move on to our next option.

Add a SLOG (after turning Sync On)

Once we turn on synchronous writes, we will need to speed up the volume.  To do this, we will again add a SLOG.  This will allow us to do an apples to apples comparison of NFS and iSCSI in the same configuration.
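Both of these behaviors come down to a single ZFS property, so the test matrix is easy to set up.  A sketch, with placeholder dataset and zvol names:

```shell
# NFS test dataset: disable synchronous writes entirely (fast, but data
# in flight is lost if the box dies before it reaches disk).
zfs set sync=disabled tank/nfs_test

# iSCSI test zvol: force every write to be honored as synchronous,
# matching what ESXi demands of an NFS datastore.
zfs set sync=always tank/iscsi_test

# Verify the current settings.
zfs get sync tank/nfs_test tank/iscsi_test
```

The default for both is sync=standard, which simply honors whatever the client requests; that default is exactly what makes NFS (everything sync) and iSCSI (effectively nothing sync) behave so differently out of the box.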

Essbase Performance

Because all of these things could exist in various environments, I decided to test all of them!  Essbase performance can vary greatly based on the storage sub-system, so I decided to go with the following options:

  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors
  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors with an Intel S3700 200GB SLOG
  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors with sync=disabled (NFS) or sync=always (iSCSI)
  • One (1) Intel P3605 1.6TB NVMe SSD
  • One (1) Intel P3605 1.6TB NVMe SSD with sync=disabled (NFS) or sync=always (iSCSI)

I then created four (4) datasets:

  • One (1) dataset to test NFS on the Hard Drive configurations
  • One (1) dataset to test iSCSI on the Hard Drive configurations
  • One (1) dataset to test NFS on the NVMe configurations
  • One (1) dataset to test iSCSI on the NVMe configurations

So what benchmarks were used?

  • CrystalDiskMark 5.0.2
  • Anvil’s Storage Utilities


And the good stuff that you skipped to anyway…benchmarks!

If you’ve been following the rest of this series, we’ll stick with the original flow.  Basically, we will take a look at CrystalDiskMark results first, and then we’ll move over to Anvil in our next part.

CDM Sequential Q32T1 Read


In our read tests we see NFS outpacing iSCSI in every configuration.  None of the different configurations really make a difference for reads in these tests.

CDM 4K Random Q32T1 Read


The random performance shows the same basic trend as the sequential test.  NFS continues to outpace iSCSI here as well.

CDM Sequential Read


The trend continues with NFS outpacing iSCSI in sequential read tests, even at lower queue depths.

CDM 4K Random Read

The trend gets bucked a bit here at a lower queue depth.  iSCSI seems to take the lead here…but don’t let the graph fool you.  It’s really immaterial.

CDM Sequential Q32T1 Write

Write performance is a totally different story.  Here we see the three factors that drive performance:  synchronous writes, SLOG, and media type.  NFS and iSCSI are inverse of each other by default.  NFS forces synchronous writes while iSCSI forces asynchronous writes.

Clearly asynchronous writes win out every time given the fire-and-forget nature.  The SLOG does help in a big way.  I do find it interesting that our S3700 SLOG seems to perform better on iSCSI than even the NVMe SSD.  Once we get to actual Essbase performance, we’ll see how this holds up.

CDM 4K Random Q32T1 Write


Random performance follows the same trend as sequential performance in our write tests.  iSCSI clearly fares better for random performance so long as an SSD is involved.

CDM Sequential Write


At a lower queue depth, the results follow the trend established in the higher queue depth write tests.  There is still an oddity in the SLOG outpacing the NVMe device.

CDM 4K Random Write


In our final test, we see that random performance is just terrible across the board at lower queue depths.  Clearly much better with a SLOG or an NVMe device, but as expected, very slow.

That’s it for this post!  In our next post, we’ll take a look at the Anvil benchmark results while focusing on I/O’s per second.  I promise we’ll make it to actual Essbase benchmarks soon!

My First FreeNAS: Part 2 – Install, Test, and Configure


It has been quite a while since my last post on my new FreeNAS build.  This project was placed on the back-burner while I had a lot going on.  I’m finally starting to get everything stabilized, so now I’m back at working on my new FreeNAS box.  You can view part 1 of this series here, but just as a quick re-cap, let’s talk about the system specs.  Here is a revised list (things in bold have been changed from Part 1):

  • SuperChassis 846TQ-R900B (with Supermicro 1200W model PWS-1K21P-1R)
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Intel S3500 80GB SSD’s
  • (1) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2

So why did I make these changes?  The power supply was a no-brainer.  The R900B chassis was really, really loud.  I have another Supermicro 846 with a PWS-1K21P which is totally tolerable, so I went ahead and picked one of those up on eBay for pretty cheap.  I decided to get rid of the USB drives and replaced them with S3500’s.  I’ve had two USB sticks go bad on me in the last two months and was having some issues during the installation, so I found a pair of new S3500’s at a great price.

Finally, I decided to use the eight (8) SATA 3G ports on the motherboard so that I could save a PCIe slot for the controller.  I just hooked up classic hard drives to those ports, which can’t even come close to saturating the 3G speeds.  This lets me add another PCIe SSD or an external SAS card to add more drives in an expander.

FreeNAS Installation

Now that the system is finally built and has gone through the burn-in process, it’s time to install FreeNAS.  This is a pretty simple process that is very well documented, so I won’t bore you with my screenshots.  Instead, go check out the FreeNAS site here:

FreeNAS Installation Documentation

I did install directly to a mirrored set of drives, which as you will read in the documentation is as easy as pressing the spacebar!

Once you get through the installation process, it will tell you the IP address you should use to go access the web-based GUI.  This is where the fun really starts.  Again, there are guides for everything that I did, so I’ll instead run through my high-level steps and provide links to the resources that I made use of.

Final Burn-In

Before I made it to the actual configuration of FreeNAS, I had to finish my burn-in process.  In part 1, we completed the burn-in on the majority of our hardware, but now we have to burn in the hard drives.  If you are using all SSD’s, you don’t have to worry about this.  If you have hard drives, even brand new hard drives, you should burn them in.  Luckily, there’s a great burn-in guide on the FreeNAS forums:

FreeNAS Burn-In Guide

FreeNAS Initial Configuration

Now that everything is tested and burn-in has been completed, it was time to set up a storage volume.  You really need at least one of these configured before you can move on to the rest of the steps.  I highly recommend reading up on ZFS if you haven’t already.  There’s a great guide on the FreeNAS forums for those of you completely new to ZFS.  Check it out here:

ZFS Primer Presentation

I set up a pair of volumes to start.  First I set up my classic hard drive volume.  I chose to use a striped set of mirrored disks to give me the best performance and redundancy for my benchmarking use.  This is basically what hardware RAID would call RAID 10.  I will use this volume to test network storage for my Essbase benchmarks.  I’ve also set up a single volume using my 1.6TB PCIe SSD.  Just to break up the text…you may remember this from part 1:

FreeNAS ZFS zPool
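For the curious, the striped-mirror layout that FreeNAS builds through the Volume Manager is ZFS’s take on RAID 10, and from the command line it looks something like this (pool and disk names are placeholders):

```shell
# Create a pool of four two-disk mirrors; ZFS automatically stripes
# writes across all top-level vdevs, giving RAID 10-style behavior.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
```

Each mirror can lose one disk without data loss, and reads get spread across all eight spindles.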

Next I moved on to getting my network configuration up and running.  For me, this was setting up the global settings like DNS servers and the Gateway.  Then I moved on to set up the networking for my server.  Basically I added my main network interface and changed the IP to a static address.  Both of these topics are covered here:

FreeNAS Network Configuration Documentation

Now that the basic network configuration is complete, we can move on to a slightly more involved step: Active Directory integration.  This one isn’t covered as well in the FreeNAS documentation, but a community member has written an excellent guide that I found very helpful.  As a bonus, it also helps you set up your first CIFS share for Windows users on your domain to share:

FreeNAS Active Directory Configuration Documentation

FreeNAS Advanced Configuration

And with that, we can finally move on to having FreeNAS support the on-going benchmarking effort!  There are two main network storage technologies that are by far the most common in the Hyperion EPM world:  NFS and iSCSI.  I started with NFS and created a pair of datasets: one on my hard drive volume and one on my NVMe volume.  I used this guide:

FreeNAS NFS Configuration with ESXi

The guide is a little out of date, but the main thing to remember is that you need to set the maproot property, or ESXi will not work properly.  I also have my root passwords set the same, which is a requirement.  I’m sure there is a better, more secure way to do this, but on my home lab, I’m content.
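For reference, the maproot setting in the GUI ends up as a -maproot flag in the /etc/exports file that FreeNAS generates.  A hedged sketch of what the resulting line looks like (the path and network are placeholders for your own):

```shell
# /etc/exports entry FreeNAS generates for an ESXi-friendly NFS share.
# -maproot=root maps the ESXi root user to root instead of nobody,
# which ESXi needs in order to mount and write to the datastore.
/mnt/tank/nfs_test -maproot=root -network
```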

Once I completed the configuration of my NFS shares, I turned my attention to iSCSI.  NFS requires basically a single screen to set everything up.  iSCSI on the other hand, is a bit more involved.  I followed this guide:

FreeNAS iSCSI Configuration with ESXi

I also found another guide that has more detail, albeit older, and is from a much more trusted source:

Older FreeNAS iSCSI Configuration with ESXi

Conclusion and Next Steps

Now that I’ve made it through my initial setup of FreeNAS, I’ve had an opportunity to run a few quick benchmarks.  Here’s a benchmark of one of my NFS shares:

FreeNAS NFS Hard Drive with SLOG

And here’s one of my iSCSI shares:

FreeNAS iSCSI Hard Drive with SLOG


I’ll have a lot more information on this in my next post.  We’ll dive into benchmarking each of the configurations and make a variety of changes to improve performance.  It should make for an interesting read for the truly nerdy out there!

Hyperion EPM Week In Review: September 20, 2016

Last week was clearly the calm before the Oracle OpenWorld storm!  This week in Hyperion EPM has been really, really busy!
Hyperion EPM Week In Review

Hyperion EPM Patches and Updates:

Hyperion DRM is now available.  It appears to be a few bug fixes.  Hopefully your bug was fixed!

Hyperion EPM Blog Posts:

Rodrigo continues his series on ODI 12c.  This part of the series covers loading dimensions and cubes using natural keys with the promise of a future post on surrogate keys.

Sibin had four posts this week.  First he covers how to delete a Planning application manually.  Next he covers a very specific drill-through issue with FDMEE.  Sticking with FDMEE, he moves on to updating the period tables.  Finally he continues on with FDMEE and shows off the functionality of drill-through in Planning.

Celvin had two posts this week between a long flight and a meeting that doesn’t interest him.  First he has an update to his NUMSysCmdLauncher utility that includes some new filter functionality along with a rewrite in Groovy.  Next up he has an intro to JSON for the REST API in PBCS.  Interesting read…as always.

Dayalan has part two of his series covering Essbase Web Services.  Looks complicated!

Jason had three posts this week.  First he covers simple drill-through in Dodeca.  Next he tells us that he is speaking at OOW (good luck Jason!).  Finally he has a new version of his Camshaft tool for executing MDX queries.

Eric shows off Oracle’s new online ordering for on-prem software.  Right now you can only purchase Essbase and a few of the reporting pieces, so no real EPM software…yet.

I have a post this week showing off my skills (or lack thereof) at recovering Oracle DB’s after an unexpected failure.

Oracle OpenWorld Coverage

OpenWorld is this week, in case you missed the massive amount of coverage.  I’ve recapped all of the blogging here by person.  If I get time later, I’ll consolidate this all into one big OOW blog post after it ends!




Another quick shout out for the ODTUG GeekAThon.  Only four (4) days left to complete your entries!!!

You must submit your video entry by September 23, 2016, 11:59 PM Pacific Time.

You must submit your solution document by September 23, 2016, 11:59 PM Pacific Time.

And for those of you who need a visual to assist in your procrastination, the red are the days past as of this post.  The green is the day you should be ready for, but will likely need to work right up until 11:59 to meet. 😉



One last thing…they have officially announced prizes.  Now go finish building something awesome!

Recovering an Oracle Database from Unexpected Failure

Oracle Database Introduction

I’ve been a user, developer, and administrator of SQL Server databases many times over the years.  I’m really comfortable with that product and how to fix things when they break.  With Oracle Database, on the other hand, I’ve only ever really been a consumer of information…someone who writes queries against the data.  As I’ve worked to expand my horizons in my lab, Oracle Databases are one of the things that I found very interesting.

One thing about Oracle Databases that I found surprising is that they do not handle unexpected shutdowns well at all.  So, to help those that have an Oracle Database server but really don’t know that much about it (like me!), I’ve compiled a list of things that have helped me get my system back up and running after an unexpected shutdown.

Getting Connected in SQL*Plus

First, if you have more than one Oracle SID on your server, you need to set ORACLE_SID to the SID you want to work on.  This should be simple, unless you are like me and included a space around the equal sign.  So, this is the command I use on Windows to set my SID correctly (again…note that there is NO SPACE, and substitute your own SID for orcl):

set ORACLE_SID=orcl
Once you have set your SID, you can then fire up sqlplus without logging in:

sqlplus /nolog

Now we can go ahead and log in as the SYSDBA:

connect / AS SYSDBA

I suggest verifying that you set your SID correctly after you login.  This saved me a lot of time, once I figured out that I was just connected to the wrong SID:

select name from v$database;

If we’ve done everything right, we should get something along these lines:

Oracle Database Verify SID

If you don’t see the right SID, then check to make sure that you don’t have any spaces in your SET command.

Checking The Status

Now that we are sure that we are connected to the correct SID, let’s check to see how things look.  We’ll use this query:

select status, database_status from v$instance;

And here’s the result:

Oracle Database Check Status

OPEN and ACTIVE indicates that your database is good to go.  If you see something else, you may need to mount and/or open the database.  To mount the database, try this:

alter database mount;

If that works, you can move on to opening the database using this:

alter database open;

If you can’t mount and open the database, there’s a good chance that you will need to recover the database:

recover database;
This has only worked for me a few times.  If that fails, I generally go pull a daily backup of my Oracle DB VM and restore it.  If you can’t even make it this far because you haven’t been able to connect and you see something along the lines of Connected to an idle instance, try first shutting down your database:

shutdown abort

Once that completes, fire it back up:

startup
Now you can go back to the beginning of this section and verify that everything looks good.

What About Container Databases?

Now that Oracle DB supports multi-tenant, you have the idea of container databases (CDB) and pluggable databases (PDB).  First, you can use everything from the above sections to connect to your CDB and get it back up and running.  This means that CDB’s are relatively straightforward.

What About Pluggable Databases?

You may be wondering why I even bother using CDB’s and PDB’s.  Jake even makes it a point to suggest using a plain old Oracle Database in his post.  Well…I did that and it works great.  But then I wanted some sample data, which of course comes in the form of a PDB!  Getting that to work was an adventure, but when it stopped working, I happened across a great Oracle resource related to multi-tenant databases:

Performing Basic Tasks in Oracle Multitenant

This works great for me, given how basic I am.  Once you get connected to your CDB, you can quickly get a list of the PDB’s that you might want to connect to:

select name, open_mode from v$pdbs;

And now we can actually connect:

connect sys/oracle@ AS SYSDBA

Basically this tells it to connect to the IP of my Oracle DB and to the pdborcl PDB.  After connecting, you can perform operations like opening a database that is mounted:

alter pluggable database pdborcl open;


There it is…everything I know (yes…not much) about recovering an Oracle DB from an unexpected failure (like a really long power outage and your UPS not lasting long enough).  If this post helps just one Oracle DB newbie like me, it will have been worth it!

Hyperion EPM Week In Review: September 14, 2016

Welcome to another later edition of the Hyperion EPM Week In Review.  I actually waited a little later this week as it was a really, really slow week.  No PBCS updates, no FCCS updates, no on-prem updates.
Hyperion EPM Week In Review

Hyperion EPM Patches and Updates:

While this isn’t a patch or an update, the new Profitability and Cost Management Cloud Service has popped up on the Oracle Cloud website.  There is a video, some screenshots, and no pricing information.  This looks promising…

Hyperion EPM Blog Posts:

Celvin dug into the new 16.09 patch for PBCS and discovered a major change to the backup process.  He also shows off some new Smart View functionality.

Glenn has a quick post highlighting the differences in security for PBCS (and Planning) and FCCS.  Basically, all of the dimensions are secured by default.

Dayalan gives us a peek behind the Essbase covers while taking a look at the Essbase Web Services.  Pretty geeky stuff…I love it!

Sibin had not one, not two, but three posts this week.  First he takes a look at exporting data from PBCS using data management.  Next up, he covers the PBCS REST API’s and understanding Oracle’s idea of an error.  Finally, he shows us what happens in the Planning repository when we add a new data source for Planning.

Harry has an update to his cubeSavvy tool.  This update brings relational database queries!  Sounds like an exciting way to enable drilling from one source to another.

Like I said…a slow week.


Another quick shout out for the ODTUG GeekAThon.  For those of us real geeks out there, they have a competition worthy of the maker community.  You can find the announcement here.  You can find the competition website here.  And a quick re-cap:

GeekAThon Announcement

GeekAThon Competition Website

A few important dates for those of you interested:

You must register by August 15, 2016, at 11:59 PM Pacific Time.

You must submit your video entry by September 23, 2016, 11:59 PM Pacific Time.

You must submit your solution document by September 23, 2016, 11:59 PM Pacific Time.

And for those of you who need a visual to assist in your procrastination, the red are the days past as of this post.  The green is the day you should be ready for, but will likely need to work right up until 11:59 to meet. 😉



One last thing…they have officially announced prizes.  Now go build something awesome!

Hyperion EPM Week In Review: September 8, 2016

Welcome to a really late Hyperion week in review…but hey, there was a holiday.  This week we have a broken Essbase patch along with a lot of great blogging content.
Hyperion EPM Week In Review

Hyperion EPM Patches and Updates:

First off, Essbase has been released.  Sadly, if you use encrypted MaxL commands…they broke this.  So stay away from this patch if you require that functionality!

Hyperion EPM Blog Posts:

Eric helps everyone get ready for Oracle Open World which is coming up in just over a week.  He has a great summary of all of the Hyperion EPM and Oracle BI related sessions that you might find interesting.

Cameron has a great post from his guest blogger Igor Slutskiy.  He covers using Cell Text data from Planning in OBIEE.  This involves some relational wizardry including use of my favorite relational database:  The Planning Repository.  Cameron also has his own post about a Smart View survey.  Do you use it?! He also has a post about his (and Tim’s) meetup at Oracle Open World.  I’m a regular attendee at his (and Natalie’s) Kscope meetup, but I won’t be at OOW, so have some fun for me.

Sibin had a busy week with a pair of posts.  First he talked about the need for Identity Domains.  Next he has a cool new utility to parse Planning (and PBCS) meta-data.  For those of us missing EAS in the new Cloud world, this could be useful.

Gary walks us through his complex Studio Drill-Through implementation.  Very interesting.  I still prefer DrillBridge, but that’s just me. 😉  He also has a PSA regarding new Smart View installations.  I’ve not seen this problem happen yet, but definitely worth a read so that if/when it happens to you, you know how to fix it.

Glenn takes us through FCCS from an Essbase perspective.  I found this interesting and as an Essbase and Planning guy, I find FCCS interesting given its foundation.

Celvin also has a pair of posts this week.  He starts with the mother of all CDF’s.  I haven’t tried this yet, but I’m excited to!  Next up he takes a look at finding all of your formulas buried in your financial reports.

Opal has a pair of quick tips for the cloud.  First she shows us how to fix the fun and exciting connection error problem when you are going from one cloud pod to another.  Next she walks us through connecting to FR in the Cloud using the installed FR Studio.

Kyle shares some extremely useful PowerShell scripts for splitting data files.  I’m a huge PowerShell fan, so I really enjoy content like this.

Harry has a follow-up to his prior post on pulling Essbase data in Power Query.  This reminds me of my Analysis Services days writing applications to pull multi-dimensional data with XML/A.  It sucks for MSAS too Harry, it’s not just an Essbase problem!

Wayne takes us on a tour of BICS Deliveries.  For the EPM people out there…think of it like FR Batches, but for BI!


Another quick shout out for the ODTUG GeekAThon.  For those of us real geeks out there, they have a competition worthy of the maker community.  You can find the announcement here.  You can find the competition website here.  And a quick re-cap:

GeekAThon Announcement

GeekAThon Competition Website

A few important dates for those of you interested:

You must register by August 15, 2016, at 11:59 PM Pacific Time.

You must submit your video entry by September 23, 2016, 11:59 PM Pacific Time.

You must submit your solution document by September 23, 2016, 11:59 PM Pacific Time.

And for those of you who need a visual to assist in your procrastination, the red are the days past as of this post.  The green is the day you should be ready for, but will likely need to work right up until 11:59 to meet. 😉


One last thing…they have officially announced prizes.  Now go build something awesome!