The EPM Week In Review: Week Ending June 25, 2016

It seems like everyone has finished their Kscope presentations but me…as this was the busiest week I’ve encountered since I started doing my week in review.

Patches and Updates:

On-Prem:

The Planning 11.1.2.4.005 patch has been released.  This includes a laundry list of bug fixes.  We’ve waited a while for this one…

The FR 11.1.2.4.702 patch has been released.  When I looked at the readme, I only saw the 701 notes…so I’m not entirely sure what this fixed.

Cloud:

As of July 1st, you should see the updates coming for PBCS, EPBCS, and FCCS in your test environments.  This is the giant PBCS update we’ve all been waiting for, adding literally thirty (yes, 30!) pages’ worth of new functionality and fixes to both PBCS and EPBCS.  Check out the official PDFs here:

PBCS July Update

EPBCS July Update

FCCS July Update

New Blog Posts:

This week I started my series on building your own NAS server with FreeNAS.  I also mentioned my upcoming presentations at Kscope.

Gary has published the first release of his new SV++ utility.  This is an entirely new interface for the Smart View plug-in for Excel.  I’m pretty impressed.  I have dozens of clients that will find this very, very useful.  Think of this as the new generation of the In2Hyperion add-in (thanks Kyle!).  We do need to talk to Gary about his marketing, as he didn’t even post a screenshot of the tool.  So I’ll help him out a little:

GaryAddin

The DEVEPM team is on their way to Kscope from a combination of Brazil and Ireland.  I wish I was going to be there longer this year so that I could try to meet up with everyone.  Perhaps I should organize an EPM bloggers lunch on Tuesday.

Keith takes us through the differences in rounding using FDM Classic with VB and FDMEE with Jython.  For all you coding nerds…good stuff.  For everyone in general, great information.

Sibin, whose Exploits in Hyperion blog was just added to the EPM Blog page, has two posts this week.  First he covers an issue with WebLogic that looks like loads of fun.  Next he shows us a bug in PBCS that he discovered.  Hopefully this gets fixed…in the August push?  Or is this Smart View?  Good luck Sibin!

Speaking of Kyle, he has an entire blog post devoted to Jake Turrell.  We do need to work on his spelling of Jake’s last name…and maybe I need to get Kyle to drive traffic to my blog?  Kyle?  Mind sending some people my way? 😉  But in all seriousness, you should definitely check out Jake’s presentations.  He does an amazing job and I can guarantee that you will learn something useful.

Jason is doing his best Opal impression this week with a blog-a-day leading up to Kscope.  He is traveling today, so we’ll give him a pass for missing Friday.  Here’s a recap of the cool stuff he covered this week:

Multiple Retrieves in Dodeca

Cascading Report Summary Sheets in Dodeca

PBJ PBCS Client GitHub repository

Advanced integration with PBJ Java PBCS REST API library

Cascading Reports with Dodeca

Kscope16 is almost here!

Tim is getting ready for Kscope as well.  Another person I should probably at least try to say hello to next week.  He gives us a few of his selections and lets us know when he is presenting.

Francisco has a ridiculously detailed post on loading cell text into HFM from flat files.  Like…all the steps…ever.  Impressive stuff.

Celvin shows us how to get Text IDs from PBCS.  This is way easier on-prem, but handy info with so many new projects going to the cloud.

Sarah shows us how to install Oracle’s REST Data Services (ORDS) in Standalone mode.  She too is getting ready for Kscope.  Based on her post, we can assume she REALLY likes Kscope.  If you are a first-timer to the conference, this is a definite must-read.  The list of people I need to say hello to at Kscope continues to grow out of control.

Robert has a very detailed guide on using Essbase calc scripts with FDMEE.  He even shows us how to use parameters with those calc scripts.  I’m sure I’ll be referencing this later down the line.

Vijay has a follow-up to his earlier post on Groovy and REST.  This time he shows us how ODI can use our Groovy code.  Very cool.

Opal again had a busy week.  Perhaps not as blog-happy as Jason, but she has some great news:

Opal’s book on EPRCS has been released!  Congrats Opal!

FCCS First Look:  Creating an Application

EPBCS Series:  Navigation Flows

Other News:

Can there really be any news left?  I expect next week to be full of “I’m at Kscope” blogs.  So brace yourself for another long blog post.  Everyone travel safe to Chicago and I hope to see you all there!


Brian @ Kscope16

It’s that time of year again…Kscope!  Unfortunately, Kscope always occurs during my busy season.  As a result, much like last year, this will be a very quick in and out trip.  I’ll be there Tuesday, June 28th only.  Luckily, both of my presentations happen to be on that day, so it worked out nicely for me.  I also have a few meetings with Oracle that I’m not allowed to talk about that occur on that day as well.  Here are my presentations:

PBCS is Still Hyperion Planning

Jun 28, 2016, Session 9, 11:15 am – 12:15 pm

With Oracle’s release of PBCS, many service companies have started releasing their one-size-fits-all implementations. The unfortunate truth is that PBCS is still Hyperion Planning. This presentation discusses the best practices around implementing PBCS and how to avoid the pitfalls of implementations offered at insanely cheap (and underestimated) pricing. Attend this session if you don’t want to have your PBCS project “land and expand.”

The Planning Repository Revisited: Beyond the Basics

Jun 28, 2016, Session 10, 2:00 pm – 3:00 pm

If you’ve enjoyed my past presentations on the Planning repository, you should enjoy this presentation even more. We’ll take a step beyond the basics and provide a whole new set of examples that take a leap into real-world use. Whether it’s synchronizing metadata across applications or deleting dimensions, this presentation will dive deeper than ever before into the repository. But wait…there’s more. This presentation will have full samples in both Transact SQL for SQL Server users and PL/SQL for Oracle users. That’s two languages for the price of one (shipping and handling not included).

And of course, you can find me on the Kscope16 site here.

I hope everyone enjoys the content!


My First FreeNAS Build

As my lab has grown over the years, so have my storage needs.  Today I have an array dedicated to backing up files.  That array consists of twelve (12) 2TB drives configured in RAID 6.  This gives me roughly 18.1 TB of storage.  That sounds like a ton of storage.  Unfortunately, I’ve run out of space.  I’ve even started saving fewer backups of some of my VMs to conserve space.  At one point, I had less than 1TB of space remaining.
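A quick aside on that 18.1 TB figure: it is mostly a units artifact.  Here is a back-of-the-envelope sketch (assuming RAID 6’s two-drive parity overhead and ignoring filesystem overhead; the numbers are illustrative, not a claim about any particular controller’s accounting):

```python
# Rough usable-capacity math for a 12 x 2TB RAID 6 backup array.
TB = 10**12        # drives are sold in decimal terabytes
TiB = 2**40        # operating systems report binary tebibytes

drives = 12
drive_size_tb = 2
parity_drives = 2  # RAID 6 gives up two drives' worth of capacity

raw_bytes = drives * drive_size_tb * TB
usable_bytes = (drives - parity_drives) * drive_size_tb * TB

print(f"Raw:    {raw_bytes / TB:.0f} TB")
print(f"Usable: {usable_bytes / TB:.0f} TB = {usable_bytes / TiB:.1f} TiB")
# 20 TB of usable space on paper shows up as ~18.2 TiB in the OS,
# which is where a "roughly 18.1 TB" number comes from once a bit of
# filesystem overhead is subtracted.
```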

Obviously, anywhere near 10% free space really isn’t acceptable anyway.  This presents a problem…my current file server is a virtual machine on my original ESXi box.  This server is completely full of drives.  Additionally, I think I’m ready to graduate to a real NAS (network attached storage) system.

FreeNASLogo

Enter FreeNAS.  FreeNAS is an open-source operating system designed for network attached storage servers.  At its core, it is built on FreeBSD, with all of the storage handled by something called ZFS.  ZFS is another open-source product, this time a file system.  So instead of the FAT or NTFS that we see in Windows, ZFS is an enterprise file system focused on ensuring data integrity.

On top of ZFS, FreeNAS has an excellent GUI with a variety of additional features that make it an attractive NAS.  It has built-in file-sharing features like SMB/CIFS, NFS, FTP, iSCSI, and others.  It has full support for ZFS snapshots (think virtual machine snapshots, but for your file system), replication, and encryption.  It also has plugins!  Media servers, private cloud services, and plenty of other cool things that run in something FreeBSD calls jails.  Basically, each plugin is walled off from the rest of your server.

And with that, let’s move on to the hardware.  To determine what hardware I needed, I first took a look at what I need my NAS to do.  First and foremost, I need a place to back everything up.  So I need at least one array of big traditional disks for that purpose.  A good rule of thumb for me has always been to upgrade to at least twice the amount of space you have now.  Since I have 20TB of raw storage, I need at least 40TB of raw storage for this purpose.

Second, I have a series going on Essbase performance.  While all of the local storage benchmarks will be very interesting, many (if not most) companies are using network storage for their virtualized environments.  Raw performance here matters less; I just need enough to do high-performance network storage testing.  So I need some type of SSD-based drive or array for this purpose.

Finally, I would like to have a network-based datastore for my VMware cluster.  This needs to be somewhere between the first two.  It needs speed, but also a lot of space.  This is another area that FreeNAS can help.  FreeNAS with ZFS uses RAM to provide a read cache.  On top of this, you can plug in a second level of read cache and a second level of write cache in the form of SSD’s.  This will give you performance similar to SSD for many activities against your larger data store.  This is similar to the tiered storage that is available on many enterprise SAN’s.

This also gives us another way to test Essbase performance.  Specifically, we can test how well the write cache works with an Essbase cube.  Because the write cache only stages synchronous writes, we’ll get to see how well that works with an Essbase database compared to other types of databases that generally work quite well with this setup.

Back to the rest of our hardware…we definitely need a lot of RAM.  Clearly, FreeNAS and ZFS are going to eat up quite a bit of CPU, especially if I decide to use any of the plugins.  And of course, this is a network attached storage server, so we need some serious network connectivity.  Gigabit just won’t do.  So what did I decide on?  Let’s take a look:

  • Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
  • Motherboard: Supermicro X9DR7-LN4F-JBOD
  • Memory: 256 GB - (16) Samsung 16 GB ECC Registered DDR3 @ 1600 MHz
  • Chassis: Supermicro CSE-846TQ-R900B
  • Chassis: Supermicro CSE-847E16-RJBOD1
  • HBA: (1) Supermicro AOC-2308-l8e
  • HBA: (1) LSI 9200-8e
  • NVMe: Intel P3600 1.6TB NVMe SSD
  • Solid State Storage: (2) Intel S3700 200GB SSD
  • Hard Drive Storage: (9) HGST Ultrastar 7K3000 2TB Hard Drives
  • Hard Drive Storage: (17) HGST 3TB 7K4000 Hard Drives
  • Network Adapter: (2) Intel X520-DA2 Dual Port 10 Gbps Network Adapters

If you happened to read my series on building a home lab, you might recognize some of the parts.  I stuck with the E5-2670 as they are even cheaper now than ever before.  I did have to move away from the ASRock motherboard to a Supermicro board.  This board has a built-in SAS2 controller, six (6) PCIe slots, and sixteen (16) DIMM slots.  I’m going with 256GB of DDR3 RAM, which should support our plugins, our primary caches, and the secondary caches nicely.  I’ve also purchased a pair of Intel X520-DA2 network cards to provide four (4) 10Gb ports.

I added a pair of matching LSI 2308-based controllers to the onboard controller to give me 24 ports of SAS2.  This fits nicely with my Supermicro 846TQ, which has 24 hot-swap bays and a redundant power supply.  And that power supply is connected to a 1500VA UPS so that we can ensure that during a power outage, our data remains intact.  FreeNAS again helps us out with built-in UPS integration.


So now that we’ve talked about the server a fair amount, what about the actual storage for the server?  I’ll start by setting up a single-disk array with the 1.6TB NVMe SSD.  This should provide enough speed to max out a 10Gb connection for many of my Essbase-related tests.

zpool3

I’ll also be setting up an 8-disk striped set of mirrored 2TB drives.  This is equivalent to RAID 10 and should provide the best mix of performance and redundancy.  I’ll have a ninth drive in there as a hot spare should one of the drives fail.  In addition, this is the easiest array to actually expand in ZFS.

zpool4

I also have a pair of Intel S3700 200GB SSD’s to use as an L2ARC (second level read cache) and/or ZIL/SLOG (write cache).  We’ll be testing Essbase performance in three different configurations:  just the hard drives, the hard drives with the write cache, and the hard drives with the write cache and the second level read cache.  These configurations will closely resemble many of the SAN’s that my clients deal with on a daily basis.

The final piece of the storage component of the new NAS serves as the backup device for the network.  I’ll be setting up a 10-disk RAID-Z2 array with 5TB, 6TB, or 8TB drives.  This is basically the ZFS version of RAID 6, which will provide me with 40TB, 48TB, or 64TB of storage.  Here’s an example of what this will look like:

zpool1
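Putting numbers to the pools described above is simple arithmetic.  This is a back-of-the-envelope sketch; actual ZFS usable space will come in somewhat lower once metadata, slop space, and the TB-versus-TiB reporting gap are accounted for:

```python
# Hedged capacity sketch for the planned zpool layouts.

def striped_mirrors(pairs, drive_tb):
    # Equivalent to RAID 10: half the drives hold mirror copies.
    return pairs * drive_tb

def raidz2(drives, drive_tb):
    # Equivalent to RAID 6: two drives' worth of parity per vdev.
    return (drives - 2) * drive_tb

# VM datastore: 8 x 2TB as four mirrored pairs (plus a hot spare).
print("Datastore:", striped_mirrors(4, 2), "TB")

# Backup pool: 10-disk RAID-Z2 with each candidate drive size.
for size in (5, 6, 8):
    print(f"Backup ({size}TB drives):", raidz2(10, size), "TB")
# -> 40, 48, or 64 TB, matching the three drive options above.
```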

Now that we’ve covered storage, we can talk about how everything is going to be connected.  My lab setup has three ESXi hosts and an X1052 switch.  The switch has 48 ports of 1Gb ethernet, but only four (4) ports of 10Gb ethernet.  Four ports, four servers!  But I really would like to have 10Gb between all of my servers AND 10Gb for my network-based data stores.  This is why we have two X520-DA2 cards.  This will allow me to connect one port to the switch so that everything is on the 10Gb network and also allow each server to connect directly to the FreeNAS server without a switch.

This means that each server will need two 10Gb ports as well.  Two of the servers will have X520-DA2 network cards.  One port will connect to the switch, the other will connect to the FreeNAS server directly.  The last server will actually have two X520-DA1 network cards.  This allows me to test the difference between passing through the X520-DA1 using VT-d and using the built-in network functionality.  This will be similar to the testing of passthrough storage and data stores.

The hardware has already started to arrive and I’ve begun assembling the new server when I have free time.  I’ll try to actually document this build for the next post before we get into the actual software side of things.  Until then…time to stop procrastinating on my final Kscope preparations.

Oh, and here is my final build list of everything I have ordered or will be ordering to complete the system:

  • SuperChassis 846TQ-R900B
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • (10) 5TB Toshiba X300 Drives, 6TB HGST NAS Drives, or 8TB WD Red Drives
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2
  • CyberPower 1500VA UPS

The EPM Week In Review: Week Ending June 18, 2016

This week was again on the slower side.  Opal tried to make up for it all on her own…

Patches and Updates:

Notta…

New Blog Posts:

This week I continued my Essbase performance series with another set of baseline benchmarks.  After Kscope I’ll be able to dedicate a lot more time to continuing this series.  I also created a new page dedicated to providing a list of bloggers and blogs in the EPM community.

Gary shows us around the Essbase 12 docs.  An interesting read, though I’m not sure I agree with all of his conclusions.

Jason talks about spreadsheet management.  He has some great thoughts on how spreadsheets themselves are not necessarily the problem, but rather how we manage those spreadsheets.  Planning helps in one way while Dodeca helps in another.

Dmitry has perhaps the longest blog post of all time.  While it may be long, it is definitely worth the read as he describes how to use Essbase with a statistical application named R.  Very, very interesting stuff.

Dayalan has an in-depth article on installing 11.1.2.4 on Windows 2012.  He covers Planning, Essbase, R&A, and FDMEE.  I get the impression he might be covering DRM next.

John Goodwin gives us part three of his DRM and FDMEE series.  I know a lot of us have met him, but are we sure he is just one guy?  Are there John Goodwin clones running around learning things for him?  Seriously…

Cameron shares his “must see” sessions at Kscope16.  One day I’ll make the list…oh who am I kidding.

Harry has yet another beta release of his web-based Essbase interface.  This thing keeps getting cooler and cooler.  This time he has added ad-hoc grid creation and modification for end users.

And now on to Opal, who decided to do a seven-day series on EPBCS.  That’s seven blog posts in seven days…nice job Opal.  Great content…and lots of it!

Other News:

Not that you could, but don’t forget about Kscope16!  This time next week the EPM world will descend on Chicago.  Watch out Chicago!

Happy Father’s Day to all you dads out there!


Essbase Performance Series: Part 3 – Local Storage Baseline (Anvil)

Welcome to part three of a series that will have a lot of parts.  In part two, we took a look at our test results using the CrystalDiskMark synthetic benchmark.  Today we’ll be looking at the test results using a synthetic benchmark tool named Anvil.  As we said in part two, the idea here is to see first how everything measures up in synthetic benchmarks before we get into the actual benchmarking in Essbase.

Also as discussed in part two, we have three basic local storage options:

  • Direct-attached physical storage on a physical server running our operating system and Essbase directly
  • Direct-attached physical storage on a virtual server using VT-d technology to pass the storage through directly to the guest operating system as though it was physically connected
  • Direct-attached physical storage on a virtual server using the storage as a data store for the virtual host

As we continue with today’s baseline, we still have the following direct-attached physical storage on the bench for testing:

  • One (1) Samsung 850 EVO SSD (250GB)
    • Attached to an LSI 9210-8i flashed to IT Mode with the P20 firmware
    • Windows Driver P20
  • Four (4) Samsung 850 EVO SSD’s (250GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured in RAID 0 with a 256kb strip size
  • One (1) Intel 750 NVMe SSD (400GB)
    • Attached to a PCIe 3.0 8x Slot
    • Firmware 8EV10174
    • Windows Driver 1.5.0.1002
    • ESXi Driver 1.0e.1.1-1OEM.550.0.0.139187
  • Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDD (300GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured three ways:
      • RAID 1 with a 256kb strip size
      • RAID 10 with a 256kb strip size
      • RAID 5 with a 256kb strip size

So what benchmarks were used?

  • CrystalDiskMark 5.0.2 (see part two)
  • Anvil’s Storage Utilities 1.1.0.337

And the good stuff that you skipped to anyway…benchmarks!

Now that we’ve looked at CrystalDiskMark, we’ll take a look at the Anvil results.  While Anvil reports reads and writes in megabytes per second, we’ll instead focus on Inputs/Outputs per Second (IOPS).  Here we see that the Intel 750 is keeping pace nicely with the RAID 0 SSD array.  In this particular test, even our traditional drives don’t look terrible.

Anvil Seq Read
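IOPS and MB/s are really two views of the same measurement, related by the transfer size of each I/O.  A quick sketch of the conversion (the numbers below are made up for illustration, not the charted results):

```python
# Throughput and IOPS are linked by block size: MB/s = IOPS * block size.
# Sequential tests use large blocks, so modest IOPS can still mean big
# bandwidth; 4K random tests are the opposite.

def throughput_mb_s(iops, block_kb):
    return iops * block_kb / 1024  # KB -> MB

# Hypothetical numbers for illustration only:
print(throughput_mb_s(4000, 128))  # 4,000 IOPS at 128KB -> 500 MB/s
print(throughput_mb_s(10000, 4))   # 10,000 IOPS at 4KB -> ~39 MB/s
```

This is why a drive can top the sequential MB/s charts and still land mid-pack on the random IOPS charts.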

Next up we’ll look at the random IOPS performance.  So much for our traditional drives.  Here we really see the power of SSDs versus old-school technology.  It is interesting that all three solutions hover pretty closely together, but this is likely a queue depth issue.

Anvil 4K Read

Let’s see how things look with a queue depth of four.  Things are still pretty clustered here, but much higher across the board.

Anvil 4K QD4 Read

And now for a queue depth of 16.  Now this looks better.  The Intel 750 has, for the most part, easily outpaced the rest of the options.  The RAID 0 SSD array looks pretty good here as well.

Anvil 4K QD16 Read

That’s it for the read tests.  Next we move on to the write tests.  We’ll again start with the sequential writes.  Before we expand our queue depths, the RAID 0 SSD array is looking like the clear winner.

Anvil Seq Write

Our random write test seems to follow closely to our random read test.  The Intel 750 is well in the lead with the other SSD options trailing behind.  Also of interest, the Intel 750 seems to struggle when physically attached and as a data store in these tests. We’ll see if this continues.

Anvil 4K Write

When the queue depth increases to four, we see the Intel 750 continue to hold its lead.  The RAID 0 SSD array is still trailing the regular single SSD.  As with the previous random test, the Intel 750 continues to struggle, though the physical test has improved.
Anvil 4K QD4 Write


Finally, we’ll check out the queue depth at 16.  It looks like our physical Intel 750 has finally caught up to the passthrough.  This feels like an odd benchmark result, so we’ll see how this looks in real Essbase performance.  We also finally see that the RAID 0 SSD array has pulled ahead of the single drive by a large margin.

Anvil 4K QD16 Write

Next up…we’ll start taking a look at the actual Essbase performance for all of these hardware choices. That post is a few weeks away with Kscope rapidly approaching.


The EPM Week In Review: Week Ending June 11, 2016

This was a really, really slow week for blogging.  It’s almost like something is happening that is occupying everyone’s time.  Drop me a line if you know what’s going on.

Patches and Updates:

I mentioned last week that Essbase 11.1.2.4.010 had been released.  It finally made its way to the Proactive Support Blog.

HPCM 11.1.2.4.121 has been released.  There appears to be a massive amount of new features, including a new REST API.  Check out the readme.

Calc Manager 11.1.2.4.006 has been released.  This appears to be general bug-fixes.  Hopefully it fixes your bug!  Check out the readme.

I’ve heard that the Planning 11.1.2.4.005 patch was due out in the next week or so…until they delayed it to add one more bug fix.  That bug fix should only take a week or so to add according to my source, so we might get this patch prior to Kscope…we’ll just have to wait and see.

New Blog Posts:

This week (5 minutes ago), I posted part 2 of my Essbase performance series.

Cameron is a little survey-happy in advance of Kscope.  He has a blog about the future of Planning (PBCS) and this survey.  Be sure to take the survey!

Chris has a blog post showing off the new attribute functionality that was added to PBCS in the June July and then June again release.  I’ve heard the 06 patch may be the 07 patch…but for those whose pods were purchased after the patch was actually ready.  We’ll see what changes in 07…

Dayalan talks about the Outline Load Utility.  He has a ton of code samples related to SQL with the utility.  Good stuff.

Harry has yet another beta release of his web-based cubeSavvy UI.  I wish Oracle could release updates this fast.  This week he brings us the ability to execute calc scripts when you submit data!

John Goodwin has part 2 of his FDMEE and DRM integration series.  I know I say this every time he has a blog post, but…good stuff.  Posts so detailed a caveman can do FDMEE and DRM integration.

Other News:

There’s a minor get-together of Oracle professionals in Chicago in a few weeks.  Can’t wait to see everyone…and I mean everyone will be there.  Even though its minor.


Essbase Performance Series: Part 2 – Local Storage Baseline (CDM)

Welcome to part two of a series that will have a lot of parts.  In our introduction post, we covered what we plan to do in this series at a high level.  In this post, we’ll get a look at some synthetic benchmarks for our various local storage options.  The idea here is to see first how everything measures up in benchmarks before we get into the actual benchmarking in Essbase.

As we discussed in our introduction, we have three basic local storage options:

  • Direct-attached physical storage on a physical server running our operating system and Essbase directly
  • Direct-attached physical storage on a virtual server using VT-d technology to pass the storage through directly to the guest operating system as though it was physically connected
  • Direct-attached physical storage on a virtual server using the storage as a data store for the virtual host

For the purposes of today’s baseline, we have the following direct-attached physical storage on the bench for testing:

  • One (1) Samsung 850 EVO SSD (250GB)
    • Attached to an LSI 9210-8i flashed to IT Mode with the P20 firmware
    • Windows Driver P20
  • Four (4) Samsung 850 EVO SSD’s (250GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured in RAID 0 with a 256kb strip size
  • One (1) Intel 750 NVMe SSD (400GB)
    • Attached to a PCIe 3.0 8x Slot
    • Firmware 8EV10174
    • Windows Driver 1.5.0.1002
    • ESXi Driver 1.0e.1.1-1OEM.550.0.0.139187
  • Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDD (300GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured three ways:
      • RAID 1 with a 256kb strip size
      • RAID 10 with a 256kb strip size
      • RAID 5 with a 256kb strip size

So what benchmarks were used?

  • CrystalDiskMark 5.0.2
  • Anvil’s Storage Utilities 1.1.0.337

And the good stuff that you skipped to anyway…benchmarks!

We’ll start by looking at CrystalDiskMark results.   The first result is a sequential file transfer with a queue depth of 32 and a single thread.  There are two interesting results here.  First, our RAID 10 array in passthrough is very slow for some reason.  Similarly, the Intel 750 is also slow in passthrough.  I’ve not yet been able to determine why this is, but we’ll see how it performs in the real world before we get too concerned.  Obviously the NVMe solution wins overall with our RAID 0 SSD finishing closely behind.

CDM Seq Q32T1 Read

Next we’ll look at a normal sequential file transfer.  We’ll see here that all of our options struggle with a lower queue depth.  Some more than others.  Clearly the traditional hard drives are struggling along with the Intel 750.  The other SSD options however are much closer in performance.  The SSD RAID 0 array is actually the fastest option with these settings.

CDM Seq Read

Next up is a random file transfer with a queue depth of 32 and a single thread.  As you can see, on the random side of things the traditional hard drives, even in RAID, struggle.  Actually, struggling would probably be a huge improvement over what they actually do.  The Intel 750 takes the lead for the physical server, but it actually gets overtaken by the RAID 0 SSD array for both of our virtualized tests.

CDM 4K Q32T1 Read

Our final read option is a normal random transfer. Obviously everything struggles here.  A big part of this is just not having enough queue depth to take advantage of the potential of the storage options.

CDM 4K Read
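The queue-depth effect follows from Little’s Law: achievable IOPS is roughly the number of outstanding I/Os divided by per-I/O latency, so at a queue depth of one even the fastest flash is capped by the round-trip time of a single request.  A hedged sketch with illustrative latencies (not measured values from this series):

```python
# Little's Law applied to storage: IOPS ~= queue_depth / latency,
# up to the device's internal parallelism. Spinning disks gain far
# less from deeper queues because seeks largely serialize anyway.

def max_iops(queue_depth, latency_us):
    return queue_depth / (latency_us / 1_000_000)

# Assumed latencies: ~100us for an SSD, ~4ms for a 15K RPM drive.
for qd in (1, 4, 32):
    print(f"QD{qd}: SSD ~{max_iops(qd, 100):,.0f} IOPS, "
          f"15K HDD ~{max_iops(qd, 4000):,.0f} IOPS")
```

Real devices saturate before the formula does, but it shows why the QD32 charts spread the field out so much more than the QD1 charts.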

Next we will take a look at the CrystalDiskMark write tests.  As with the reads, we’ll start with a sequential file transfer using a queue depth of 32 and a single thread.  Here we see that the RAID 0 SSD array takes a commanding lead.  The Intel 750 is still plenty fast, and then the single SSD rounds out the top three.  Meanwhile, the traditional disks are still there…spinning.

CDM Seq Q32T1 Write

Let’s look at a normal sequential file transfer.  For writes, our traditional drives clearly prefer lower queue depths.  This can be good or bad for Essbase, so we’ll see how things look when we get to the real-world benchmarks.  In general, our top three mostly hold with the RAID 0 traditional array pulling into third in some instances.

CDM Seq Write

On to the random writes.  We’ll start off the random writes with a queue depth of 32 and a single thread.  As with all random operations, the traditional disks get hammered.  Meanwhile, the Intel 750 has caught back up to the RAID 0 SSD array, but is still back in second place.

CDM 4K Q32T1 Write

And for our final CrystalDiskMark test, we’ll look at the normal random writes.  Here the Intel 750 takes a commanding lead while the RAID 0 SSD array and the single SSD look about the same.  Again, more queue depth helps a lot on these tests.

CDM 4K Write

In the interest of making my blog posts a more reasonable length, that’s it for today.  Part three of the series will be more baseline benchmarks with a different tool to measure IOPS (I/O’s per second), another important statistic for Essbase.  Then hopefully, by part four, you will get to see some real benchmarks…with Essbase!

Part five teaser:

Accelatis Sneak Peek



The EPM Week In Review: Week Ending June 4, 2016

Patches and Updates:

Dodeca Spreadsheet Management System Version 7 has been released.  The new Dodeca Excel Add-In comes along for the ride.

Smart View 11.1.2.5.600 is finally out!  It has a long list of changes.  If only the PBCS update was here to make the most of it…

New Blog Posts:

John Goodwin gives us a great post on FDMEE and DRM integration.  This is Part 1…so more greatness to come!

Vijay writes about Groovy, REST, and HPCM.  Groovy is pretty cool and now that HPCM has a REST API, this is a nice introduction to the combination.

Harry has another update to his web-based Essbase front-end.  You can now have multiple Essbase environments!

Cameron has a blog post about Hybrid Essbase and interacting with our friends at Oracle.  Check it out.  You will also see a link below about the survey.

Glenn talked about the DataExport command.  This is a great reference for how the format of the export file is actually derived (from your outline) and can be modified (with data export options).

Gary has a great bit of information about a new feature in the 11.1.2.4.010 patch for Essbase.  We can finally use MDX queries to pull data from Essbase in a usable format!  I’ll be trying this out soon, I’m sure.

A different Gary broke the news that the 11.1.2.5.600 version of Smart View had been released.  Thanks for the heads up!

Opal has a quick tip for those of us with multiple cloud instances.  Since everything is labeled PBCS, this is definitely useful.  Hopefully Oracle will fix this in the near future…maybe in the June July update.

Other News:

Jon Booth, Tim German, Mike Nader, and Cameron Lackpour are conducting an online survey about Essbase Hybrid.  Be sure to head over here and fill it out.

Kscope16 Advance Registration Ends on June 9th!

Komal Goyal has been named the ODTUG 2016 Women in Technology Scholar.  Congrats, Komal!