Essbase Performance Series: Part 2 – Local Storage Baseline (CDM)

Welcome to part two of a series that will have a lot of parts.  In our introduction post, we covered what we plan to do in this series at a high level.  In this post, we’ll take a look at some synthetic benchmarks for our various local storage options.  The idea is to see how everything measures up in synthetic benchmarks before we get into the real-world benchmarking in Essbase.

As we discussed in our introduction, we have three basic local storage options:

  • Direct-attached physical storage on a physical server running our operating system and Essbase directly
  • Direct-attached physical storage on a virtual server using VT-d technology to pass the storage through directly to the guest operating system as though it were physically connected
  • Direct-attached physical storage on a virtual server using the storage as a data store for the virtual host

For the purposes of today’s baseline, we have the following direct-attached physical storage on the bench for testing:

  • One (1) Samsung 850 EVO SSD (250GB)
    • Attached to an LSI 9210-8i flashed to IT Mode with the P20 firmware
    • Windows Driver P20
  • Four (4) Samsung 850 EVO SSDs (250GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured in RAID 0 with a 256KB strip size
  • One (1) Intel 750 NVMe SSD (400GB)
    • Attached to a PCIe 3.0 x8 slot
    • Firmware 8EV10174
    • Windows Driver 1.5.0.1002
    • ESXi Driver 1.0e.1.1-1OEM.550.0.0.139187
  • Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDDs (300GB)
    • Attached to an LSI 9265-8i
    • Windows Driver 6.11-06.711.06.00
    • Configured three ways:
      • RAID 1 with a 256KB strip size
      • RAID 10 with a 256KB strip size
      • RAID 5 with a 256KB strip size
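
As a quick aside before the benchmarks, here is a back-of-the-envelope sketch of the nominal usable capacity of the arrays above, using the textbook formulas (RAID 0 keeps every byte, RAID 10 keeps half, RAID 5 gives up one drive to parity).  I’m treating RAID 1 as a simple two-drive mirror here and assuming all twelve SAS drives participate in the RAID 10 and RAID 5 layouts; the controller will report a bit less than these figures in practice.

    # Nominal usable capacity using the textbook RAID formulas.
    # Drive counts come from the hardware list above; real usable capacity
    # will be a bit lower (array metadata, GB vs. GiB rounding).

    def usable_gb(level, drives, size_gb):
        if level == "RAID 0":
            return drives * size_gb          # pure striping, no redundancy
        if level == "RAID 10":
            return drives * size_gb / 2      # striped mirrors
        if level == "RAID 5":
            return (drives - 1) * size_gb    # one drive's worth of parity
        if level == "RAID 1":
            return size_gb                   # simple two-drive mirror
        raise ValueError("unknown RAID level: " + level)

    print(usable_gb("RAID 0", 4, 250))    # 4x 850 EVO  -> 1000 GB
    print(usable_gb("RAID 10", 12, 300))  # 12x 15K SAS -> 1800 GB
    print(usable_gb("RAID 5", 12, 300))   # 12x 15K SAS -> 3300 GB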

So what benchmarks were used?

  • CrystalDiskMark 5.0.2
  • Anvil’s Storage Utilities 1.1.0.337

And the good stuff that you skipped to anyway…benchmarks!

We’ll start by looking at the CrystalDiskMark results.  The first test is a sequential file transfer with a queue depth of 32 and a single thread.  There are two interesting results here.  First, our RAID 10 array is very slow in passthrough for some reason.  Similarly, the Intel 750 is slow in passthrough.  I’ve not yet been able to determine why, but we’ll see how they perform in the real world before we get too concerned.  Obviously the NVMe solution wins overall, with our RAID 0 SSD array finishing close behind.

CDM Seq Q32T1 Read

Next we’ll look at a normal sequential file transfer.  We’ll see here that all of our options struggle with a lower queue depth, some more than others.  Clearly the traditional hard drives are struggling, and so is the Intel 750.  The other SSD options, however, are much closer in performance.  The SSD RAID 0 array is actually the fastest option with these settings.

CDM Seq Read

Next up is a random file transfer with a queue depth of 32 and a single thread.  As you can see, on the random side of things the traditional hard drives struggle, even in RAID.  In fact, struggling would be a huge improvement over what they actually do.  The Intel 750 takes the lead for the physical server, but it gets overtaken by the RAID 0 SSD array in both of our virtualized tests.

CDM 4K Q32T1 Read

Our final read test is a normal random transfer.  Obviously everything struggles here.  A big part of this is simply not having enough queue depth to take advantage of what these storage options can do; the sketch after the chart below shows the idea.

CDM 4K Read
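
If “queue depth” sounds abstract, here is a minimal sketch of the idea.  This is not what CrystalDiskMark does internally, just an illustration: issue a pile of 4K reads one at a time, then again with 32 requests in flight via a thread pool, and compare the throughput.  The test file path is a placeholder, and operating system caching will flatter the numbers unless the file is much larger than RAM.

    # Illustration only: serial 4 KiB random reads vs. 32 kept in flight.
    # TEST_FILE is a placeholder -- point it at a large file on the volume
    # you want to exercise.
    import os
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    TEST_FILE = r"D:\bench\testfile.bin"   # placeholder path
    BLOCK = 4096                           # 4 KiB, like the CDM 4K tests
    READS = 2000

    def read_block(offset):
        with open(TEST_FILE, "rb", buffering=0) as f:
            f.seek(offset)
            return len(f.read(BLOCK))

    def run(queue_depth):
        size = os.path.getsize(TEST_FILE)
        offsets = [random.randrange(size - BLOCK) // BLOCK * BLOCK
                   for _ in range(READS)]
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            list(pool.map(read_block, offsets))
        return READS * BLOCK / (time.perf_counter() - start) / 2**20

    for qd in (1, 32):
        print("QD%d: %.1f MiB/s" % (qd, run(qd)))

With a single worker, each read has to finish before the next one is issued, which is essentially the situation the Q1 tests put the drives in; with 32 workers there are enough outstanding requests to keep an NVMe drive or an SSD array busy.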

Next we will take a look at the CrystalDiskMark write tests.  As with the reads, we’ll start with a sequential file transfer using a queue depth of 32 and a single thread.  Here we see that the RAID 0 SSD array takes a commanding lead.  The Intel 750 is still plenty fast, and then the single SSD rounds out the top three.  Meanwhile, the traditional disks are still there…spinning.

CDM Seq Q32T1 Write

Let’s look at a normal sequential file transfer.  For writes, our traditional drives clearly prefer lower queue depths.  This can be good or bad for Essbase, so we’ll see how things look when we get to the real-world benchmarks.  In general, our top three mostly hold with the RAID 0 traditional array pulling into third in some instances.

CDM Seq Write

On to the random writes.  We’ll start with a queue depth of 32 and a single thread.  As with all random operations, the traditional disks get hammered.  Meanwhile, the Intel 750 has caught back up to the RAID 0 SSD array, but it still sits in second place.

CDM 4K Q32T1 Write

And for our final CrystalDiskMark test, we’ll look at the normal random writes.  Here the Intel 750 takes a commanding lead while the RAID 0 SSD array and the single SSD look about the same.  Again, more queue depth helps a lot on these tests.

CDM 4K Write

In the interest of making my blog posts a more reasonable length, that’s it for today.  Part three of the series will be more baseline benchmarks with a different tool to measure IOPS (I/O operations per second), another important statistic for Essbase.  Then hopefully, by part four, you will get to see some real benchmarks…with Essbase!
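
For a bit of context on why IOPS matter alongside the MB/s numbers above: throughput and IOPS are two views of the same work, tied together by the block size (throughput = IOPS × block size).  Here is a quick sketch of the conversion, with made-up placeholder numbers rather than results from these drives:

    # throughput (bytes/s) = IOPS * block size
    # The figures below are illustrative placeholders, not measured results.

    def iops_to_mbps(iops, block_bytes):
        return iops * block_bytes / 10**6

    def mbps_to_iops(mbps, block_bytes):
        return mbps * 10**6 / block_bytes

    print(iops_to_mbps(100000, 4096))   # 100,000 4K IOPS    -> ~410 MB/s
    print(mbps_to_iops(500, 4096))      # 500 MB/s of 4K I/O -> ~122,000 IOPS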

Part five teaser:

Accelatis Sneak Peek

 
