Essbase Performance: Part 4 – Network Storage (CDM)

Introduction

Welcome to part four of a series that will have a lot of parts.  In our last two parts we took a look at our test results using the CrystalDiskMark and Anvil synthetic benchmarks.  As we said in parts two and three, the idea here is to first see how everything measures up in synthetic benchmarks before we get into the actual benchmarking of Essbase performance.

Before we get into our network options, you may want to catch up on the earlier parts of the series if you haven’t already.

Network Storage Options

Today we’ll be changing gears away from local storage and moving into network storage options.  As I started putting together this part of the series, I struggled with the sheer number of options available for configuration and testing.  I’ve finally boiled it down to the options that make the most sense.  At the end of the day, if you are on local physical hardware, you probably have local physical drives.  If you are on a virtualized platform, you probably have limited control over your drive configuration.

So with that, I decided to limit my testing to the configuration of the physical data store on the ESXi platform.  Now, this doesn’t mean that there aren’t options of course.  For the purposes of this series, we will focus on the two most common network storage types:  NFS and iSCSI.

NFS

NFS, or Network File System, is an ancient (1984!) means of connecting storage to a network device.  This technology has been built into *nix for a very long time and is very stable and widely available.  The challenge with NFS for Essbase performance relates to how ESXi handles writes.

ESXi basically treats every single write to an NFS store as a synchronous write.  Synchronous writes require an acknowledgement back from the storage device that the data has actually been committed.  This is great for data integrity, but terrible for write performance, and traditional hard drives are especially bad at it.  You can read more about this here, but basically this leaves us with a pair of options.

Add a SLOG

So what on earth is a SLOG?  In ZFS, there is the concept of the ZFS Intent Log (ZIL).  This is a temporary location where synchronous writes are logged before the data is committed to its final place on the storage volume.  By default, the ZIL lives on the storage volume itself, so if your pool is made up of spinning disks, that’s where the ZIL will be.

The Separate ZFS Intent Log (SLOG for short) allows a secondary location to be defined for the Intent Log.  This means that we can use something fast, like an SSD, to handle a job that spinning disks are quite terrible at.  You can read a much better description here.
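
For reference, here’s a rough sketch of what adding a SLOG looks like from the ZFS command line.  The pool name (tank) and device names are placeholders for illustration, not my actual configuration:

  # Check the current pool layout
  zpool status tank

  # Add a fast SSD as a dedicated log (SLOG) device
  zpool add tank log ada8

  # Or mirror the SLOG across two SSDs for extra safety
  zpool add tank log mirror ada8 ada9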

Turn Sync Off

The second option is far less desirable.  You can turn synchronous writes off on the volume altogether.  This will make performance very, very fast.  The huge downside is that no synchronous writes will ever happen on that volume, which is the exact opposite of how ESXi expects an NFS volume to behave.
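
If you do decide to go down this road, it’s a one-liner on a ZFS system.  The pool and dataset names below are just placeholders:

  # Disable synchronous writes on the dataset backing the NFS datastore
  zfs set sync=disabled tank/nfs_datastore

  # Verify the setting
  zfs get sync tank/nfs_datastore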

iSCSI

iSCSI, or Internet Small Computer System Interface, is a network-based implementation of the classic SCSI interface.  You just trade in your physical interface for an IP-based interface.  iSCSI is very common and does things a little differently than NFS.  First, it doesn’t actually implement synchronous writes.  This means that data is always written asynchronously.  This is great for performance, but opens up some risk of data loss.  ZFS makes life better by making sure the file system is safe, but there is always some risk.  Again we have a pair of options.

Turn Sync On

You can force synchronization, but then you will be back to where NFS is from a performance perspective.  For an NVMe device, this will perform well, but with spinning disks, we will need to move on to our next option.
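
Forcing synchronous writes on the volume behind the iSCSI share is the mirror image of the NFS tweak above.  Another rough sketch, with placeholder names:

  # Force every write to the iSCSI zvol to be synchronous
  zfs set sync=always tank/iscsi_datastore

  # Verify the setting
  zfs get sync tank/iscsi_datastore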

Add a SLOG (after turning Sync On)

Once we turn on synchronous writes, we will need to speed up the volume.  To do this, we will again add a SLOG.  This will allow us to do an apples to apples comparison of NFS and iSCSI in the same configuration.
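
Putting the two together for the spinning-disk pool looks something like this (again, purely illustrative names):

  # Force synchronous writes on the iSCSI zvol...
  zfs set sync=always tank/iscsi_datastore

  # ...and give the pool a fast log device to absorb them
  zpool add tank log ada8

  # The SLOG shows up under a "logs" section in the pool status
  zpool status tank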

Essbase Performance

Because all of these things could exist in various environments, I decided to test all of them!  Essbase performance can vary greatly based on the storage sub-system, so I decided to go with the following options (a rough sketch of the matching pool commands follows the list):

  • Eight (8) Hitachi 7K3000 2TB Hard Drives in four (4) mirrored pairs
  • Eight (8) Hitachi 7K3000 2TB Hard Drives in four (4) mirrored pairs with an Intel S3700 200GB SLOG
  • Eight (8) Hitachi 7K3000 2TB Hard Drives in four (4) mirrored pairs with sync=disabled (NFS) or sync=always (iSCSI)
  • One (1) Intel P3605 1.6TB NVMe SSD
  • One (1) Intel P3605 1.6TB NVMe SSD with sync=disabled (NFS) or sync=always (iSCSI)
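
For the curious, the spinning-disk layout above translates to roughly the following ZFS commands.  The pool and device names are placeholders, not my actual configuration:

  # Four mirrored pairs of 7K3000 drives striped together
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

  # Optionally bolt on the S3700 as a SLOG
  zpool add tank log ada8

  # The NVMe configurations are just a single-device pool
  zpool create nvme nvd0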

I then created four (4) datasets (sketched out in ZFS terms after the list):

  • One (1) dataset to test NFS on the Hard Drive configurations
  • One (1) dataset to test iSCSI on the Hard Drive configurations
  • One (1) dataset to test NFS on the NVMe configurations
  • One (1) dataset to test iSCSI on the NVMe configurations
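
In ZFS terms, that works out to something like the following.  The NFS datastores are plain datasets, while the iSCSI datastores are zvols presented to ESXi as block devices (names and sizes are illustrative):

  # NFS datastores: regular file-system datasets
  zfs create tank/nfs_hdd
  zfs create nvme/nfs_nvme

  # iSCSI datastores: zvols shared out as iSCSI extents
  zfs create -V 500G tank/iscsi_hdd
  zfs create -V 500G nvme/iscsi_nvme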

So what benchmarks were used?

  • CrystalDiskMark 5.0.2
  • Anvil’s Storage Utilities 1.1.0.337

Benchmarks

And the good stuff that you skipped to anyway…benchmarks!

If you’ve been following the rest of this series, we’ll stick with the original flow.  Basically, we will take a look at CrystalDiskMark results first, and then we’ll move over to Anvil in our next part.

CDM Sequential Q32T1 Read

[Chart: network-cdm-4k-q32t1-read]

In our read tests we see NFS outpacing iSCSI in every configuration.  None of the different configurations really makes a difference for reads in these tests.

CDM 4K Random Q32T1 Read

[Chart: network-cdm-4k-q32t1-read]

The random performance shows the same basic trend as the sequential test.  NFS continues to outpace iSCSI here as well.

CDM Sequential Read

[Chart: network-cdm-seq-read]

The trend continues with NFS outpacing iSCSI in sequential read tests, even at lower queue depths.

CDM 4K Random Read

[Chart: network-cdm-4k-read]

The trend gets bucked a bit here at a lower queue depth.  iSCSI seems to take the lead…but don’t let the graph fool you.  The difference is really immaterial.

CDM Sequential Q32T1 Write

[Chart: network-cdm-seq-q32t1-write]

Write performance is a totally different story.  Here we see the three factors that drive performance:  synchronous writes, the SLOG, and the media type.  NFS and iSCSI are the inverse of each other by default: NFS forces synchronous writes, while iSCSI forces asynchronous writes.

Clearly asynchronous writes win out every time given their fire-and-forget nature.  The SLOG does help in a big way.  I do find it interesting that the pool with our S3700 SLOG seems to perform better on iSCSI than even the NVMe SSD.  Once we get to actual Essbase performance, we’ll see how this holds up.

CDM 4K Random Q32T1 Write

[Chart: network-cdm-4k-q32t1-write]

Random performance follows the same trend as sequential performance in our write tests.  iSCSI clearly fares better for random writes so long as an SSD is involved.

CDM Sequential Write

[Chart: network-cdm-seq-write]

At a lower queue depth, the results follow the trend established in the higher queue depth write tests.  There is still an oddity in the SLOG outpacing the NVMe device.

CDM 4K Random Write

[Chart: network-cdm-4k-write]

In our final test, we see that random write performance is just terrible across the board at lower queue depths.  It’s clearly much better with a SLOG or an NVMe device, but, as expected, still very slow.

That’s it for this post!  In our next post, we’ll take a look at the Anvil benchmark results while focusing on I/Os per second.  I promise we’ll make it to actual Essbase benchmarks soon!
