Essbase Performance: Part 5 – Network Storage (Anvil)

Introduction

Welcome to part five of the Essbase Performance series, a series that will have a lot of parts.  Today we’ll pick up where we left off on network storage baselines.  Before we get there, here’s a recap of the series so far:

Essbase Performance

In case you’ve forgotten, here’s the list of configurations that will be tested:

  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors
  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors with an Intel S3700 200GB SLOG
  • Eight (8) Hitachi 7K3000 2TB Hard Drives, four (4) sets of two (2) mirrors with sync=disabled (NFS) or sync=always (iSCSI)
  • One (1) Intel P3605 1.6TB NVMe SSD
  • One (1) Intel P3605 1.6TB NVMe SSD with sync=disabled (NFS) or sync=always (iSCSI); the sketch after these lists shows how these sync overrides are applied

And the four (4) datasets:

  • One (1) dataset to test NFS on the Hard Drive configurations
  • One (1) dataset to test iSCSI on the Hard Drive configurations
  • One (1) dataset to test NFS on the NVMe configurations
  • One (1) dataset to test iSCSI on the NVMe configurations
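The sync=disabled and sync=always entries above are just ZFS dataset properties that invert each protocol’s default write behavior.  As a minimal sketch of how those overrides might be applied (the pool and dataset names here are hypothetical; the post doesn’t list the actual ones):

    # Minimal sketch of applying the sync overrides from the configuration list.
    # The pool/dataset names ("tank/nfs-hdd", "tank/iscsi-hdd") are made up.
    import subprocess

    def zfs_set(prop: str, dataset: str) -> None:
        """Run `zfs set <property> <dataset>` and raise if it fails."""
        subprocess.run(["zfs", "set", prop, dataset], check=True)

    # NFS writes land synchronously by default, so the async test disables sync:
    zfs_set("sync=disabled", "tank/nfs-hdd")

    # iSCSI writes land asynchronously by default, so the sync test forces it:
    zfs_set("sync=always", "tank/iscsi-hdd")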

So what benchmarks were used?

  • CrystalDiskMark 5.0.2
  • Anvil’s Storage Utilities 1.1.0.337

Benchmarks

And now the good stuff that you skipped ahead to anyway…benchmarks!

As with the rest of the series, we’ll continue our flow: we started with CrystalDiskMark, and now we’ll move on to Anvil.  While Anvil also reports MB/s, we’ll focus on just the IO/s.  Let’s get started.
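For reference, IO/s and MB/s are two views of the same measurement, tied together by the block size of the test.  A quick sketch of the conversion, using made-up numbers rather than anything from the charts below:

    # IO/s vs. MB/s: same measurement, different units, linked by block size.
    # The throughput and block sizes below are illustrative only.
    def iops(mb_per_s: float, block_size_kb: float) -> float:
        """IO/s = throughput (KB/s) divided by block size (KB)."""
        return (mb_per_s * 1024) / block_size_kb

    print(iops(400, 4))     # 4K random at 400 MB/s       -> ~102,400 IO/s
    print(iops(400, 4096))  # 4 MB sequential at 400 MB/s -> ~100 IO/s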

Anvil Sequential Read

[Chart: network-anvil-seq-read]

In our read tests, everything is pretty well flat.  The NFS Hard Drive configuration seems to be lower than everything else, but at such a low queue depth we’ll consider that an outlier for now.

Anvil 4K Random Read

[Chart: network-anvil-4k-read]

The random performance at a low queue depth is also pretty flat.  The iSCSI NVMe device does seem to separate itself here.  We’ll see how it does at higher queue depths.

Anvil 4K Random QD4 Read

[Chart: network-anvil-4k-qd4-read]

At a queue depth of four, things are basically flat across the board.

Anvil 4K Random QD16 Read

[Chart: network-anvil-4k-qd16-read]

At higher queue depths, things still stay relatively flat on the read side.  Let’s see what happens with writes.

Anvil Sequential Write

[Chart: network-anvil-seq-write]

As with our CDM results, write performance is a totally different story.  Here again we see the three factors that drive performance: synchronous writes, the SLOG, and media type.  NFS and iSCSI are the inverse of each other by default: NFS defaults to synchronous writes, while iSCSI defaults to asynchronous writes.

Clearly asynchronous writes win out every time, given their fire-and-forget nature.  The SLOG does help in a big way.  As in our CDM results, the S3700 SLOG still seems to perform better over iSCSI than even the NVMe SSD.  Once we get to actual Essbase performance, we’ll see how this holds up.
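To make the fire-and-forget point concrete, here’s a minimal sketch contrasting buffered writes with writes that are flushed to stable storage on every call, which is roughly the penalty synchronous NFS (or sync=always) imposes.  The file names, block size, and write count are arbitrary illustrations, not Anvil’s actual workload:

    # Rough illustration of why asynchronous writes win: the synchronous path
    # pays for a flush to stable storage on every write.  File names, block
    # size, and write count are arbitrary; this is not Anvil's workload.
    import os
    import time

    BLOCK = b"\0" * 4096   # 4K block, loosely mirroring the random-write tests
    COUNT = 2000

    def timed_writes(path: str, sync_each: bool) -> float:
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(COUNT):
                f.write(BLOCK)
                if sync_each:
                    f.flush()
                    os.fsync(f.fileno())  # force the data to stable storage
        os.remove(path)
        return time.perf_counter() - start

    print("buffered (async-style):", timed_writes("bench_async.tmp", sync_each=False))
    print("fsync'd (sync-style):  ", timed_writes("bench_sync.tmp", sync_each=True))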

Anvil 4K Random Write

[Chart: network-anvil-4k-write]

Random performance follows the same trend as sequential performance in our write tests.  At a low queue depth, SSDs get us halfway to asynchronous performance, which is exciting.

Anvil 4K Random QD4 Write

[Chart: network-anvil-4k-qd4-write]

As queue depth increases, the performance differentials seem to stay pretty consistent.  Asynchronous performance is pulling away just a tad from the rest of the options.

Anvil 4K Random QD16 Write

[Chart: network-anvil-4k-qd16-write]

In our final test, we see that at much higher queue depths, asynchronous performance really pulls away from everything else.  Again, iSCSI seems to fare much better with the SLOG than the NVMe drive does.

So…can we get some Essbase benchmarks yet?  That will be our next post!
