Essbase Performance Series: Part 3 – Local Storage Baseline (Anvil)
Welcome to part three of a series that will have a lot of parts. In part two we took a look at our test results using the CrystalDiskMark synthetic benchmark. Today we’ll be looking at the test results using a synthetic benchmark tool named Anvil. As we said in part two, the idea here is to see first how everything measures up in synthetic benchmarks before we get into the actual benchmarking in Essbase.
Also as discussed in part two, we have three basic local storage options:
- Direct-attached physical storage on a physical server running our operating system and Essbase directly
- Direct-attached physical storage on a virtual server using VT-d technology to pass the storage through directly to the guest operating system as though it were physically connected
- Direct-attached physical storage on a virtual server using the storage as a data store for the virtual host
As we continue with today’s baseline, we still have the following direct-attached physical storage on the bench for testing:
- One (1) Samsung 850 EVO SSD (250GB)
- Attached to an LSI 9210-8i flashed to IT Mode with the P20 firmware
- Windows Driver P20
- Four (4) Samsung 850 EVO SSDs (250GB)
- Attached to an LSI 9265-8i
- Windows Driver 6.11-06.711.06.00
- Configured in RAID 0 with a 256KB strip size
- One (1) Intel 750 NVMe SSD (400GB)
- Attached to a PCIe 3.0 8x Slot
- Firmware 8EV10174
- Windows Driver 1.5.0.1002
- ESXi Driver 1.0e.1.1-1OEM.550.0.0.139187
- Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDDs (300GB)
- Attached to an LSI 9265-8i
- Windows Driver 6.11-06.711.06.00
- Configured three ways:
- RAID 1 with a 256KB strip size
- RAID 10 with a 256KB strip size
- RAID 5 with a 256KB strip size
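As a quick sanity check on what each array offers before redundancy overhead, the standard capacity formulas can be sketched in a few lines. This is a hypothetical helper, not anything from the benchmark tooling; the drive counts and sizes come from the hardware list above, and the formulas are the textbook ones:

```python
# Hypothetical helper: approximate usable capacity for the RAID levels
# tested above. Drive counts and sizes come from the hardware list; the
# formulas are the standard ones, not vendor-reported numbers.

def usable_gb(level: str, drives: int, size_gb: int) -> int:
    """Return approximate usable capacity in GB for a RAID level."""
    if level == "RAID 0":   # striping, no redundancy: all capacity usable
        return drives * size_gb
    if level in ("RAID 1", "RAID 10"):  # mirroring: half the capacity usable
        return drives * size_gb // 2
    if level == "RAID 5":   # one drive's worth of capacity lost to parity
        return (drives - 1) * size_gb
    raise ValueError(f"unknown RAID level: {level}")

print(usable_gb("RAID 0", 4, 250))    # 4x 850 EVO  -> 1000
print(usable_gb("RAID 10", 12, 300))  # 12x Fujitsu -> 1800
print(usable_gb("RAID 5", 12, 300))   # 12x Fujitsu -> 3300
```

Worth keeping in mind when reading the charts: the RAID 0 SSD array trades all of its redundancy for speed, while the spinning-disk arrays give up a third to two thirds of their raw capacity for protection.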
So what benchmarks were used?
- CrystalDiskMark 5.0.2 (see part two)
- Anvil’s Storage Utilities 1.1.0.337
And the good stuff that you skipped to anyway…benchmarks!
Now that we’ve looked at CrystalDiskMark, we’ll take a look at the Anvil results, starting, as before, with sequential reads. While Anvil reports reads and writes in megabytes per second, we’ll instead focus on input/output operations per second (IOPS). Here we see that the Intel 750 is keeping pace nicely with the RAID 0 SSD array. In this particular test, even our traditional drives don’t look terrible.
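For anyone wanting to compare these IOPS figures against the MB/s numbers from part two, the two are related by the block size the test uses. A minimal sketch of the conversion (the function name and the 4K example are illustrative assumptions, not Anvil’s exact workload):

```python
# Throughput and IOPS are two views of the same measurement,
# linked by the I/O block size of the test.

def mbps_to_iops(mbps: float, block_size_kb: float) -> float:
    """Convert throughput in MB/s to IOPS at a given block size."""
    return mbps * 1024 / block_size_kb

# e.g. a drive sustaining 400 MB/s of 4K random reads:
print(mbps_to_iops(400, 4))  # -> 102400.0 IOPS
```

This is also why sequential tests post huge MB/s numbers but modest IOPS, while small-block random tests do the opposite.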
Next up we’ll look at the random IOPS performance. So much for our traditional drives. Here we really see the power of SSDs versus old-school technology. It is interesting that all three SSD solutions hover pretty closely together, but this is likely a queue-depth issue.
Let’s see how things look with a queue depth of four. Things are still pretty clustered here, but much higher across the board.
And now for a queue depth of 16. This looks better. The Intel 750 has, for the most part, easily outpaced the rest of the options. The RAID 0 SSD array looks pretty good here as well.
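Anvil is a Windows tool, but the same queue-depth sweep can be approximated on Linux with fio. The job file below is a rough analog under assumed settings (4K random reads at queue depths 1, 4, and 16), not a reproduction of Anvil’s exact workload:

```ini
; Hypothetical fio job approximating the 4K random-read tests above.
; Each section runs in turn (stonewall) at a different queue depth.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
size=4g
runtime=60
time_based

[qd1]
iodepth=1
stonewall

[qd4]
iodepth=4
stonewall

[qd16]
iodepth=16
stonewall
```

The pattern the charts show, with flash pulling away as queue depth rises, is exactly what a sweep like this is designed to expose: SSDs and NVMe devices can service many outstanding requests in parallel, while a spinning disk is bound by seek time no matter how deep the queue gets.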
That’s it for the read tests. Next we move on to the write tests. We’ll again start with the sequential writes. Before we expand our queue depths, the RAID 0 SSD array is looking like the clear winner.
Our random write test seems to track our random read test closely. The Intel 750 is well in the lead with the other SSD options trailing behind. Also of interest, the Intel 750 seems to struggle in the physically attached and data store configurations in these tests. We’ll see if this continues.
When the queue depth increases to four, we see the Intel 750 continue to hold its lead. The RAID 0 SSD array is still trailing the regular single SSD. As with the previous random test, the Intel 750 continues to struggle, though the physical test has improved.
Finally, we’ll check out a queue depth of 16. It looks like our physical Intel 750 has finally caught up to the passthrough. This feels like an odd benchmark result, so we’ll see how it looks in real Essbase performance. We also finally see that the RAID 0 SSD array has pulled ahead of the single drive by a large margin.
Next up…we’ll start taking a look at the actual Essbase performance for all of these hardware choices. That post is a few weeks away with Kscope rapidly approaching.