Essbase Performance Series: Part 1 – Introduction

Welcome to the first post in a likely never-ending series about Essbase performance. Specifically, this series is designed to help us understand how the choices we make in Essbase hardware selection affect Essbase performance. We'll attempt to answer questions like: Hyper-Threading or not? SSDs or SAN? Physical or virtual? Some of these things we can control, some we can't. The other benefit of this series is the ability to justify changes in your organization and environment. If you have performance issues and IT wants to know how to fix them, you will have hard facts to give them.

As I started down the path of preparing this series, I wondered why there was so little information on the internet in the way of Essbase benchmarks. Part of the answer I already knew: every application is different and has significantly different performance characteristics. But as I began to build out my supporting environment, I realized something else: this is a time-consuming and very expensive process. For instance, comparing physical to virtual requires hardware dedicated to the purpose of benchmarking. That isn't something you find at many, if any, clients.

As luck would have it, I have been able to put together a lab that lets me do all of these things. I have a server dedicated to Essbase benchmarking. This server will go back and forth between physical and virtual and various combinations of the two. Before we get into the specifics of the hardware we'll be using, let's talk about what we hope to accomplish from a benchmarking perspective.

There are two main areas we care about when it comes to Essbase performance. First, there is the back-end performance of Essbase calculations: when I run an aggregation or a complex calculation, how long does it take? Second, there is the front-end performance of Essbase retrieves and calculations: a combination of how long end-user queries take to execute and how long user-executed calculations take to complete. So what will we be testing?
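For the back-end side, the measurement itself boils down to timing a calculation. As a rough illustration (not the exact harness used for these benchmarks), here is a minimal Python sketch that runs a calc script through the MaxL shell (essmsh) and reports wall-clock time; the server name, credentials, and the Sample/Basic/CalcAll names are placeholders for your own environment:

```python
import os
import subprocess
import tempfile
import time

# Hypothetical MaxL script: the server, credentials, and the
# Sample/Basic/CalcAll names are placeholders for your environment.
MAXL = """\
login 'admin' 'password' on 'essbase-server';
execute calculation 'Sample'.'Basic'.'CalcAll';
logout;
"""

def time_calc(maxl_text: str) -> float:
    """Write the MaxL to a temp file, run it through essmsh (assumed to be
    on the PATH), and return wall-clock seconds for the whole run."""
    with tempfile.NamedTemporaryFile("w", suffix=".msh", delete=False) as f:
        f.write(maxl_text)
        path = f.name
    try:
        start = time.perf_counter()
        subprocess.run(["essmsh", path], check=True)
        return time.perf_counter() - start
    finally:
        os.remove(path)

if __name__ == "__main__":
    print(f"Calculation finished in {time_calc(MAXL):.1f} seconds")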

Storage Impact on Back-End Essbase Calculations

We'll take a look at the impact our storage options have on Essbase calculation performance. Storage is typically our biggest bottleneck, so we'll start here and find the fastest solution to use for the next set of benchmarks. We'll compare each of our available storage types three ways: a physical Essbase server, a virtual Essbase server using VT-d and direct-attached storage, and a virtual Essbase server using datastores. Here are the storage options we'll have to work with (a rough raw-throughput timing sketch follows the list):

  • Samsung 850 EVO SSD (250GB) on an LSI 9210-8i
  • Four (4) Samsung 850 EVO SSDs in RAID 0 on an LSI 9265-8i (250GB x 4)
  • Intel 750 NVMe SSD (400GB)
  • Twelve (12) Fujitsu MBA3300RC 15,000 RPM SAS HDDs (300GB x 12) in RAID 1, RAID 1+0, and RAID 5
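Before we get to the Essbase-level numbers, the raw-throughput sketch mentioned above gives a feel for how these devices can be compared outside of any benchmark suite. This is a minimal Python sketch, not a replacement for CrystalDiskMark or Anvil; the test path and file size are assumptions, and a synthetic sequential test says nothing about Essbase's actual I/O pattern:

```python
import os
import time

TEST_FILE = "/essbase/app/benchmark.tmp"  # assumed path on the storage under test
BLOCK = 1024 * 1024                       # 1 MiB per write
TOTAL_MB = 4096                           # 4 GiB test file

def sequential_write(path: str) -> float:
    """Write TOTAL_MB of random data and return MB/s, forcing data to disk."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return TOTAL_MB / (time.perf_counter() - start)

def sequential_read(path: str) -> float:
    """Read the file back and return MB/s (may be served partly from cache)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    return TOTAL_MB / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"Sequential write: {sequential_write(TEST_FILE):.0f} MB/s")
    print(f"Sequential read:  {sequential_read(TEST_FILE):.0f} MB/s")
    os.remove(TEST_FILE)
```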

CPU Impact on Back-End Essbase Calculations

Once we have determined our fastest storage option, we can turn our attention to our processors. The main thing we can change as application owners is the Hyper-Threading setting. The modern Intel processors found at virtually all Essbase clients support it, but conventional wisdom tells us that it doesn't work out very well for Essbase. I would like to know what the cost of this setting is and how we can best work around it. ESXi (by far the most common hypervisor) even gives us some flexibility with this setting.
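Before changing anything, it helps to confirm what the operating system actually sees. Here is a quick Python check (it uses the third-party psutil package, which is an assumption on my part, not something Essbase ships) that compares physical cores to logical processors to tell whether Hyper-Threading is currently exposed:

```python
import psutil  # third-party; install with: pip install psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # logical processors (threads)

print(f"Physical cores:      {physical}")
print(f"Logical processors:  {logical}")
print("Hyper-Threading appears to be", "ON" if logical > physical else "OFF")
```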

Storage Impact on Front-End Essbase Query Performance

This one is a little more difficult. Back-end calculations are easy to benchmark: you make a change, you run the calculation, you check the time it took to execute. Easy. Front-end performance requires user interaction, and consistent user interaction at that. So how will we do this? I can't afford LoadRunner, nor do I have the time to learn such a complex tool. Again, as luck would have it, we have another option. Our good friends at Accelatis have graciously offered to let us use their software to perform consistent front-end benchmarks.

Accelatis has an impressive suite of performance testing products that will let us test specific user counts and get query response times so that we can really understand the impact on end-user performance. I'm very excited to be working with Accelatis.
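To be clear, what follows is not the Accelatis API. Conceptually, though, a front-end benchmark is just firing a controlled number of concurrent queries and collecting response times. Here is a minimal Python sketch of that idea, with a hypothetical run_query placeholder that you would wire up to your own retrieval mechanism:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(user_id: int) -> float:
    """Placeholder for a single Essbase retrieve; wire this to whatever
    query mechanism you use (an MDX script, Smart View automation, etc.)
    and return the elapsed seconds."""
    start = time.perf_counter()
    # ... execute one retrieve here ...
    return time.perf_counter() - start

def load_test(users: int) -> None:
    """Fire `users` concurrent queries and report basic response-time stats."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(run_query, range(users)))
    print(f"{users} users: median {statistics.median(times):.2f}s, "
          f"max {max(times):.2f}s")

if __name__ == "__main__":
    for users in (10, 25, 50):
        load_test(users)
```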

CPU Impact on Front-End Essbase Query Performance

This is an area where we can start to learn more about our processors. Beyond Hyper-Threading, which we will still test, we can look at how Essbase threads across processors and what impact we can have on that. Again, Accelatis will be key here as we start to understand how to really scale Essbase.

So what does the physical server look like that we are using to do all of this?  Here are the specs:

  • Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
  • Motherboard: ASRock EP2C602-4L/D16
  • Memory: 128 GB - (16) Crucial 8 GB ECC Registered DDR3 @ 1600 MHz
  • Chassis: Supermicro CSE-846E16-R1200B
  • RAID Controller: LSI MegaRAID Internal SAS 9265-8i
  • Solid State Storage: (4) Samsung 850 EVO 250 GB on LSI SAS in RAID 0
  • Solid State Storage: (2) Samsung 850 EVO 250 GB on Intel SATA
  • Solid State Storage: (1) Samsung 850 EVO 250 GB on LSI SAS
  • NVMe Storage: (1) Intel P3605 1.6TB AIC
  • Hard Drive Storage: (12) Fujitsu MBA3300RC 300GB 15000 RPM on LSI SAS in RAID 10
  • Network Adapter: Intel X520-DA2 Dual Port 10 Gbps Network Adapter

You can see specs of the full lab supporting all of the testing here.  And now, because I promised benchmarks, here are a few to start with:

Physical Server, Samsung EVO 850 x 4 in RAID 0 on an LSI 9265-8i

[CrystalDiskMark results: Physical-850EVO-X4-RAID0-9265-8i-CDM]

[Anvil results: Physical-850EVO-X4-RAID0-9265-8i-Anvil]

Physical Server, Intel 750 NVMe SSD

[CrystalDiskMark results: Physical-Intel750-CDM]

[Anvil results: Physical-Intel750-Anvil]

Well…that's fast. In the next post in the series, we'll benchmark all of our baseline storage performance for physical, virtual with VT-d, and virtual with datastores. That will be the baseline for the post after that on actual Essbase performance. In the meantime, I'll also be working on getting access to some fast network storage to test against all of our direct and virtual options. For now, let's try out a graph and see how it looks:

CrystalDiskMark 5.0.2 Read Comparison:

[Chart: Physical-CDM-Chart-Read]

CrystalDiskMark 5.0.2 Write Comparison:

[Chart: Physical-CDM-Chart-Write]
