Build a Homelab Dashboard: Part 8, FreeNAS

My posts seem to be getting a little further apart each week…  This week, we’ll continue our dashboard series by adding in some pretty graphs for FreeNAS.  Before we dive in, as always, we’ll look at the series so far:

  1. An Introduction
  2. Organizr
  3. Organizr Continued
  4. InfluxDB
  5. Telegraf Introduction
  6. Grafana Introduction
  7. pfSense
  8. FreeNAS

FreeNAS and InfluxDB

FreeNAS, as many of you know, is one of the most popular storage operating systems in the homelab community.  It provides ZFS and a lot more.  If you were so inclined, you could install Telegraf on FreeNAS.  There is a version available for FreeBSD, and I’ve found a variety of sample configuration files and steps.  But…I could never really get them working properly.  Luckily, we don’t actually need to install anything on FreeNAS to get things working.  Why?  Because FreeNAS already has something built in:  CollectD.  CollectD can send metrics directly to a Graphite server for analysis.  But wait…we haven’t touched Graphite at all in this series, have we?  No…and we don’t need to, thanks to InfluxDB’s protocol support for Graphite.

Graphite and InfluxDB

To enable support for Graphite, we have to modify the InfluxDB configuration file.  But before we get to that, we need to create our new InfluxDB database and provision a user.  We covered this in more depth in part 4 of this series, so we’ll be quick about it now.  We’ll start by SSHing into our InfluxDB server and logging into the influx client:

influx -username influxadmin -password YourPassword

Now we will create the new database for our Graphite statistics and grant access to that database for our influx user:

CREATE DATABASE "GraphiteStats"
GRANT ALL ON "GraphiteStats" TO "influxuser"
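
If you want to sanity-check the new database and user before moving on, a couple of quick queries in the same influx session will do it (these are standard InfluxQL, nothing specific to this setup):

SHOW DATABASES
SHOW GRANTS FOR "influxuser"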

And now we can modify our InfluxDB configuration:

sudo nano /etc/influxdb/influxdb.conf

Our modifications should look like this:

And here’s the code for those who like to copy and paste:

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  enabled = true
  database = "GraphiteStats"
  # retention-policy = ""
  bind-address = ":2003"
  protocol = "tcp"
  # consistency-level = "one"

Next we need to restart InfluxDB:

sudo systemctl restart influxdb

InfluxDB should be ready to receive data now.
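
If you want to verify the listener before touching FreeNAS, you can hand-feed it a data point.  The Graphite plaintext protocol is just a metric path, a value, and a Unix timestamp over TCP, so something like this should work (substitute your own InfluxDB host; some nc variants need a -q0 flag to close the connection):

echo "test.graphite.check 42 $(date +%s)" | nc influxdb.local 2003

Afterwards, running SHOW MEASUREMENTS ON "GraphiteStats" from the influx client should list test.graphite.check.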

Enabling FreeNAS Remote Monitoring

Log into your FreeNAS via the web and click on the Advanced tab:

Now we simply check the box to report CPU utilization as a percentage, enter either the FQDN or IP address of our InfluxDB server, and click Save:

Once the save has completed, FreeNAS should start logging to your InfluxDB database.  Now we can start visualizing things with Grafana!

FreeNAS and Grafana

Adding the Data Source

Before we can start to look at all of our statistics, we need to set up our new data source in Grafana.  In Grafana, hover over the settings icon on the left menu and click on data sources:

Next, click the Add Data Source button, enter the name, database type, URL, database name, username, and password, and click Save & Test:
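
For reference, my settings looked roughly like this; the URL is a placeholder for wherever your InfluxDB server lives, and the database, user, and password come from the setup we did earlier:

Name:     FreeNAS
Type:     InfluxDB
URL:      http://influxdb.local:8086
Database: GraphiteStats
User:     influxuser
Password: (the password you set in part 4)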

Assuming everything went well, you should see this:

Finally…we can start putting together some graphs.

CPU Usage

We’ll start with something basic, like CPU usage.  Because we checked the percentage box while configuring FreeNAS, this should be pretty straightforward.  We’ll create a new dashboard and graph, and start off by selecting our new data source and then clicking Select Measurement:

The good news is that our aggregate CPU usage is right at the top.  The bad news is that this list is HUGE.  So huge, in fact, that it doesn’t even fit in the box.  This means that as we look for things beyond our initial CPU metrics, we’ll have to search to find them.  Fun…  But let’s get started by adding all five of our CPU average metrics to our graph:
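
If you’d rather use the raw text editor, each of the five series boils down to a query along these lines.  Fair warning: the measurement name below is an assumption based on my setup; CollectD builds the Graphite path from your hostname, so yours will differ:

SELECT mean("value") FROM "servers.freenas_local.aggregation-cpu-average.percent-user" WHERE $timeFilter GROUP BY time($interval) fill(null)

Repeat for percent-system, percent-interrupt, percent-nice, and percent-idle to get all five.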

We also need to adjust our Axis settings to match up with our data:

Now we just need to set up our legend.  This is optional, but I really like the table look:

Finally, we’ll make sure that we have a nice name for our graph:

This should leave us with a nice looking CPU graph like this:

Memory Usage

Next up, we have memory usage.  This time we have to search for our metric, because as I mentioned, the list is too long to fit:

We’ll add all of the memory metrics until it looks something like this:
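
Under the hood, each memory series is the same kind of query as before; here’s a sketch for two of them (the same caveat applies about the measurement names, which come from my hostname):

SELECT mean("value") FROM "servers.freenas_local.memory.memory-wired" WHERE $timeFilter GROUP BY time($interval)
SELECT mean("value") FROM "servers.freenas_local.memory.memory-free" WHERE $timeFilter GROUP BY time($interval)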

As with our CPU usage, we’ll adjust our Axis settings.  This time we need to change the Unit to bytes (from the IEC menu) and enter a range.  The range will not be a simple 0 to 100 this time; instead, it runs from 0 to the amount of RAM in your system in bytes.  So…if you have 256GB of RAM, that’s 256*1024*1024*1024 (274877906944):

And our legend:

Finally a name:

And here’s what we get at the end:

Network Utilization

Now that we have covered CPU and memory, we can move on to the network!  Network is slightly more complex, so we get to use the math function!  Let’s start with our new graph and search for our network interface.  In my case this is ix1, my main 10Gb interface:

Once we add that, we’ll notice that the numbers aren’t quite right.  This is because FreeNAS reports network traffic in octets.  An octet is 8 bits, in other words a byte, while the FreeNAS networking reports display bits per second.  So, we need to multiply the value by 8 to arrive at a number that matches.  We use the math function with *8 as our value.  We can also add our rx value while we are at it:
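
In the raw editor, the math function simply tacks the multiplication onto the select, so the tx query ends up looking roughly like this (the interface and hostname are from my box; substitute your own):

SELECT mean("value") * 8 FROM "servers.freenas_local.interface-ix1.if_octets.tx" WHERE $timeFilter GROUP BY time($interval)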

Now our math should look good and the numbers should match the FreeNAS networking reports.  We also need to change our Axis unit to bits per second:

And we set up our table legend (again, optional if you aren’t interested):

And finally a nice name for our network graph:

Disk Usage

Disk usage is a bit tricky in FreeNAS.  Why?  A few reasons, actually.  One issue is the way that FreeNAS reports usage.  For instance, if I have a volume with a dataset, and that dataset has multiple shares, free disk space is reported the same for each share.  Or, even worse, if I have a volume with multiple datasets and zvols, the free space may be reported correctly for some, but not for others.  Here’s my storage configuration for one of my volumes:

Let’s start by looking at each of these in Grafana so that we can see what the numbers tell us.  For ISO, we see the following options:

So far, this looks great: my ISO dataset has free, reserved, and used metrics.  Let’s look at the numbers and compare them to the ones above.  We’ll start with df_complex-free, using bytes (from the IEC menu) for our units:

Perfect!  This matches our available number from FreeNAS.  Now let’s check out df_complex-used:

Again, perfect!  This matches our used numbers exactly.  So far, we are in good shape.  The same holds true for ISO, TestCIFSShare, and TestNFS, which are all datasets.  The problem is that TestiSCSI and WindowsiSCSI don’t show up at all.  These are both zvols.  So apparently, zvols are not reported by FreeNAS for remote monitoring, from what I can tell.  I’m hoping I’m just doing something wrong, but I’ve looked everywhere and I can’t find any stats for a zvol.

Let’s assume for a moment that we just wanted to see the aggregate of all of our datasets on a given volume.  Well…that doesn’t work either.  Why?  Two reasons.  First, in Grafana (and InfluxDB), I can’t add separate metrics together.  That’s a bit of a pain, but surely there’s an aggregate value for the volume itself, right?  That’s the second reason.  I looked at the value of df_complex-used for my z8x2TB volume, and I got this:

Clearly 26.4MB does not equal 470.6GB.  So now what?  Great question…if anyone has any ideas, let me know, as I’d happily update this post with better information and give credit to anyone who can provide it!  In the meantime, we’ll use a different share that has only a single dataset, so that we can avoid these annoying math and reporting issues.  My Veeam backup share is a volume with a single dataset.  Let’s start by creating a singlestat panel and pulling in this metric:

This should give us the amount of free storage available in bytes.  This is likely a giant number.  Copy and paste that number somewhere (I chose Excel).  My number is 4651271147041.  Now we can switch to our used number:

For me, this is an even bigger number: 11818579150671, which I will also copy and paste into Excel.  Now I will do some simple math to add the two together, which gives a total of 16469850297712.  So why did we go through that exercise in basic arithmetic?  Because Grafana and InfluxDB won’t do it for us…that’s why.  Now we can turn our singlestat into a gauge.  We’ll start with our used storage number from above.  Next, we need to change our options:

We start by checking the Show Gauge box, leaving the min set to 0, and changing our max to the value we calculated as our total space, which in my case is 16469850297712.  We can also set thresholds.  I set my thresholds to 80% and 90%.  To do this, I took my 16469850297712 and multiplied it by .8 and .9.  I put these two numbers together, separated by a comma, and entered them for thresholds: 13175880238169.60,14822865267940.80.  Finally, I changed the unit to bytes from the IEC menu.  The final result should look like this:
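
For anyone who wants to check my numbers, the arithmetic works out like this:

4651271147041 + 11818579150671 = 16469850297712   (free + used = total bytes)
16469850297712 * 0.8 = 13175880238169.60          (80% threshold)
16469850297712 * 0.9 = 14822865267940.80          (90% threshold)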

Now we can see how close we are to our max along with thresholds on a nice gauge.

CPU Temperature

Now that we have the basics covered (CPU, RAM, network, and storage), we can move on to CPU temperatures.  While we will cover temps later in an IPMI post, not everyone running FreeNAS will have the luxury of IPMI.  So…we’ll take what FreeNAS gives us.  If we search our metrics for temp, we’ll find that every thread of every core has its own metric.  I really don’t have a desire to see every single core, so I chose to pick the first and last (0 and 31 for me):

The numbers will come back pretty high, as they are reported in tenths of a kelvin (kelvin multiplied by 10).  So, we’ll use our handy math function again (/10-273.15) to convert to Celsius, and we should get something like this:
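
As a concrete example, here’s what the core 0 query looks like with the conversion applied (the measurement name is from my system; search your own metrics for temp to find yours):

SELECT mean("value") / 10 - 273.15 FROM "servers.freenas_local.cputemp-0.temperature" WHERE $timeFilter GROUP BY time($interval)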

Next we’ll adjust our Axis to use Celsius for our unit and adjust the min and max to go from 35 to 60:

And because I like my table:

At the end, we should get something like this:

Conclusion

In the end, my dashboard looks like this:

This post took quite a bit more time than any of my previous posts in the series.  I had built my FreeNAS dashboard previously, so I wasn’t expecting it to be a long, drawn-out post.  But I felt, as I was going through, that more explanation was warranted, and I ended up with a pretty long post as a result.  I welcome any feedback for making this post better, as I’m sure mine isn’t the best way…just my way.  Until next time…


Around The Lab: Hyperion Home Lab January Update

Welcome to the first installment of a new on-going series about what’s up in my Hyperion Home Lab.  First, if you don’t already have your own home lab, why not?  Get started by checking out my guide on building your own.  And yes…I will get around to updating this series soon.  In the meantime, let’s take a look at what’s changed in the lab.

UPS…not just for shipping

I’ve updated the power configuration of the lab to increase UPS capacity.  All servers are now connected to their own 1500VA UPS.  That makes a total of four (4) UPSes with a combined capacity of 6000VA.  I can get roughly 20 minutes without power before things fall apart.  The next step will be automating the shutdown procedure after a few minutes of power loss.

More Drives

What about server stuff?  I’ve expanded my FreeNAS server to include an entirely new chassis devoted to storing drives.  This chassis is a Supermicro 45-bay with a pair of SAS2 expanders, connected externally to an LSI 9200-8e controller in the FreeNAS server.  More on this later…

New Networking Goodies

In support of my FreeNAS fun and eventual VSAN implementation, I replaced my ailing Dell switch with a brand new X1052.  This switch is complete with 24 ports of RJ45 gigabit connections along with 4 ports of SFP+ 10G connections.  Each server is directly connected to the switch with SFP+ DAC’s.  Each server is also connected directly to the FreeNAS server using another SFP+ DAC.

While I made everything faster on the wired side, the wireless was still a bit of a challenge.  The majority of the house was just fine, but there were a few select locations that were very problematic.  I decided to price out having a network drop or two added.  While talking to a potential installer, he suggested I try out an extender.  I had tried an extender years before, but it didn’t work well.  Based on the installer’s feedback, I gave it another try and purchased a Netgear Nighthawk X4.

The extender has been nothing short of awesome.  My Nighthawk X6 has a pair of 5GHz radios, and the extender allows me to isolate one of those and give it a dedicated radio signal.  This is about as good as it gets without dropping an actual network connection.  We’ve been using it for a while now and I have nothing but great things to say.

Diagrams Are Cool Right?

Here’s what it all looks like in the form of a really bad Visio diagram:

What’s next?

As I continue my Essbase performance series and prepare for my Kscope17 presentation, I’m making some more modifications.  I’ve also finally run out of space on my old file server, and I think we have finally reached a point where I can trust my FreeNAS box as my primary file server.  First, here’s a sneak peek at what’s coming in my benchmarking box:

That is an Oracle Flash Accelerator.  This very drive is found in many Exalytics servers out there.  It also happens to be a rebranded Intel P3605.  This particular model is the 1.6TB variety with some insane performance.  I’ll have a dedicated post for this SSD very soon.

In the meantime, I’ll be installing my capacity expansion to replace my existing file server next weekend when the drives arrive.  I’ve ordered thirteen 3TB drives.  I plan on using a pair of RAIDZ1 vdevs in FreeNAS to give me 30TB of usable storage, with a hot spare in the event of a failure; the math is spelled out below.  I may also be transitioning to a new VM backup strategy at the same time, but more on that another time.
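
For anyone checking along at home, here’s how thirteen drives get me there (assuming six-drive vdevs, which is my plan):

13 drives = 2 RAIDZ1 vdevs x 6 drives + 1 hot spare
usable = 2 vdevs x (6 - 1) data drives x 3TB = 30TB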

Enough Nerding Out

And that’s it for this update on my Hyperion Home Lab.  I should have a few more posts coming in the near future with updates on much of what I’ve talked about broadly here.


My First FreeNAS: Part 2 – Install, Test, and Configure

Re-Introduction

It has been quite a while since my last post on my new FreeNAS build.  This project was placed on the back burner while I had a lot going on.  I’m finally starting to get everything stabilized, so now I’m back at work on my new FreeNAS box.  You can view part 1 of this series here, but just as a quick recap, let’s talk about the system specs.  Here is a revised list (things in bold have been changed from part 1):

  • SuperChassis 846TQ-R900B (with Supermicro 1200W model PWS-1K21P-1R)
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Intel S3500 80GB SSD’s
  • (1) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2

So why did I make these changes?  The power supply was a no-brainer.  The R900B chassis was really, really loud.  I have another Supermicro 846 with a PWS-1K21P, which is totally tolerable, so I went ahead and picked one of those up from eBay for pretty cheap.  I also decided to get rid of the USB drives and replaced them with S3500s.  I’ve had two USB sticks go bad on me in the last two months and was having some issues during the installation, so I found a pair of new S3500s at a great price.

Finally, I decided to use the eight (8) SATA 3G ports on the motherboard so that I could save a PCIe slot for the controller.  I just hooked up classic hard drives to those ports, since they can’t even come close to saturating the 3G speeds.  This leaves me room to add another PCIe SSD or an external SAS card for more drives in an expander.

FreeNAS Installation

Now that the system is finally built and has gone through the burn-in process, it’s time to install FreeNAS.  This is a pretty simple process that is very well documented, so I won’t bore you with my screenshots.  Instead, go check out the FreeNAS site here:

FreeNAS Installation Documentation

I did install directly to a mirrored set of drives, which as you will read in the documentation is as easy as pressing the spacebar!

Once you get through the installation process, it will tell you the IP address you should use to go access the web-based GUI.  This is where the fun really starts.  Again, there are guides for everything that I did, so I’ll instead run through my high-level steps and provide links to the resources that I made use of.

Final Burn-In

Before I made it to the actual configuration of FreeNAS, I had to finish my burn-in process.  In part 1, we completed the burn-in on the majority of the hardware, but now we have to burn in the hard drives.  If you are using all SSDs, you don’t have to worry about this.  If you have hard drives, even brand new ones, you should burn them in.  Luckily, there’s a great burn-in guide on the FreeNAS forums:

FreeNAS Burn-In Guide
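
The short version of that guide is a long SMART self-test plus a destructive badblocks pass on every drive, along these lines (device names will vary, and badblocks -w destroys all data, so only run it on empty drives):

smartctl -t long /dev/ada0    # kick off a long SMART self-test
badblocks -ws /dev/ada0       # destructive write/read test (larger drives may need -b 4096)
smartctl -a /dev/ada0         # check for reallocated or pending sectors afterwards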

FreeNAS Initial Configuration

Now that everything is tested and the burn-in is complete, it’s time to set up a storage volume.  You really need at least one of these configured before you can move on to the rest of the steps.  I highly recommend reading up on ZFS if you haven’t already.  There’s a great guide on the FreeNAS forums for those of you completely new to ZFS.  Check it out here:

ZFS Primer Presentation

I set up a pair of volumes to start.  First, I set up my classic hard drive volume.  I chose to use a striped set of mirrored disks to give me the best performance and redundancy for my benchmarking use.  This is basically what hardware RAID would call RAID 10.  I will use this volume to test network storage for my Essbase benchmarks.  I’ve also set up a single volume using my 1.6TB PCIe SSD.  Just to break up the text…you may remember this from part 1:

FreeNAS ZFS zPool
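
For the curious, the striped-mirror layout that the FreeNAS Volume Manager builds is equivalent to creating a zpool like this at the command line (purely illustrative; the pool and device names are made up, and with nine drives the odd one out makes a handy hot spare):

zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 spare da8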

Next I moved on to getting my network configuration up and running.  For me, this was setting up the global settings like DNS servers and the Gateway.  Then I moved on to set up the networking for my server.  Basically I added my main network interface and changed the IP to a static address.  Both of these topics are covered here:

FreeNAS Network Configuration Documentation

Now that the basic network configuration is complete, we can move on to something slightly more involved: Active Directory integration.  This one isn’t covered as well in the FreeNAS documentation, but a community member has written an excellent guide that I found very helpful.  As a bonus, it also helps you set up your first CIFS share for Windows users on your domain:

FreeNAS Active Directory Configuration Documentation

FreeNAS Advanced Configuration

And with that, we can finally move on to having FreeNAS support the ongoing benchmarking effort!  There are two main network storage technologies that are by far the most common in the Hyperion EPM world:  NFS and iSCSI.  I started with NFS and created a pair of datasets: one on my hard drive volume and one on my NVMe volume.  I used this guide:

FreeNAS NFS Configuration with ESXi

The guide is a little out of date, but the main thing to remember is that you need to set the maproot property, or ESXi will not work properly.  I also have my root passwords set the same, which is a requirement.  I’m sure there is a better, more secure way to do this, but for my home lab, I’m content.
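
Under the covers, that GUI setting just produces a FreeBSD exports entry; the effect is roughly this (the path and network are examples, not my actual values):

/mnt/tank/nfs-hdd -maproot=root -network 192.168.1.0 -mask 255.255.255.0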

Once I completed the configuration of my NFS shares, I turned my attention to iSCSI.  NFS requires basically a single screen to set everything up.  iSCSI on the other hand, is a bit more involved.  I followed this guide:

FreeNAS iSCSI Configuration with ESXi

I also found another guide that has more detail, albeit older, and is from a much more trusted source:

Older FreeNAS iSCSI Configuration with ESXi

Conclusion and Next Steps

Now that I’ve made it through my initial setup of FreeNAS, I’ve had an opportunity to run a few quick benchmarks.  Here’s a benchmark of one of my NFS shares:

FreeNAS NFS Hard Drive with SLOG

And here’s one of my iSCSI shares:

FreeNAS iSCSI Hard Drive with SLOG

I’ll have a lot more information on this in my next post.  We’ll dive into benchmarking each of the configurations and make a variety of changes to improve performance.  It should make for an interesting read for the truly nerdy out there!


My First FreeNAS: Part 1 – Build and Burn-In

Kscope16 is over, my parts have arrived, and it’s finally time to start my FreeNAS build.  Today I’m going to run through my actual build process and the start of my burn-in process.  Let’s start with the hardware…what did I order again?

  • SuperChassis 846TQ-R900B
  • (2) E5-2670 @ 2.6 GHz
  • Supermicro X9DR7-LN4F-JBOD
  • 256GB Registered ECC DDR3 RAM (16 x 16GB)
  • Noctua i4 Heatsinks
  • (5) Noctua NF-R8 (to bring the noise level down on the chassis)
  • (2) SanDisk Cruzer 16GB CZ33
  • (2) Supermicro AOC-2308-l8e
  • (3) Full-Height LSI Backplates (for the AOC-2308’s and the P3605)
  • (6) Mini-SAS Breakout Cables
  • Intel P3605 1.6TB PCIe SSD
  • (9) 2TB HGST Ultrastar 7K3000 Hard Drives
  • (4) 10Gb Twinax DAC Cables
  • (2) Intel X520-DA2

And here’s the pile of goodies:

Build01

I always start with the motherboard by itself:

Build02

Next up…the CPU(s):

Build03

CPU close-up:

Build04

Before we install the heatsinks, let’s install the memory.  The heatsinks are pretty big and have a habit of getting in the way:

Build05

That’s a lot of memory…how about a close-up:

Build06

Now we can install the heatsinks:

Build07

Like I said…huge:

Build08

Now that we have all of the core components in place on the motherboard, let’s put it into our case:

Build09

Obviously, we have quite a few other components to add (hard drives, add-in cards, etc.).  But for now, I like to keep it simple for the burn-in process.  So how do we go about that?  For the basic hardware, there are two recommended steps.  Because memory is so important to FreeNAS, we have to make sure that our memory is in good working order.  For those of us purchasing used hardware, this is especially critical.  Once we have the memory tested, we will then stress our CPUs to make sure that they are functional and to take a look at the temperatures.

So how do we do this?  You can download utilities like Memtest86+ or CPUstress and boot directly into those tools.  But, being averse to redoing work that someone else has already done, I just downloaded the latest Ultimate Boot CD.  This comes with a mega-ton of tools, including the two I need to start with:  Memtest86+ and CPUstress.

You can download the ISO here.  Once you have downloaded the ISO, you have two choices.  You can use one of my favorite tools, Rufus, to write the ISO to a USB thumb drive and just boot from that.  The second option is the preferred one.  Hopefully you purchased server-class hardware for your FreeNAS box, and that hardware has IPMI and remote KVM.  If so, you will likely be able to mount the ISO over the network and easily boot from the virtual media.  This is the option I went for.

My Supermicro board even has two options for this option (options on top of options!).  You can do this through the IPMI interface and mount an ISO from a share or you can use the iKVM to mount the ISO.  Connect to your server with iKVM and select Virtual Media and then Virtual Storage.

iKVM01

Switch to the CDROM&ISO tab, select ISO File from the drop-down, and click Open Image:

iKVM02

Select the ultimate boot CD image name and click Open:

iKVM03

Finally click Plug In:

iKVM04

Once we reboot (or boot up; if you have no other OS installed, it should boot right into the CD):

UBCD01

We’ll go down to Memory and select Memtest86+:

UBCD02

Memtest86+ is a somewhat newer release of a really old memory testing utility I have used for over a decade: Memtest86.  This release takes the older code, adds support for newer hardware, and fixes a number of bugs.  Even still, it is pretty old.  It also takes a long…long time to run with 256GB of memory.  So I ran a single pass to start:

Build10

Once that first pass completed (roughly 24 hours), I focused on stressing my CPUs.  For this I used CPUstress, also included on the Ultimate Boot CD.  I’m less familiar with this stress tool, as I’ve always been more Windows-focused and used tools like Prime95 for this purpose.  Again, we boot into the Ultimate Boot CD:

UBCD01

This time we’ll select CPU and then CPUstress:

UBCD03

CPUstress should start up automatically:

UBCD04

This gives us one more menu…I just went with option 1:

UBCD05

Overall, it seems to work pretty well:

Build11

Now, with that running, let’s take a look at the CPU temps:

Build13

The temps look pretty good for running wide open.  There appears to be headroom for the additional heat that will be generated by the hard drives that will be added to the system.  So how does power usage look?

Build12

The numbers look pretty good here.  Again…no drives, so this number will go up considerably by the time we are completely done.  I burned the CPU’s in for a little over 24 hours and then went back to Memtest86+.  I ran that for roughly four more days with no errors.  That’s all for today.  In our next post we’ll finally load up FreeNAS, get our controllers ready to go, and burn in our hard drives.