Category Archives: PernixData

This section covers PernixData FVP software.

I did it all for the IOPS (PernixData FVP)

So, I have been neglecting my other home lab areas lately. vSphere, Horizon, vCloud, the hardware blog I need to update. I get it. I have just been really interested in the PernixData stuff. They aren't paying me; I just like it. And now I know why. This is the next installment of my continuing series on PernixData FVP and Architect. Today, we get into the true meat of it all.

Performance testing!


Let me set the stage for you, quickly. I have 2 physical ESXi hosts (Dell Precision 690s, dual Xeon 5160 @ 3.0GHz, 32GB DDR2-667) attached to a Cisco 2960 (100Mb) switch. My shared storage is a bare-metal FreeNAS box (Core 2 Duo processor, 8GB of RAM, SATA storage). The FreeNAS box is set up with 2 x 100Mb links in a LAGG, with LACP on the switch. I have 2 LUNs presented to ESXi:

1 x 250GB SSD (NFS)

1 x 500GB HDD (NFS)

For the IO testing, I am going to use the VMware IO Analyzer Fling. This is a wonderful little Linux appliance that runs IOmeter and has a slick front end to build tests and view results. I will be running the same test sets across both of my LUNs (HDD and SSD) with FVP enabled and disabled. My IO Analyzer VM will be the test host, and the accelerated storage will be set to Write-Through.

[Screenshot: IO Analyzer home]

Test Sets:

64K 100 read 100 random

Max IOPS

Max Write IOPS

SQL Server 64k

These are all test sets that come standard with IO Analyzer, and I will be running each set for 120 seconds. I'll start with a baseline round of tests, then repeat them while accelerated, running all 4 sets on each LUN in each state. Then I'll go over the results and include some other info.

Baseline (No FVP)


LabNAS (500GB HDD – NFS):

64K 100 read 100 random:
[Screenshot: HDD - 64K 100 read 100 random]

Max IOPS:
[Screenshot: HDD - Max IOPS]

Max Write IOPS:
[Screenshot: HDD - Max Write IOPS]

SQL Server 64k:
[Screenshot: HDD - SQL Server 64k]


LabSSD (250GB SSD – NFS):

64K 100 read 100 random:
[Screenshot: SSD - 64K 100 read 100 random]

Max IOPS:
[Screenshot: SSD - Max IOPS]

Max Write IOPS:
[Screenshot: SSD - Max Write IOPS]

SQL Server 64k:
[Screenshot: SSD - SQL Server 64k]


Accelerated (FVP – RAM Flash – Write Through)


LabNAS (500GB HDD – NFS):

64K 100 read 100 random:
[Screenshot: HDD with FVP - 64K 100 read 100 random]

Max IOPS:
[Screenshot: HDD with FVP - Max IOPS]

Max Write IOPS:
[Screenshot: HDD with FVP - Max Write IOPS]

SQL Server 64k:
[Screenshot: HDD with FVP - SQL Server 64k]


LabSSD (250GB SSD – NFS):

64K 100 read 100 random:
[Screenshot: SSD with FVP - 64K 100 read 100 random]

Max IOPS:
[Screenshot: SSD with FVP - Max IOPS]

Max Write IOPS:
[Screenshot: SSD with FVP - Max Write IOPS]

SQL Server 64k:
[Screenshot: SSD with FVP - SQL Server 64k]



Results

While I only have 1 test result posted per set, the results were basically the same over a couple of runs each, and they pretty much speak for themselves. For example, on the SSD LUN, I ran the 64K 100 read 100 random test. The total IOPS jumped from the 160s without FVP to the 5,000s with FVP, which is far more data than my 100Mb links could ever move, so those reads have to be coming out of host RAM (quick math below). Max read IOPS on the same LUN jumped from the 11,500s to the 23,800s. One thing that surprised me was the Max IOPS test on the HDD LUN. I expected fewer IOPS than I got, really. It went from the 11,000s without FVP to the mid 26,000s. The jump with FVP is expected, of course, but I expected a much lower baseline on the HDD than on the SSD. It is very obvious all around that FVP is doing its fair share of work accelerating reads on both LUNs.
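Here's the back-of-the-envelope check on those numbers, since IOPS x block size is the throughput the storage path has to carry (a rough sketch; the IOPS values are just my rounded results from above):

    # 64K blocks: IOPS * 64 / 1024 = MB/s on the wire.
    # A single 100Mb link tops out around 12 MB/s.
    awk 'BEGIN {
        printf "baseline: %.1f MB/s\n", 160  * 64 / 1024;   # ~10 MB/s - basically wire speed
        printf "with FVP: %.1f MB/s\n", 5000 * 64 / 1024;   # ~312 MB/s - has to be host RAM
    }'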

As you can see, my write IOPS stayed the same. This is because the LUNs are in Write-Through mode, as FVP Freedom doesn't support write-back.


Bonus Graphs!

I found that during my tests, my FVP acceleration rate went through the roof. Before I used IOmeter, I never saw more than 40% acceleration, and that was not the average. Typically, I get 1 – 25%. I don't do a lot of crazy stuff in my lab, so my workload isn't going to be read- or write-heavy. As soon as I started this load testing, my acceleration rate was spiking up into the 90s. Very cool to see those kinds of numbers.

[Screenshot: FVP acceleration rate]

Another graph shows the VM latency during my testing phase on the SSD, and it is pretty staggering. It is extremely clear that the blips get much bigger when FVP is disabled. Since I don't have the most expensive or fancy shared storage, it's nice to know that a free software product is helping me get this much better performance.

[Screenshot: datastore latency]

 


Conclusion

If you're looking for a way to really maximize performance in a small home lab like mine, or you need to breathe new life into an old enterprise storage array, PernixData FVP will surely get the job done. And just to reiterate: none of my tests included write acceleration. FVP Freedom only allows read acceleration, so all storage is in Write-Through mode. I'm sure my Max Write IOPS tests would look very different if I could enable write-back + 1 peer. Maybe I'll get a full version at some point to give that a shot. Until then, FVP Freedom can be had for FREE. It supports up to 128GB of RAM for acceleration, in write-through. Get it here.

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.

Look out, Batman! More Dashboards! (PernixData Architect)

Welcome back to my continuing series on PernixData software. Today we are continuing on with Architect. The last installment covered how to upgrade FVP to 3.1 and enable Architect. Now we're going to go through the fun stuff: dashboards. I will tell you now, some of it is redundant with the FVP dashboards. All the verbiage is the same. In fact, if Architect is enabled, the FVP reporting pages just tell you to go to Architect instead. Alright, let's get going.

[Screenshot: cluster selection]

We start off on a page that looks just like the FVP main dashboard: the cluster selection page. This allows you to see all the clusters in your environment. Nothing new here. Let's move over to the Workload tab.

[Screenshot: Workload tab]

The Workload tab lets you see a visual representation of reads vs. writes for a given cluster. You can see the block size frequency, and even change the sampled time range, anywhere from 10 minutes up to days. There are several preset time ranges, or you can set your own.

[Screenshot: Per VM Breakdown]

On that same chart, you can change from the Read / Write Summary to the Per VM Breakdown. This gives you a breakdown of your VMs into Write Heavy, Write Moderate, Balanced, Read Moderate, and Read Heavy. Let's move on to the next tab.

[Screenshot: VM Performance Plot]

The VM Performance Plot. While I am all for visualizations, this one was difficult for me to get my head around at first. Not because of the data plotted, but because of the visual representation. It's simply a bunch of dots, and most of mine were bunched together. I assume this visual gets more useful when you have more VMs doing more things than I have in my lab. I'll be setting up some read / write tests in a later installment, which I hope will make some of these features really open up. The next tab gets us into a concept I had to read up on a bit, as it is a common theme with PernixData.

[Screenshot: Working Set Estimation]

The first instance of Working Set Estimation appears on this tab. The working set refers to the data actively used in an environment over a period of time; this could be any amount of data over any time period, really. There is a good write-up on working set sizes by Pete Koehler on the PernixData blog here.

[Screenshot: main Summary dashboard]

As you can see from our next screenshot, we went back to the Overview tab, and clicked into our Production cluster. This is the main Summary dashboard for the cluster. This will give you a broad overview of everything, just as the FVP dashboard did. You can change the workload visual from the Summary graph to the IO Frequency heat map seen below:

[Screenshot: Workload / IO Frequency heat map]

Next we’ll move to the reporting tab. This is where all the performance data and graphs come together.

[Screenshot: Reporting tab]

You can see the Performance Grid first. This has graphs for VM Latency, IOPS, Throughput, Acceleration Rate, Write Back Destaging, Population vs Eviction, and Workload. A pretty cool layout, and it's even interactive: if you mouse over a given point on any of the graphs, it pops up the exact stats for that graph, as well as for all the others at once, so you can quickly check the stats at a given time. All the rest of the reporting is very similar to the FVP reporting dashboards.

[Screenshot: Reporting graphs]

The Intelligence tab. The reason we are all here. This is really where Architect shines. All the rest of the stuff is just there to give you the warm and fuzzies that you didn't waste your money on FVP. While it's nice having all the analytics, it doesn't really DO anything.

[Screenshot: Working Set Estimation]

We already talked about Working Set Estimation. Here it simply shows you the per-host breakdown of working sets. The next bit is the really cool part.

[Screenshot: Recommendations]

Recommendations. What could you change to optimize your environment? Architect is finally pulling its own weight. Since I don't have a full version of FVP, all my VMs are in Write-Through mode. Based on its activity, Architect suggests that I put my vCenter VM in Write-Back mode. I'd love to have the full versions of FVP and Architect so I could really get into making the changes and opening it up. Maybe someday. For now, I'll just look at the recommendation and know Architect is trying.

[Screenshot: Insight]

The next option is Insight. It gives you a visual representation of reads / writes accelerated, IOs saved from the datastore, and datastore bandwidth saved. This is all stuff you can see from the FVP dashboards, but here it goes a bit more in depth. The cool part is the 'Results in' section, which shows you how your environment responds with FVP and without. Currently, my two times are the same; earlier, right after I first installed it, the time without FVP was much greater than with.

On to the final page, where Architect earns its keep: Sizing.

[Screenshot: Acceleration Resource Sizing]

Sizing is one of the most important things in design. You don't want to buy more than you need, and you don't want to need more than you have. This page shows you exactly how many resources you'd need to size FVP for each of the storage policy modes: Write Through, Write Back + 0, WB + 1, WB + 2. It gives you a range of flash needed for each, which is very cool if you need to know exactly what you're looking for in terms of flash. I see this as useful because you could get FVP, run it all in write-through, and let Architect run for a while, then use the estimates to size your flash purchase before enabling write-back mode. That way you aren't doing any guessing, over-purchasing, or under-buying.

Hopefully this was enlightening to you. I am really liking the PernixData stuff here. It is really cool to play with, and in a tiny home lab like mine, I need all the storage help I can get. Stay tuned for IO testing coming very soon. My countdown clock is on for my Architect trial!

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.

Enter, The Architect (No, not that Architect. PernixData.)

Now, I know what you’re thinking…

[Image: the Architect from The Matrix]

And, no. Not that Architect. Welcome to another installment in my continuing series of home lab adventures with @PernixData. Today’s installment will cover the upgrade of FVP from 3.0 to 3.1, and the activation of the Architect 30-day Trial. First off, what is Architect? According to PernixData, Architect is:

“PernixData Architect™ is a revolutionary software platform for holistic data center design, deployment, operations and optimization. It combines a best-in-class user experience with robust real-time analytics and design recommendations to deliver unprecedented visibility and control of virtualized applications and the underlying storage infrastructure.”

Now, is this something that I really need in my home lab to live? No. Do I want to look at all the shiny buttons and dashboards? You bet I do. So, let’s get this party started.

First and foremost, this is extremely similar to my first post on the original install, just with a few more steps. This guide assumes you have already downloaded the new management server package and the new VIB. You can use VUM to stage and add the VIB, but I am still doing this by hand, for the sake of being cool. I have the management server package on my FVP server, and the .zip of the VIB uploaded to my shared storage.

Let's run the management server upgrade. The very first thing we see is a prompt asking if we are sure we want to upgrade. Am I sure? Yes.

[Screenshot: confirm upgrade]

Clicking Yes gives us a quick InstallShield prompt while it loads the .msi files needed to run the installer.

[Screenshot: InstallShield loading]

Now, we’re in business. Let’s begin the seemingly-endless stream of clicking Next.

[Screenshot: welcome screen]

Once we start the upgrade, we'll need to accept the license agreement. Not sure what it says. Let's accept anyway.

[Screenshot: accept license]

The installer then advises which products are included: FVP and Architect are both in this installer. It also advises that after the installation, a 30-day trial of each can be set up. I am already licensed for FVP Freedom, so the trial of Architect is really the only thing I am after.

[Screenshot: license warning]

Once we finish with that, we’re ready to start the upgrade process.

[Screenshot: ready for upgrade]

It ran all the way through the process, then dumped me out to a prompt telling me I needed to kill the FVP Management Server. I told it to automatically close the applications and attempt to restart the installer.

[Screenshot: close applications]

Went all the way through the installer again. This time, it’s all good. We’re done! Well, not done, but no more Management server installer.

[Screenshot: finish upgrade]

Now we need to do the VIBs. Since we already have the 3.0 VIBs on the hosts, we need to copy the uninstall script to /tmp and then run it.

[Screenshot: uninstall host VIB]
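For reference, the copy-and-run looks something like this (a sketch: the script name and host address here are placeholders, since the exact script comes with the FVP bundle you downloaded):

    # Copy the uninstall script from the FVP bundle to the host, then run it.
    scp prnxuninstall.sh root@esxi01.lab.local:/tmp/
    ssh root@esxi01.lab.local 'chmod +x /tmp/prnxuninstall.sh && /tmp/prnxuninstall.sh'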

That script took a couple of minutes to run. Once it was done, I ran the VIB list command to verify that there were no PernixData VIBs left. Looks like we're good.

[Screenshot: confirm VIB removal]
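If you want to run the same check, it's just the standard esxcli VIB listing, filtered:

    # List installed VIBs and make sure no PernixData extension is left behind.
    esxcli software vib list | grep -i pernix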

Now we'll simply run the VIB install command, pointing at the new VIBs that I hosted on the shared storage. Rinse and repeat for every host in the cluster.

[Screenshot: confirm VIB install]
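The install side is the standard esxcli offline-bundle install; the datastore path and bundle name below are just examples from my layout:

    # Install the new FVP host extension from the bundle on shared storage.
    # Run on each host, while the host is in maintenance mode.
    esxcli software vib install -d /vmfs/volumes/LabNAS/PernixData-host-extension-3.1.zip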

Now we're ready to get things running. Launch the Management Server from a browser and log in as you did before. You should land on the same Hub as before.

[Screenshot: new Hub]

We now need to verify licensing. Click the Licensing tab. When you go to the FVP tab, your existing licensing should be there. Mine was. Now I’ll go to the Architect tab, and click Start Trial.

[Screenshot: Architect license]

The screen updated, and I can now see that I am licensed for FVP and trial-licensed for Architect! Everything went according to plan!

[Screenshot: Architect trial]

Time to see what happens. Hold on to your butts…

[Screenshot: let's do this]

And we have… Architect!

[Screenshot: Architect dashboard]

The 30-day Trial countdown is on. Architect blogs to come!

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.

Holy Dashboards, Batman! (PernixData FVP Freedom Edition)

I have had @PernixData FVP Freedom Edition installed for a month (and a day). It's about time I posted about it again. Last time I posted on FVP, I did an install / configure write-up based on my home lab. This time, we're going to take a look into the management aspect of FVP Freedom. Now, I have to say, FVP is pretty idiot-proof when it comes to management. Really, I never HAVE to log in to the management server console at all. Ever. Do I, though? Yes. Why, you ask? Because I like to look at shiny things. My setup is pretty simple: 2 ESXi hosts and 1 physical FreeNAS box. FreeNAS is bare metal, connected to the network with 2 x 100Mb links in a LAGG (with LACP at the switch). Let's take a look at what happens when you log in to the console and select "FVP" from the top-left drop-down:

[Screenshot: FVP dashboard]

The first thing we're presented with is the FVP Clusters menu. This is where you get a list and brief overview of all of your FVP clusters. In my case, I really only have 1. If you have a larger installation with different workload clusters, this will be the main hub for drilling down. Not a lot of options here, but a good HUD of current status: Latency (VM average), IOPS, Throughput, and Acceleration Rate (the good stuff). Also shown here are the active warnings and alerts. As you see, I am perfect, and have none. Since I don't have a crazy amount of acceleration resources or active data in my home lab, my acceleration rate is only 1%. Generally, when I have everything running and am working on things, my rate is around 20-30%. I don't have any hard data on storage performance with and without FVP yet, so I'll get to that next, hopefully. Let's click into the cluster and see what we have next:

[Screenshot: FVP cluster overview]

This is the main cluster dashboard. The epicenter of all those sweet, sweet analytics. This dashboard contains groups of information on FVP Cluster Status, VM Acceleration Status, Performance, and Insight. This one-stop shop is really all you need, unless you are trying to root out a problem or check on more granular stats. One of my favorite things here is the Insight area: seeing just how many IOs and how much datastore bandwidth have been saved (total) over the life of the installation. I'd imagine for large, elaborate installs, these numbers would skyrocket. The other info is cool too, such as the VM acceleration status, and knowing the current Performance status is nice, such as the current average VM latency. The next stop on our journey is the Reporting tab:

[Screenshot: Reporting - latency]

Reporting here gives you more data than you can shake a stick at… if you wanted to shake a stick at data. In my environment, I don't have crazy numbers; not much going on. The first bit of reporting is the cluster stats. You can check Latency, IOPS, Throughput, Acceleration Rate, and Population vs Eviction, as well as the Audit Log. Population vs Eviction shows you hot data added to the cache vs. data that went cold and was dropped. The breakdowns you can filter on are VM Observed, Local Acceleration, Network Acceleration, and Datastore. You can mouse over the graphs at any point to see the exact time and the metric reported. The Virtual Machine Reporting tab lets us get a bit more granular with the VMs:

[Screenshot: Reporting - VM detail]

Here on the Virtual Machine Reporting tab, we can see the Name, Resource Usage, Alerts / Warnings, and Status of the VMs that live on the accelerated storage. In my example, we can see that my vROps and NSX appliances are taking the most resources for hot data. I really like that you can see the individual performance metrics of latency, IOPS, throughput, and acceleration rate on a per-VM basis. You can also see the individual graphs for some of the metrics, as opposed to the totals:

[Screenshot: Reporting - VM latency]

The final stop on our journey is the Advanced Configuration tab:

[Screenshot: Advanced Configuration]

Here you'll find the blacklists, network config, and VADP VMs. This is where you can tell FVP which VMs are backup VMs, or which ones simply not to accelerate. I am skipping the Acceleration Resources and Datastores tabs, as I went over those in my install / configure blog.

Well, that ends our little journey into the FVP Freedom dashboards. At some point, I would really like to get a full version of the software, point it at some local SSDs on each host, and see what this can really do. Someday. Maybe. Thanks for playing along!

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.

Home Lab: PernixData FVP Freedom

Hey there, kids! I am doing the home lab posts way out of order. I still need to update you on the vSphere infrastructure, the Horizon lab, the vCloud lab, and even the Windows infrastructure. More on that later. I really wanted to get PernixData FVP Freedom going since I just set up my shared storage. This process couldn't have been easier. Let's give it a rundown:

Hardware: 

2 x Physical ESXi hosts for VMs

1 x Physical FreeNAS

RAM for Acceleration

Software:

vCenter Server

PernixData FVP Freedom

 

Install:

I don't have VUM up and running yet, so I went the good old-fashioned manual route for getting the VIBs onto the hosts. I uploaded the VIB package to a local datastore on each host, then used SSH to get into each host and used esxcli to install the VIBs. The commands are sketched below.

[Screenshots: VIB package on datastore; SSH VIB install]
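In case you want to follow along, each host boiled down to something like this over SSH (a sketch; the datastore and bundle names are from my lab, so adjust for yours):

    # Put the host in maintenance mode first - the FVP VIB install requires it.
    vim-cmd hostsvc/maintenance_mode_enter
    # Install the FVP host extension from the offline bundle on the local datastore.
    esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension.zip
    vim-cmd hostsvc/maintenance_mode_exit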

Doing the VIB install requires each host to be in maintenance mode. This is key. The next thing I needed was a service account. I gave it admin permissions in vCenter, and local admin on the FVP server.

[Screenshot: AD service account]

Let's get into the meat of the install. This is on a standalone Windows server with 4 vCPU and 8GB of RAM (recommended). I didn't get screenshots, but I created a database in SQL called prnxdata and a user called pernix (db_owner); this is needed for the FVP management server, and I'll sketch that step out below. First, you can do a full or custom install. Full is best, but here is what Custom can get you:

[Screenshot: custom setup]

Next we'll need to point it at vCenter. Since we set the service account up as an admin in vCenter and on the box, the permissions will be just what we need.

[Screenshot: vCenter user]

The next thing we need to do is point the installer at SQL. You can run SQL Express locally, but I have a SQL 2014 instance for all my lab DBs.

[Screenshot: SQL settings]
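For reference, the prnxdata database and pernix user I mentioned earlier can be knocked out with a few sqlcmd one-liners (a sketch; the server name and password are placeholders):

    # Create the FVP database and a SQL login that owns it.
    sqlcmd -S sql2014.lab.local -Q "CREATE DATABASE prnxdata;"
    sqlcmd -S sql2014.lab.local -Q "CREATE LOGIN pernix WITH PASSWORD = 'Str0ngPassw0rd';"
    sqlcmd -S sql2014.lab.local -d prnxdata -Q "CREATE USER pernix FOR LOGIN pernix; ALTER ROLE db_owner ADD MEMBER pernix;"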

The next step is setting up the management server. I opted for the FQDN over the IP, and the default management ports. After a few more clicks of “Next” I clicked the final “Install” button. It did all the rest for me.

[Screenshot: network settings]

Yay, we're done! (Kinda.) Now let's pull up the console. The first thing you want to do is drop in the license:

[Screenshot: license]

Now that we're licensed, we need to create an FVP cluster. I selected my Production cluster, as the Storage cluster simply has 1 host, which holds my NAS VM.

[Screenshot: cluster settings]

As soon as we have the cluster, we have to give it an acceleration device. Normally, you'd be able to use an SSD or a PCIe flash device, but since this is Freedom (the free edition), it is RAM only (up to 128GB! That is a LOT for a free lab). I don't have a ton of RAM, so I allocated 4GB per host (8GB of acceleration total).

[Screenshot: acceleration resource]

Now that we have the cluster built and the acceleration resource set, we need to tell it which storage to accelerate. I selected the 3 NAS datastores that I have set up as a datastore cluster in vCenter. I just set up the shared storage, so I don't have a lot on it: the SQL server and 1 other VM.

[Screenshot: datastores]

And we have…a giraffe!… I mean, a functional FVP Console!

[Screenshot: FVP console]

And a reporting console, too!

[Screenshot: reporting console]

This was really a very painless process. While it did take me two days, that was simply because I was only working on it in my free time. Overall, it took longer to enable SSH on the hosts and set up the service account than it did to do the rest. Now it's time to move all my VMs from the local datastores to the NAS datastore cluster. I'll have a follow-up post soon with some updated stats and my day-to-day interactions. Now, go out and get your copy!

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.