I did it all for the IOPS (PernixData FVP)

So, I have been neglecting my other home lab areas lately. vSphere, Horizon, vCloud; I still need to update my hardware blog, too. I get it. I have just been really interested in the PernixData stuff. They aren’t paying me, I just like it. And now I know why. This is the next installment of my continuing series on PernixData FVP and Architect. Today, we get into the true meat of all of it.

Performance testing!


Let me set the stage for you quickly. I have 2 physical ESXi hosts (Dell Precision 690s, dual Xeon 5160 @ 3.0GHz, 32GB DDR2 667) attached to a 2960 (100Mb) switch. My shared storage is a bare-metal FreeNAS box (Core2Duo processor, 8GB of RAM, SATA storage). The FreeNAS box is set up with 2 x 100Mb links in a LAGG, with LACP on the switch. I have 2 LUNs presented to ESXi (keep that 100Mb ceiling in mind; there’s some quick math after the list):

1 x 250GB SSD (NFS)

1 x 500GB HDD (NFS)
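
Before any numbers, it’s worth sketching the ceiling that the 100Mb network puts on this whole exercise. This is just my own back-of-the-envelope math, not anything from PernixData, and it assumes a single Iometer/NFS stream hashes onto one LAGG link:

```python
# Back-of-the-envelope ceiling imposed by the storage network.
# Assumes a single NFS stream lands on one 100Mb LAGG link
# (LACP balances per flow, so one stream can't use both links).

LINK_MBPS = 100  # one switch port, in megabits per second

def max_iops(block_kb: int, links: int = 1) -> float:
    """Theoretical IOPS ceiling for a given block size over the wire."""
    bytes_per_sec = links * LINK_MBPS * 1_000_000 / 8
    return bytes_per_sec / (block_kb * 1024)

print(f"64K blocks, 1 link : {max_iops(64):7.0f} IOPS")   # ~190
print(f" 4K blocks, 1 link : {max_iops(4):7.0f} IOPS")    # ~3,050
print(f" 4K blocks, 2 links: {max_iops(4, 2):7.0f} IOPS") # ~6,100
```

That ~190 IOPS ceiling at 64K lines up suspiciously well with the baseline 64K numbers you’ll see below; without FVP, the wire is the bottleneck long before the disks are.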

For the IO testing, I am going to use the VMware IO Analyzer Fling. This is a wonderful little Linux appliance that runs IOmeter and has a slick front end for building tests and viewing results. I will be running the same test sets across both of my LUNs (HDD and SSD) with FVP enabled and disabled. My IO Analyzer VM will be used as the test host, and its storage will be accelerated in write-through mode.

[Screenshot: IO Analyzer home page]

Test Sets:

64K 100 read 100 random

Max IOPS

Max Write IOPS

SQL Server 64k

These are all test sets that come standard with IO Analyzer. I will run each set for 120 seconds. I’ll start with a baseline round of tests, then repeat them while accelerated: all 4 sets on each LUN in each state (the sketch below spells out the full run matrix). Then I’ll go over the results and include some other info.
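
To keep myself honest about what “each LUN in each state” means, here’s a quick sketch of the run plan. The labels are just mine for this post, not IO Analyzer’s internal test names:

```python
# A sketch of the run plan: every workload against every LUN, with FVP
# off (baseline) and on (accelerated).
from itertools import product

WORKLOADS = ["64K 100 read 100 random", "Max IOPS",
             "Max Write IOPS", "SQL Server 64k"]
LUNS = ["LabNAS (500GB HDD - NFS)", "LabSSD (250GB SSD - NFS)"]
STATES = ["Baseline (no FVP)", "Accelerated (FVP, write-through)"]
DURATION_S = 120

for state, lun, workload in product(STATES, LUNS, WORKLOADS):
    print(f"{state:32} | {lun:24} | {workload:23} | {DURATION_S}s")
# 2 states x 2 LUNs x 4 workloads = 16 runs of 120 seconds each
```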

Baseline (No FVP)


LabNAS (500GB HDD – NFS):

64K 100 read 100 random:

[Screenshot: HDD – 64K 100% read, 100% random]

Max IOPS:

[Screenshot: HDD – Max IOPS]

Max Write IOPS:

[Screenshot: HDD – Max Write IOPS]

SQL Server 64k:

[Screenshot: HDD – SQL Server 64k]


LabSSD (250GB SSD – NFS):

64K 100 read 100 random:

[Screenshot: SSD – 64K 100% read, 100% random]

Max IOPS:

[Screenshot: SSD – Max IOPS]

Max Write IOPS:

[Screenshot: SSD – Max Write IOPS]

SQL Server 64k:

[Screenshot: SSD – SQL Server 64k]


Accelerated (FVP – RAM Flash – Write Through)


LabNAS (500GB HDD – NFS):

64K 100 read 100 random:

[Screenshot: HDD with FVP – 64K 100% read, 100% random]

Max IOPS:

[Screenshot: HDD with FVP – Max IOPS]

Max Write IOPS:

[Screenshot: HDD with FVP – Max Write IOPS]

SQL Server 64k:

[Screenshot: HDD with FVP – SQL Server 64k]


LabSSD (250GB SSD – NFS):

64K 100 read 100 random:

[Screenshot: SSD with FVP – 64K 100% read, 100% random]

Max IOPS:

[Screenshot: SSD with FVP – Max IOPS]

Max Write IOPS:

[Screenshot: SSD with FVP – Max Write IOPS]

SQL Server 64k:

[Screenshot: SSD with FVP – SQL Server 64k]



Results

While I only have 1 test result per set posted, the results were basically the same over a couple of runs each, and they pretty much speak for themselves. For example, on the SSD LUN, the 64K 100 read 100 random test jumped from total IOPS in the 160s without FVP to the 5000s with FVP. Max read IOPS on the same LUN jumped from the 11,500s to the 23,800s. One thing that surprised me was the Max IOPS test on the HDD LUN. It went from the 11,000s without FVP to the mid 26,000s. The jump with FVP is expected, of course, but I expected a much lower baseline on the HDD than on the SSD. It is very obvious all around that FVP is doing its fair share of work in accelerating reads on both LUNs.
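
Putting the rough figures from the screenshots side by side makes the deltas easier to see. A quick sketch, with numbers rounded from my results above:

```python
# Rough before/after figures pulled from the screenshots above,
# and the resulting improvement factors.
runs = {
    "SSD - 64K 100 read 100 random (total)": (160, 5_000),
    "SSD - Max IOPS (read)":                 (11_500, 23_800),
    "HDD - Max IOPS":                        (11_000, 26_000),
}

for name, (baseline, fvp) in runs.items():
    print(f"{name:40} {baseline:>7,} -> {fvp:>7,}  ({fvp / baseline:.1f}x)")
# ~31x on the 64K random reads; ~2.1x and ~2.4x on the Max IOPS sets
```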

As you can see, my write IOPS stayed the same. This is because the LUNs are in write-through mode; FVP Freedom doesn’t support write-back.
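
If write-through vs. write-back is fuzzy, here’s a toy model of the idea. This is purely illustrative and nothing like FVP’s actual internals: in write-through, the write still has to land on the backing datastore before it’s acknowledged, so only reads get faster.

```python
# Toy model of write-through caching. The point: a write isn't done
# until the slow backing store has it, so writes see no speedup, while
# reads that hit the fast tier skip the datastore entirely.

class WriteThroughCache:
    def __init__(self, backing_store: dict):
        self.cache = {}               # fast tier (RAM/flash)
        self.backing = backing_store  # slow tier (the NFS datastore)

    def write(self, key, value):
        self.cache[key] = value       # populate the fast tier...
        self.backing[key] = value     # ...but still wait on the slow tier

    def read(self, key):
        if key in self.cache:         # hit: served at RAM speed
            return self.cache[key]
        value = self.backing[key]     # miss: go to the datastore
        self.cache[key] = value       # promote for next time
        return value

datastore = {}
fvp = WriteThroughCache(datastore)
fvp.write("block-42", b"\x00" * 64)   # gated by the slow tier
fvp.read("block-42")                  # now served from the cache
```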


Bonus Graphs!

I found that during my tests, my FVP acceleration rate went through the roof. Before I used IOmeter, I never saw more than 40% acceleration, and that was an outlier; typically I get 1–25%. I don’t do a lot of crazy stuff in my lab, so my day-to-day workload isn’t read- or write-heavy. As soon as I started this load testing, my acceleration rate spiked into the 90s. Very cool to see those kinds of numbers.
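
As I understand it, the acceleration rate graph is basically the share of IO served from the acceleration tier instead of the datastore, which is why a cache-friendly IOmeter read test pushes it into the 90s. A hypothetical sketch of that ratio (the IO counts here are made up to mirror the two situations described):

```python
# Roughly what (I believe) the acceleration-rate graph shows: the share
# of IO served from the acceleration tier instead of the datastore.

def acceleration_rate(ios_from_fast_tier: int, total_ios: int) -> float:
    return 100.0 * ios_from_fast_tier / total_ios

print(acceleration_rate(200, 1_500))     # ~13% - idle day-to-day lab
print(acceleration_rate(9_200, 10_000))  # 92%  - cache-friendly load test
```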

[Screenshot: FVP acceleration rate graph]

Another graph shows the VM latency during my testing phase on the SSD, and it is pretty staggering. It is extremely clear that the latency spikes get much bigger when FVP is disabled. Since I don’t have the most expensive or fanciest shared storage, it’s nice to know that a free software product is getting me this much better performance.

[Screenshot: datastore latency graph]


Conclusion

If you’re looking for a way to really maximize performance in a small home lab like mine, or you need to breathe new life into an old enterprise storage array, PernixData FVP will surely get the job done. And just to reiterate: none of my tests included write acceleration. FVP Freedom only allows read acceleration, so all storage is in write-through mode. I’m sure my Max Write IOPS tests would look very different if I could enable write-back + 1 peer. Maybe I’ll get a full version at some point to give that a shot. Until then, FVP Freedom can be had for FREE, and it supports up to 128GB of RAM for acceleration in write-through. Get it here.

-vTimD

Note: My series on PernixData software is in no way sponsored by PernixData. The posts are all written by me, for knowledge, not compensation.