PernixData FVP in the Home Lab Part 3. (SUCCESS)

I must admit I made quite a few mistakes during this install.

I was told that 6.0 is not supported by PernixData yet, but that it has been known to work. So I installed a 5.5 VCSA instead, and voila, it joined my AD without any of the problems the 6.0 had (guess I need to try the 6.0 VCSA again). I then uninstalled and reinstalled the PernixData management software, and still no plugin showed up. My friend @FrankBrix told me to look at the log files of the FVP software, and yes, the answer was right there: the log files clearly showed that the software couldn’t connect to the SQL server, and I discovered I had entered the wrong password! One gripe here: I do think the installer could have validated the credentials instead of just writing them to a config file. After correcting that, the FVP plugin worked on 5.5.
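
As an aside, a pre-flight check like the following would have caught my typo before anything was written to a config file. This is just a minimal sketch of the idea in Python with pyodbc; the server name, database, driver, and credentials are made-up placeholders, and it is of course not how the actual installer works:

```python
# Minimal sketch: validate SQL Server credentials before writing them to a
# config file. Server, database, and credentials are made-up placeholders.
import pyodbc

def sql_credentials_work(server, database, user, password):
    conn_str = (
        "DRIVER={SQL Server};"  # adjust to whatever ODBC driver is installed
        f"SERVER={server};DATABASE={database};UID={user};PWD={password}"
    )
    try:
        # Opening a connection is enough to validate the login and database.
        conn = pyodbc.connect(conn_str, timeout=5)
        conn.close()
        return True
    except pyodbc.Error as err:
        print(f"SQL connection test failed: {err}")
        return False

if not sql_credentials_work("sqlserver01", "prnx", "prnxuser", "secret"):
    raise SystemExit("Fix the SQL credentials before writing the config file.")
```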

I thought I would give 6.0 another shot before configuring anything in FVP, and this time I could also get the FVP plugin into the 6.0 vCenter. However, it kept giving me authentication errors every time I tried anything with FVP. So I went back to 5.5 (now my fourth vCenter install) and followed the guide to create an FVP cluster. After that I tried to add resources to my cluster, but it kept saying “No PernixData compatible hosts have been detected…” So I double-checked that the VIB was actually installed and tried rebooting the ESXi host, but to no avail.
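
For anyone who wants to script that VIB check instead of eyeballing it, here is a rough Python sketch using paramiko to run the standard esxcli command over SSH. The host name and credentials are placeholders:

```python
# Rough sketch: check over SSH whether the PernixData VIB is installed
# on an ESXi host. Host name and credentials are placeholders.
import paramiko

def pernix_vib_installed(host, user, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        # 'esxcli software vib list' prints one line per installed VIB.
        _, stdout, _ = client.exec_command("esxcli software vib list")
        vibs = stdout.read().decode()
        return any("pernix" in line.lower() for line in vibs.splitlines())
    finally:
        client.close()

print(pernix_vib_installed("esxi01.lab.local", "root", "secret"))
```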

Again, @FrankBrix to the rescue. We set up a WebEx, and about 30 seconds after taking over my screen he had solved my problem. After reinstalling and creating datacenters and clusters four times, the last time around I had accidentally forgotten to put my host INTO the cluster. No wonder FVP couldn’t find any hosts.
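
If you want to sanity-check cluster membership without clicking through the client (or reinstalling vCenter four times like me), a quick pyVmomi sketch like this makes an empty cluster stand out immediately. The vCenter address and credentials are placeholders:

```python
# Quick sketch: list every cluster and the hosts inside it, so an empty
# cluster stands out immediately. vCenter address/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # homelab: skip cert validation
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        hosts = [h.name for h in cluster.host]
        print(f"{cluster.name}: {hosts or 'NO HOSTS -- is this your problem?'}")
    view.Destroy()
finally:
    Disconnect(si)
```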

With that fixed, PernixData started its magic. And WOW, I must say I can already feel a big difference compared to the last few days without caching, and I’m looking forward to seeing the write cache in action once I get all my machines up to speed again.

And damn, just 1½ hours after the cluster was created, look at these stats:

FVP in action

57,000 IOPS saved from my little 4-disk Synology; no wonder I can feel a big difference.

FVP Latency

And the latency has really come down as well. You can see a big spike from the datastore at around 10:22 PM, but the VM never saw it. And this is from a single cheap Kingston SSD. I will have to try this out in a heavier environment than my homelab sometime soon, and I will post more stats once this has been running for a while.

Once again a big thanks to @FrankBrix (http://www.vfrank.org/) for the help.


PernixData FVP in the Home Lab Part 1.

So I finally got around to upgrading my homelab to 32GB of RAM, so I can run vCenter all the time, which is needed for PernixData’s FVP solution. I have also gotten a cheap Kingston V300 120GB SSD for testing.

I have been running VMware’s vFlash Read Cache for 2 weeks, and it seriously sped up my homelab. However, I did run into one caveat.

I had let the VCSA use some of the vFlash like all the other servers, but after a total shutdown of my homelab (to install the extra RAM) I couldn’t start the VCSA back up.
It failed with “Could not create vFlash cache: msg.vflashcache.error.VFC_FAILURE”. As it’s the vCenter Server that hands the vFlash out to the other servers, it seems it can’t use it itself. I might be mistaken about this, as I have not tested it again.

I found @h0bbel’s article on it and removed vFlash from the vCenter Server, and presto, it could boot. Once it was up, the rest of my servers could be booted normally.

With previously only 16GB of RAM, I had let the host use 30GB of the 120GB of flash for swap, and that was way faster than swapping to my NAS. However, it was still a lot slower than after I had upgraded to 32GB of RAM. That left me with roughly 90GB of SSD to use for caching.

One thing I found annoying about vFRC from VMware is that it is configured per VMDK, meaning I had to edit each machine and set aside some part of the SSD for caching for that particular VMDK. I’d much rather have it use the SSD to boost the entire datastore, instead of trying to figure out how much each of the VMDKs should get. From what I have read in Duncan’s tweets about it, that will be added in a coming version of vFRC.
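
To show just how fiddly the per-VMDK model is, here is roughly what automating it looks like. This is a hedged pyVmomi sketch based on my reading of the 5.5 API (the vFlashCacheConfigInfo property on each virtual disk); the sizes are placeholders, and setting the reservation to 0 is also how you would strip the cache again, like I had to do for the VCSA:

```python
# Sketch: set (or clear, with 0) a vFRC reservation on every disk of one VM.
# Assumes an existing pyVmomi connection; VM object and size are placeholders.
from pyVmomi import vim

def set_vfrc_reservation(vm, reservation_mb):
    spec = vim.vm.ConfigSpec(deviceChange=[])
    for device in vm.config.hardware.device:
        if not isinstance(device, vim.vm.device.VirtualDisk):
            continue
        # Each VMDK carries its own cache config -- this is the per-disk part.
        device.vFlashCacheConfigInfo = \
            vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
                reservationInMB=reservation_mb)
        spec.deviceChange.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=device))
    return vm.ReconfigVM_Task(spec=spec)

# e.g. give each disk a 4GB read cache; 0 would remove it again:
# set_vfrc_reservation(my_vm, 4096)
```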

As I have written earlier, I was lucky enough to be selected as a PernixPro, and one of the nice benefits of that is an NFR license for FVP. So that is what I’m going to install and write about in Part 2 of this blog post.

PernixData FVP – My Impression

On Wednesday I attended a session at VMworld about PernixData FVP with Satyam Vaghani and Frank Denneman.

And I must say I like their approach to the flash idea. They want to make their solution available to every application that runs on top of vSphere, meaning even all your old P2V’d systems can get a huge boost in IOPS.

I won’t go into much detail on how it works; you can find that on their webpage. But there are two things that I found really neat. Moving forward they want to offer not only flash acceleration in their software, but also the ability to use the RAM you have in your hosts. Think about a 1TB RAM cache for IOPS that doesn’t sit behind a storage controller!

The other thing they are working on is fault domains. I asked them how you would deploy their solution in a stretched metro cluster setup. I’ve worked on a setup with two sites where vSphere only sees one, using affinity groups to separate the running VMs. The idea with fault domains is this: the VMs of site 1 could have their main cache in site 1, but with a replica in site 2, so if the entire site 1 fails, vSphere HA can easily recover the VMs in site 2, even if the data hasn’t been persisted down to disk in site 1.

Links

PernixData

Cormac’s blog on Pernix


Flash Storage – My take.

Everyone is talking about flash these days, and I suspect the coming #VMworld will be swarming with new startups entering the flash storage or hybrid storage market.

So I thought I’d give you my ideas about flash in a normal-sized virtualized environment.

The setup I most often work with consists of two EMC VNX5700s in a metro-cluster setup with EMC VPLEXes in front of them, also known as a stretched datacenter.

Having recently upgraded to this from an aging HP EVA 6400, the performance boost from the VNXs is quite huge. And the tiering actually moved a lot of dead data to SATA, which gave the databases a nice boost.

But at the moment we’re nowhere near the capacity of the VNXs. In fact, they seem to be somewhat overperforming. I’ve created a 10-disk disk group with 2x RAID 5 for a small Exchange environment. All 10 disks are 15,000 RPM FC drives. If we set the max IOPS per disk to around 200, that disk group should give me at most 2,000 IOPS. However, during backups I see it consistently doing 6,000 IOPS for around 2 hours. At first I found this odd, but I verified that the VM also saw 6,000 IOPS during that period.

This is due to caching and a nice read-ahead algorithm. The disks themselves have a small cache, the VNXs have a cache, and some of the SSDs are used for VNX FAST Cache. On top of that, the VPLEXes also have a cache. So I’m getting more IOPS than the physical disks alone should deliver :).
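
Just to put numbers on it, here is the back-of-the-envelope math, using my own rough assumption of 200 IOPS per 15k spindle:

```python
# Back-of-the-envelope: what the spindles alone should deliver vs. what I saw.
disks = 10
iops_per_disk = 200                   # rough ceiling for a 15k RPM FC drive
raw_iops = disks * iops_per_disk      # ~2,000 IOPS from the spindles
observed_iops = 6000                  # what the VM saw during backups

cache_iops = observed_iops - raw_iops
print(f"Spindles: ~{raw_iops} IOPS")
print(f"Observed: {observed_iops} IOPS -> ~{cache_iops} IOPS "
      f"({cache_iops / observed_iops:.0%}) served by the cache layers")
```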

So at the moment I have nothing running that pushes the VNXs anywhere near their performance limits. However, we’re in the process of building a few data warehouses, and I suspect (and hope) that these will put a bigger load on them.

Should we ever hit the limits of the VNXs, there are currently quite a few options.

This is the road I would take at the moment:

1. The data warehouses are currently mostly read-intensive, so I would stick a few SSDs into the ESXi hosts and enable vSphere Flash Read Cache.

2. The ETL machines are a bit more write-intensive, so to help with that I would definitely go for PernixData’s FVP solution, which is both a read and a write cache. And I have only heard good things about it from fellow VMUGers who have it in their production environments.

3. If SSDs alone aren’t enough, sticking Fusion-io cards or the equivalent into the hosts should give another big boost. The good thing about these is that they also work with PernixData’s FVP solution.

4. If all else fails and we can’t get enough I/O out of the hosts, something like an all-flash array might be an option. But that would really have to be performance tested. These are also very easy to put behind the VPLEXes and present to the ESXi hosts.

So I’m fairly certain our performance scale-up options are quite good.

Looking forward to VMworld starting next week in San Francisco!

Great VMUG Meeting in Denmark

On April 3rd, VMUG Denmark held its first VMUG of the year, repeating the nice setting from last year: it was held in a cinema, and afterwards there was a showing of the new Captain America: The Winter Soldier movie. More than 100 vMuggers had claimed a ticket, so the cinema was filled to capacity.

It started off with Nicolai Sandager bidding us all welcome and talking about the next VMUGs on June 19th and November 20th. There will also be a VMUG in Jutland, but the date hasn’t been set yet. He proceeded to give a shoutout to the 11 freshly awarded Danish vExperts, seemingly omitting Frank Brix from the list on the screen on purpose :).

The first presentation was done by one of the sponsors of the event, Bitdefender, who talked about their antivirus products. To be honest, that was the low point of the day. Not that many people really care about antivirus; it’s one of those things you have to do, and it causes problems once in a while, but no more than that. I imagine it’s quite hard to make such a topic interesting for a room full of VMware geeks.

The next presentation was something entirely different. Mads Fog, one of the new vExperts, gave a very interesting presentation on his ventures into getting vSphere to run on Apple hardware. It seems like a real niche thing, but I’m guessing it has some use cases around the world. You can read more about Mads’ work here

Mac Pro rack mount

After a break, PernixData founder and CEO Poojan Kumar took the stage with Frank Brix as his wingman doing a product demo. Poojan took us through the basics of how PernixData was created (“2 guys and a PowerPoint”) and gave us a really nice presentation on PernixData FVP. The product actually seems very nice, but I haven’t seen many Danish companies adopting it yet. It shows great promise, though, and I believe I could find more than one use case for FVP at my job.

Next up was Scott Lowe, who gave a very good introduction to cloud networking. Even though the presentation was simply called “NSX – Scott Lowe”, most of the time was spent talking about what you would require of a technology deemed cloud networking. For those of us who aren’t working with networking on a daily basis, this was a really great introduction. And of course it always helps that Scott is such a good presenter.

Scott Lowe

After his presentation Scott had a meeting with the Danish VCDX Study group, while the rest of us networked.

The last presentation of the day was done by Lego geek Simon Gallagher, about his vTardis platform and its evolution from a noisy, heat-producing Compaq server to the latest version, which runs on his far more wife-friendly 32GB laptop. vTardis is his lab setup for vSphere, running nested ESXi hosts inside VMware Workstation. This actually seems like a very good way to do test labs. Two of the key takeaways were: Auto Deploy is your friend, and DO invest in an SSD for your lab, otherwise you’ll end up waiting hours for stuff to boot.

That concluded the day’s presentations, and after a run to grab coke and popcorn it was time for Captain America: The Winter Soldier. In my opinion a very American movie with a lot of action, which lets you switch off your brain and just watch 🙂

The day ended with a vBeers event in which more than 30 vMuggers partook, a great conclusion to another great VMUG by the Danish VMUG team. Looking forward to the next one!