#VMworld T-Shirts Part 1

So once again VMworld is on, and I’ve been around the Solutions Exchange once to collect T-shirts. There are probably quite a few that I didn’t get, so feel free to tweet me new ones.

VMworld front

Name: VMworld 2015 Official T-shirt
Model: Hanes Tagless 100% Preshrunk Cotton
Size: XXL
Quality: Once again VMware uses a very nice quality shirt that feels soft and looks like it can withstand quite a few washings.
Print: This year the print is very simple, just the logo and VMworld 2015.
Score: 7/10
Conclusion: A bit of a boring shirt, but very nice quality.

Pernix Front

Name: PernixData
Model: Bella + Canvas 100% Cotton
Size: 3XL
Quality: Again PernixData has spent a bit too little on quality, which is a shame because they always have nice T-shirt concepts.
Print: A very nice Star Wars X representing the X from PernixData, which was well themed with their party. Best I’ve seen this year (Star Wars fan).
Score: 6/10
Conclusion: I can only give this shirt 6 out of 10 because of the quality. I hope they do better next time around, because as I’ve written before, you want to create shirts people will wear more than once, and this one feels like it will fall apart after a few washes. But very nice that they have them up to 3XL 🙂

Violin Front

Name:  Violin Memory
Model: American Apparel 100% Cotton
Size: XL 😦
Quality: A very nice quality shirt, probably the best quality I’ve seen this year.
Print: A Grateful Dead cover saying “I saw a dead Disk in my Datacenter”.
Score: 8/10
Conclusion: Giving this shirt an 8 out of 10 for quality and print; however, the fact that they didn’t make them in 2XL drags the score down.

Coho Front

Name: CohoData
Model: Gildan Softstyle 100% Cotton
Size: 2XL
Quality: A reasonably nice quality shirt that doesn’t feel like it will keep its shape after a few washes, but it is very soft.
Print: An almost Back to the Future-looking print that says Born to Perform, and on the back a simple Coho Data logo.
Score: 7/10
Conclusion: The quality drags the score down, but that might change after I have washed it a few times.

PernixData FVP in the Home Lab Part 3. (SUCCESS)

I must admit that I made quite a few mistakes in this install.

I was told that 6.0 is not supported by PernixData yet, but it has been known to work. So I installed a 5.5 VCSA instead, and voilà, it joined my AD without any of the problems the 6.0 one had (guess I need to try the 6.0 VCSA again). I then uninstalled and reinstalled the PernixData management software, and still no plugin showed up. My friend @FrankBrix told me to look at the log files of the FVP software, and sure enough, the answer was there: the log files clearly showed that the software couldn’t connect to the SQL server, and I discovered I had entered the wrong password! One gripe there: I do think the installer could have validated the credentials instead of just writing them to a config file. After correcting that, the FVP plugin worked on 5.5.

I thought I would give 6.0 another shot before configuring anything in FVP, and I could also get the FVP plugin into the 6.0 vCenter. However, it kept giving me authentication errors every time I tried anything with FVP. So I went back to 5.5 (now my 4th vCenter install) and followed the guide to create an FVP cluster. After that I tried to add resources to my cluster, but it kept saying “No PernixData compatible hosts have been detected…” So I double-checked that the VIB was actually installed and tried rebooting the ESXi host, but to no avail.
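For reference, checking that the host module made it onto the box boils down to running `esxcli software vib list` on the ESXi shell and looking for a PernixData entry. A small sketch of that check (the function name and the sample output format are mine, not PernixData’s):

```python
def has_pernix_vib(vib_list_output: str) -> bool:
    """Return True if any line of `esxcli software vib list` output
    mentions PernixData (case-insensitive)."""
    return any("pernix" in line.lower() for line in vib_list_output.splitlines())

# Example: feed it the captured command output, e.g.
#   output = subprocess.check_output(["esxcli", "software", "vib", "list"], text=True)
#   print(has_pernix_vib(output))
```

On the ESXi shell itself, the equivalent one-liner is simply `esxcli software vib list | grep -i pernix`.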

Again @FrankBrix to the rescue. We set up a WebEx, and about 30 seconds after he took over my screen he had solved my problem. After reinstalling and creating datacenters and clusters 4 times, the last time I had accidentally forgotten to put my host INTO the cluster. No wonder FVP couldn’t find any hosts.

With that fixed, PernixData started its magic. And WOW, I must say I can already feel a big difference from the last days without caching, and I’m looking forward to seeing the write cache in action once I get all my machines up to speed again.

And damn, just 1½ hours after it was created, look at these stats:

FVP in action

57,000 IOPS saved from my small little 4-disk Synology, no wonder I can feel a big difference.

FVP Latency

And the latency has really come down as well. You can see a big spike at around 10:22 PM from the datastore, but the VM never saw it. And this is from a single cheap Kingston SSD. I will have to try this out in a heavier environment than my homelab sometime soon, and I will post more stats when this has been running for some time.

Once again a big thanks to @frankbrix http://www.vfrank.org/ for the help.

 

PernixData FVP in the Home Lab Part 1.

So I finally got around to upgrading my homelab to 32 GB of RAM, so I can run vCenter all the time, which is needed for PernixData’s FVP solution. I’ve also gotten a cheap Kingston V300 120 GB SSD for testing.

I’ve been running for 2 weeks with vFlash Read Cache from VMware, which seriously sped up my homelab. However, I did run into one caveat.

I had let the VCSA use some of the vFlash like all the other servers; however, I couldn’t start up my VCSA after a total shutdown of my homelab (to install the extra RAM). It failed with “Could not create vFlash cache: msg.vflashcache.error.VFC_FAILURE”. As it’s the vCenter Server that hands out vFlash to the other servers, it seems it can’t use it itself. I might be mistaken in this, as I have not tested it again.

I found @h0bbel’s article on it and removed vFlash from the vCenter Server, and presto, it could boot. After it had booted up, the rest of my servers could be booted normally.

When I previously had only 16 GB of RAM, I had let the host use 30 GB of the 120 GB of flash for swap, and that was way faster than swapping to my NAS. However, it was still a lot slower than after I had upgraded to 32 GB of RAM. That left me with roughly 90 GB of SSD to use for caching.

One thing I found annoying about vFRC from VMware is that it is configured per VMDK, meaning I had to edit each machine and set aside part of the SSD for caching for that particular VMDK. I’d much rather have it use the SSD to boost the entire datastore instead of trying to figure out how much each of the VMDKs should have. Judging from Duncan’s tweets about it, that will be added in a coming version of vFRC.
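To give an idea of the per-VMDK bookkeeping this forces on you: with roughly 90 GB of SSD to hand out, you end up doing sizing math like the sketch below. The helper name and the proportional-split policy are my own illustration, not anything vFRC provides:

```python
def split_cache(budget_gb: float, vmdk_sizes_gb: list) -> list:
    """Split a vFRC SSD budget across VMDKs in proportion to their size.
    Purely illustrative: vFRC itself just takes whatever number you type
    in per VMDK; this is the arithmetic you'd do by hand."""
    total = sum(vmdk_sizes_gb)
    return [round(budget_gb * size / total, 1) for size in vmdk_sizes_gb]

# Three VMs with 40, 60 and 100 GB disks sharing the ~90 GB budget:
print(split_cache(90, [40, 60, 100]))  # → [18.0, 27.0, 45.0]
```

Exactly the kind of manual allocation a datastore-wide cache would make unnecessary.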

As I have written earlier, I was lucky enough to be selected as a PernixPro, and one of the nice benefits of that is an NFR license for FVP. So that is what I’m going to install and write about in Part 2 of this blog post.

PernixData FVP – My Impression

On Wednesday I attended a session at VMworld about PernixData FVP with Satyam Vaghani and Frank Denneman.

And I must say I do like their approach to the flash idea. They want to make their solution available to all applications that run on top of vSphere, meaning even all your old P2V’d systems can get a huge boost in IOPS.

I won’t go into much detail on how it works; you can find that on their webpage. But there are 2 things I found really neat. Moving forward they want to offer not only flash acceleration in their software, but it will also be able to use the RAM you have in your hosts. Think about a 1 TB RAM cache for IOPS that doesn’t sit behind a storage controller!

The other thing they are working on is Fault Domains. I asked them how you would deploy their solution in a stretched metro cluster setup. I’ve worked on a setup with 2 sites where vSphere only sees 1, using affinity groups to separate the running VMs. So the idea with fault domains is this: the VMs of site 1 could have their main cache in site 1 but with a replica in site 2, so if the entire site 1 fails, vSphere HA can easily recover the VMs in site 2, even if the data hasn’t been persisted down to disks in site 1.
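The placement rule described above is simple to state in code. This is a toy sketch of the concept as I understood it from the session, with names and structure entirely my own (not PernixData’s implementation):

```python
def place_cache(vm_site: str, sites=("site1", "site2")):
    """Toy fault-domain placement: the primary write cache stays in the
    VM's own site, the replica goes to the other site, so a whole-site
    failure never loses unpersisted cached writes."""
    primary = vm_site
    replica = next(s for s in sites if s != vm_site)
    return primary, replica

print(place_cache("site1"))  # → ('site1', 'site2')
```

With that rule, an HA restart of a site 1 VM in site 2 always finds the replica of its cache locally.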

Links

PernixData

Cormac’s blog on Pernix

 

Flash Storage – My take.

Everyone is talking about flash these days, and I suspect the coming #VMworld will be swarming with new startups entering the flash storage or hybrid storage market.

So I thought I’d give you my ideas about flash in a normal-sized virtualized environment.

The setup I often work with consists of 2 EMC VNX5700s in a metro cluster setup with EMC VPLEXes in front of them, also known as a stretched datacenter.

Having recently upgraded to this from an aging HP EVA 6400, the performance boost from the VNXs is quite huge. And the tiering actually moved a lot of dead data to SATA, which gave the databases a nice boost.

But at the moment we’re nowhere near the capacity of the VNXs. Actually, it seems to be overperforming somewhat. I’ve created a 10-disk disk group with 2x RAID 5 for a small Exchange environment. All 10 disks are 15,000 RPM FC drives. If we set the max IOPS per disk to around 200, that disk group should give me at most 2,000 IOPS. However, during backups I see it consistently doing 6,000 IOPS for around 2 hours. At first I found this odd, but I verified that the VM also saw 6,000 IOPS during that period.

This is due to caching and a nice read-ahead algorithm. The disks themselves have a small cache, the VNXs have a cache, plus some of the SSDs are used for VNX FAST Cache. On top of that, the VPLEXes also have a cache. So I’m getting more IOPS from the physical disks than I should :).
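The back-of-the-envelope math above looks like this (the 200 IOPS/disk figure is the common rule of thumb for 15k FC drives, and the variable names are mine):

```python
# Rough IOPS estimate for the 10-disk 15k FC disk group described above.
disks = 10
iops_per_disk = 200                 # rule-of-thumb max for a 15,000 RPM FC drive
spindle_max = disks * iops_per_disk
print(spindle_max)                  # → 2000 IOPS from the spindles alone

observed = 6000                     # sustained IOPS seen during backups
cache_served = observed - spindle_max
print(cache_served)                 # → 4000 IOPS effectively absorbed by the
                                    #   disk/VNX/FAST Cache/VPLEX cache layers
```

So roughly two thirds of the observed backup workload is being served above the spindles.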

So at the moment I have nothing running that pushes the VNXs anywhere near their performance limits. However, we’re in the process of building a few data warehouses, and I suspect/hope these will be able to put a bigger load on the VNXs.

Should we ever hit the limits of the VNXs, there are currently quite a few options.

And this is at the moment the road I would take:

1. The data warehouses are currently mostly read intensive, so I would stick a few SSDs into the ESXi hosts and enable vSphere Flash Read Cache.

2. The ETL machines are a bit more write intensive, so to help with that I would definitely go for PernixData’s FVP solution, being both a read and a write cache. And I’ve only heard good things about it from fellow VMUG’ers who have it in their production environment.

3. If SSDs alone aren’t enough, sticking Fusion-io cards or the equivalent into the hosts should give another big boost. And the good thing about these is that they also work with PernixData’s FVP solution.

4. If all else fails and we can’t get enough IOs into the hosts, something like all-flash arrays might be an option. But that would really have to be performance tested. These are very easy to put behind the VPLEXes and present to the ESXi hosts.

So I’m fairly certain our performance scale up options are quite good.

Looking forward to VMworld starting next week in San Francisco!