EVO:Rail – First Impressions

Yesterday at the keynote, one of the new offerings VMware announced was EVO:Rail, a hyperconverged infrastructure appliance. This means OEMs have created small 2U form-factor, rack-mountable machines that each hold four ESXi hosts, and VMware has then provided the software that goes on top.

So no, VMware is NOT selling us hardware; we have to go to OEM vendors like Dell, EMC, Fujitsu, Inspur, Net One Systems and Supermicro to get these.

Licensed in the SKU is all the software you need to get an SDDC up and running in around 15 minutes from the moment it has been racked and cabled. For full details on what's in an EVO:Rail offering, look at Duncan's post under Links.

So who is this for? People who want to get close to SDDC nirvana fast, and people just starting out might find this very tempting. I don't see myself using EVO:Rail at my workplace, but small companies that, for whatever reason, don't want a cloud solution could find EVO:Rail very promising, as it contains everything you need to get started, has a simple interface, and installs itself.

This is the first offering, but they were hinting at rack-scale or datacentre-scale versions of this, which might be appealing to bigger companies. EVO:Rack Tech Preview

For me personally, it's not the process of getting the hardware and software installed that is the biggest problem with moving toward an SDDC. The problem is getting all your tweaks and various administrative procedures implemented in the software. It would be nice if you could standardize those administrative procedures to fit something like EVO:Rail, but that is usually a far larger battle than getting the money for the software and hardware.

I’m sure we’ll see a lot more press about EVO:Rail in the coming months.

Links:

EVO:Rail at Yellow-bricks

Chad Sakac’s piece on EVO:Rail (Long read)

Mike Laverick’s guide to EVO:Rail

Eric Sloof on EVO:Rail

Flash Storage – My Take

Everyone is talking about flash these days, and I suspect the coming #VMworld will be swarming with new startups entering the flash storage or hybrid storage market.

So I thought I'd give you my ideas about flash in a normal-sized virtualized environment.

The setup I work with consists of two EMC VNX5700s in a metro-cluster configuration with EMC VPLEXes in front of them, also known as a stretched datacentre.

Having recently upgraded to this from an aging HP EVA 6400, the performance boost from the VNXes is quite huge, and the tiering actually moved a lot of dead data to SATA, which helped the databases gain a nice boost.

But at the moment we're nowhere near the capacity of the VNXes; if anything, they seem to be over-performing somewhat. I've created a 10-disk disk group (2x RAID 5) for a small Exchange environment, all 10 disks being 15,000 RPM FC drives. If we assume a maximum of around 200 IOPS per disk, that disk group should give me at most about 2,000 IOPS. However, during backups I see it consistently doing 6,000 IOPS for around 2 hours. At first I found this odd, but I verified that the VM also saw 6,000 IOPS during that period.
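As a quick sanity check, here is the back-of-envelope arithmetic behind that expectation; the 200 IOPS per 15K spindle is just the usual rule-of-thumb figure, not something I measured:

```python
# Back-of-envelope ceiling for the 10-spindle disk group.
spindles = 10             # two RAID 5 sets, ten 15K FC drives in total
iops_per_spindle = 200    # rule-of-thumb for 15K drives (assumption)

theoretical_ceiling = spindles * iops_per_spindle
print(theoretical_ceiling)  # 2000 IOPS at most from the raw spindles
```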

This is due to caching and a nice read-ahead algorithm. The disks themselves have a small cache, the VNXes have their own cache plus some SSDs used as FAST Cache, and on top of that the VPLEXes also have a cache. So I'm getting more IOPS out of the front end than the physical disks alone should deliver :).
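Just to illustrate what those stacked caches imply: if the spindles themselves top out around 2,000 IOPS, the combined cache layers would need to serve roughly two thirds of the requests to explain 6,000 IOPS at the front end. How that splits between the drive caches, FAST Cache and the VPLEX cache I can't tell, so this is only an estimate of the combined effect:

```python
# Implied combined cache hit rate across all the layers:
# anything above the spindle ceiling has to be served from some cache.
frontend_iops = 6000   # observed at the VM during backups
spindle_iops = 2000    # theoretical ceiling from the sketch above

implied_hit_rate = 1 - spindle_iops / frontend_iops
print(f"{implied_hit_rate:.0%}")  # ~67% of requests served from a cache layer
```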

So at the moment I have nothing running that pushes the VNXes anywhere near their performance limits. However, we're in the process of building a few data warehouses, and I suspect (and hope) that these will be able to put a bigger load on the VNXes.

Should we ever hit the limits of the VNXes, there are currently quite a few options.

This is the road I would take at the moment:

1. The data warehouses are currently mostly read intensive, so I would stick a few SSDs into the ESXi hosts and enable vSphere Flash Read Cache (a rough sizing sketch follows after this list).

2. The ETL machines are a bit more write intensive, so to help with that I would definitely go for PernixData's FVP solution, which acts as both a read and a write cache. I've only heard good things about it from fellow VMUG'ers who have had it in their production environments.

3. If SSDs alone aren't enough, sticking Fusion-io cards or the equivalent into the hosts should give another big boost. The good thing about these is that they also work with PernixData's FVP solution.

4. If all else fails and we can't get enough I/O into the hosts, something like an all-flash array might be an option, but that would really have to be performance tested. These are also very easy to put behind the VPLEXes and present to the ESXi hosts.
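For option 1, here is a rough, purely hypothetical sizing sketch of how far a host-local SSD would stretch as vSphere Flash Read Cache; none of these numbers come from our environment, since we haven't sized the data warehouse working set yet:

```python
# Hypothetical vSphere Flash Read Cache sizing sketch.
# Every figure below is an illustrative assumption, not a measurement.
ssd_capacity_gb = 400          # assumed host-local SSD dedicated to the read cache
vms_per_host = 4               # assumed data-warehouse VMs per host
working_set_gb_per_vm = 150    # assumed hot read working set per VM

cache_per_vm_gb = ssd_capacity_gb / vms_per_host
coverage = cache_per_vm_gb / working_set_gb_per_vm

print(f"Cache per VM      : {cache_per_vm_gb:.0f} GB")
print(f"Working-set cover : {coverage:.0%}")  # share of hot data that fits in cache
```

The point is only that the per-VM cache reservation has to be weighed against the hot data set; the real numbers will have to come from monitoring once the data warehouses are running.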

So I'm fairly certain our options for scaling up performance are quite good.

Looking forward to VMworld starting next week in San Francisco!