Next #VMugDK sold out in less than 8 hours – Add your name to the waiting list!

So it's that time of the year again, when the Danish VMug team will hold their VMug and a Movie show. Once again it will be held in the Palads theatre in central Copenhagen. As I said, it's sold out, but do add your name to the waiting list, as a bigger theatre room can usually be provided if the sponsors see enough people on the waiting list:

This year's theme is customer presentations, and wow, what a nice lineup: 4 community sessions and a community panel discussion as well.

We'll start off with a 45-minute session from fellow bloggers Morten G. Johansen and Martin Therkelsen from Wolseley.

After a sponsor session and lunch, we'll be back with Ibrar Ashraf from SamPension.

After another sponsor session, Michael Munk will be on stage; who can forget his ranting session on NSX at last year's show? Surprisingly enough he's again listed as Zitcom, so I wonder if a new job change has been completed.

The last community session will be from Jakob Hartmann from Frederiksberg Forsyning, who will enlighten us about his path to vSphere 6.0, touching on the possibilities and barriers he’s encountered along the way.

Something new this year will be a Q&A panel, presumably like the "ask the vExperts/Expert Bloggers/Industry Giants" sessions we've seen at VMworld. Those are usually very fun to attend, and you get to ask the questions you've been sitting on to a panel of experts who have been around in the virtual world. The panel will consist of:

I'm really looking forward to this session, and I hope that you will come forward and ask some questions; otherwise it won't be as fun and enlightening as it could be.

After a sponsor give-away, we'll be ready for the movie. Last year Liselotte surprised us all by starting with the first few minutes of 50 Shades of Grey before switching over to Kingsman. This year the movie is Batman v Superman: Dawn of Justice.

After the movie there will be a #VBEERS. Please sign up if you plan to attend, so the VBEERS team can find a place that's big enough for all of us.

Hope to see you all there!



vSphere 6.0

So it’s finally here!
The long-awaited vSphere 6.0 has been announced, and we've been told it will be ready for download mid-March. I've done a small write-up of some of the new features in this release.

Release cycles
As seen at VMworld, where many customers expected VMware to release vSphere 6.0, VMware has adopted a strategy of longer release cycles for the core vSphere component. So instead of a yearly release we are most likely to see roughly 18-month release cycles instead. This is because the hypervisor needs to be stable at all times, not something you'd need to upgrade all the time. On the other hand, products like vRealize Operations or vRealize Automation will see shorter release cycles. I really like this approach, as the hypervisor sits at the core of your data center, and as such we need the absolute most stable product there.

Multi-processor Fault Tolerance
We've seen multiprocessor Fault Tolerance demoed at a few VMworlds so far, but now it's finally here. With vSphere 6.0 you can have Fault Tolerance on VMs with up to 4 vCPUs. This finally makes Fault Tolerance useful: I haven't seen many critical VMs with only 1 vCPU (even vCenter Server needs more than one), so the use cases for the old Fault Tolerance were few and far between.
A lot of older applications weren't built with High Availability in mind, and this is where FT comes into play; with the new 4 vCPU limit a lot more of those older applications can be protected by FT as well. I'm guessing a lot of customers will use this feature in their data centers. However, the bandwidth requirements are quite steep, so cross-data-center FT might not be feasible just yet :). As with the old FT, this won't save you if your application corrupts its data; both instances will then be corrupt. For that you really need applications built with High Availability in mind.
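To make the vCPU limits above concrete, here's a minimal sketch of an eligibility check. This is a hypothetical helper for illustration only, not a VMware API, and real FT has many more requirements (licensing, dedicated network bandwidth, supported hardware):

```python
# Hypothetical helper based on the vCPU limits mentioned above.
FT_MAX_VCPUS_LEGACY = 1   # old single-processor FT
FT_MAX_VCPUS_V6 = 4       # SMP-FT limit in vSphere 6.0


def ft_protectable(vcpus, vsphere_major=6):
    """Can this VM be protected by FT, judging by vCPU count alone?"""
    limit = FT_MAX_VCPUS_V6 if vsphere_major >= 6 else FT_MAX_VCPUS_LEGACY
    return vcpus <= limit
```

So a 4 vCPU vCenter Server appliance would now pass the check, where under the old FT anything beyond 1 vCPU was out.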

Inter vCenter vMotion
This, I think, is one of the biggest new features of vSphere 6.0. The ability to vMotion between two vCenters is something a lot of people have been waiting for, for a looong time. Moving VMs without downtime to a new vCenter running a newer vSphere wasn't easy if you had deployed distributed switches, but that really should be a thing of the past now. A whole new set of design architectures opens up because of this feature.

Long distance vMotion
Another really nice new feature is long-distance vMotion. Where before you were limited to 5 ms latency, or 10 ms with Enterprise Plus, you can now vMotion across links with up to 100 ms of latency. For us Europeans that means we can vMotion VMs across borders to neighboring countries, or at least across the country. This opens up quite a few new scenarios for highly available infrastructure. In Denmark, for example, we could vMotion between Zealand and Jutland, which would solve some of the power issues we have :).

Web client
The last thing I will write about is the client. The new Web Client is much improved over the previous ones, and both looks and feels more like the C# client, which is still available in vSphere 6.0. Unfortunately it's still not HTML5, which would have been preferred so the Web Client could work on more operating systems.



PernixData FVP – My Impression

On Wednesday I attended a session at VMworld about PernixData FVP with Satyam Vaghani and Frank Denneman.

And I must say I do like their approach to the flash idea. They want to make their solution available to all applications that run on top of vSphere, meaning even all your old P2V'd systems can get a huge boost in IOPS.

I won't go into much detail on how it works; you can find that on their webpage. But there are two things that I really found neat. Moving forward they want their software to offer not only flash acceleration but also the ability to use the RAM you have in your hosts. Think about a 1 TB RAM cache for IOPS that doesn't sit behind a storage controller!

The other thing they were working on is Fault Domains. I asked them how you would deploy their solution in a stretched metro cluster setup. I've worked on a setup with two sites where vSphere only sees one, using affinity groups to separate the running VMs. The idea with fault domains is this: the VMs of site 1 could have their main cache in site 1, but with a replica in site 2, so if the entire site 1 fails, vSphere HA can easily recover the VMs in site 2, even if the data hasn't been persisted to disk in site 1.



Cormac's blog on Pernix


#VMworld – My expectations

Summer is approaching fast, and soon this year's VMworlds in San Francisco and Barcelona will be upon us. This brings with it a lot of speculation on what will be unveiled and what will happen to old products.


As stated in an earlier post, I've submitted a session to VMworld about vCOps, and I'm now waiting for June 2nd to hear whether or not that session has been accepted. Regardless of the outcome, I will attend the US VMworld in San Francisco again. Hotel, flight and VMworld ticket have already been purchased, but a short trip to Barcelona might be required if the session is accepted. This will be my 5th VMworld.

This post is about my views on VMworld and what I think will happen there. It's not based on any facts, so don't place any bets on what I write here.

Last year nearly 23,000 VMworld fans attended the US conference. That is a huge number, but what's more amazing is that, if I recall correctly, nearly 70% of the attendees were first-timers. If even more people attend this year, I fear the Moscone Center will be too small 🙂

So what can we expect from VMware in August? They have often announced the new version of vSphere there, which suggests we would see them unveil vSphere 6.0 at the US conference. I have, however, heard talk that suggests otherwise: that VMware will focus more on stability when it comes to vSphere and lengthen the release cycles to 18-24 months from the current 12. This in my eyes is great news; the base virtualization layer must be very stable, as it is the foundation that all their other products build upon. So I don't expect vSphere 6 until sometime early 2015.

I do expect to see version 6 of vCOps, however. There has been a closed beta for a while now, and it contains some improvements that are really needed, like different licenses for different vCenters or clusters. You might not want the biggest license for your development cluster, and a customer might not want to pay for the biggest license in your hosting center. I'm also told that custom reports have gotten a big lift.

NSX was the big talking point at last VMworld, but it hasn't really taken off like VSAN has. I think this might be due to the lengthy beta phase of VSAN, and I'm hoping VMware will let people have a go at NSX before buying it. One of the features I would love to see in NSX is the ability to swap the built-in firewall and load balancer for something else. I'm guessing that a lot of financial and pharmaceutical customers, at least, would have to qualify the built-in versions; adoption could be faster if the firewall could be a Check Point or Juniper. The same goes for the load-balancing part, where a pluggable F5 or Citrix NetScaler version would help win over a lot of customers.

Last year we had Train supported by Imagine Dragons at the US conference and Taio Cruz in Barcelona. That's two widely different styles of music. I'm actually out of guesses on which bands will be selected this year; guess we just have to wait and see 🙂

Food, drinks and snacks
One thing I've always missed at VMworld is better catering. It seems like it's being kept to a bare minimum; last year in SF I only tried the lunch once, and it was really bad. Barcelona two years ago was also bad, which kind of baffles me considering the amount of great food you can find in Barcelona. Looking at TechEd or CiscoLive, my colleagues who come home from those are actually amazed at the amount of food, drinks and snacks available to them during the day, both healthy and not-so-healthy stuff. I remember having to bring my own Coca-Cola to VMworld last year. VMware, please step up the effort here; it's pretty much the only bad thing I have to say about VMworld.

My goal this year at VMworld is to collect as many T-shirts as I can grab and review them during VMworld, so if you find my reviews nice you can go grab the same shirts. If it really takes off, I'm hoping people will suggest where I could find great shirts too.

Hmm, writing about this makes me hope that August comes much quicker! 🙂 I'm really looking forward to VMworld and hope to see a lot of you there!





*Disclaimer: this post is based on my own findings, so you might experience different stats, but the general guidelines are sound to my knowledge.*

NUMA, or Non-Uniform Memory Access, has been around in Intel processors since 2008, when they introduced the Nehalem processors.


VMware has supported exposing the NUMA topology of the underlying processors to the guest OS since vSphere 5.0. It's automatically enabled if you create a VM with more than 8 vCPUs; the only requirement is that your VM is at hardware version 8 or later. You can manually edit advanced settings so that the NUMA topology is exposed even with a lower number of vCPUs. However, make sure you clearly understand how that impacts your VM before you do so.

2x8 NUMA sockets and cores

CPU hot-add and vNUMA

One thing you have to be aware of when deploying machines that cross NUMA boundaries is CPU hot-add: if you enable hot-add CPU on your guest, VMware hides the NUMA information from the guest OS, meaning the guest OS can no longer intelligently place applications and memory on the same NUMA node. For performance-intensive systems I therefore recommend turning off the CPU hot-add feature. Hot-add memory still works.
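Putting the rules above together, here's a minimal sketch of when vNUMA ends up exposed to a guest. The function name and parameters are hypothetical, for illustration only; it just encodes the hardware version 8 requirement, the more-than-8-vCPUs default, the advanced-settings override, and the CPU hot-add caveat:

```python
def vnuma_exposed(vcpus, hw_version, cpu_hot_add=False, forced=False):
    """Sketch of when vSphere exposes vNUMA to a guest (hypothetical helper).

    Rules as described in the text:
    - requires virtual hardware version 8 or later
    - enabled automatically for VMs with more than 8 vCPUs,
      or forced via advanced settings for smaller VMs
    - hidden again if CPU hot-add is enabled on the VM
    """
    if hw_version < 8:
        return False
    if cpu_hot_add:
        # hot-add CPU hides the NUMA topology from the guest OS
        return False
    return vcpus > 8 or forced
```

So a 16 vCPU VM at hardware version 9 sees vNUMA, but the exact same VM with CPU hot-add enabled does not, which is what the Coreinfo screenshots below illustrate.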

The images show Coreinfo run on a 4-CPU machine with 4 cores each, before and after CPU hot-add was enabled.

Coreinfo 4x4 without hot-plug

4x4 with hot-add CPU

Coreinfo 4x4 with hot-plug

Crossing a NUMA boundary

NUMA is a memory thing, but you can cross a NUMA boundary in more than one way. If you have a 4-way machine with 8-core CPUs and 128 GB of memory, each NUMA node holds 8 cores and 32 GB of RAM. That means if you create a VM with more than 8 vCPUs OR more than 32 GB of RAM, that VM might have to access memory from another CPU.
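That arithmetic is easy to get wrong when sizing VMs, so here's a small sketch of the check, assuming an evenly populated box. The helper names are made up for illustration:

```python
def numa_node_size(sockets, cores_per_socket, total_ram_gb):
    """Per-node resources on an evenly populated machine."""
    return cores_per_socket, total_ram_gb / sockets


def crosses_numa_boundary(vm_vcpus, vm_ram_gb,
                          sockets, cores_per_socket, total_ram_gb):
    """True if the VM needs more vCPUs OR more RAM than one node offers."""
    node_cores, node_ram = numa_node_size(sockets, cores_per_socket,
                                          total_ram_gb)
    return vm_vcpus > node_cores or vm_ram_gb > node_ram
```

With the 4-way, 8-core, 128 GB example from the text, an 8 vCPU / 32 GB VM still fits in one node, while a 9 vCPU VM, or an 8 vCPU VM with 40 GB of RAM, crosses the boundary.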

If you cross a NUMA boundary there is a penalty, which can be shown in part by Coreinfo.

NUMA penalty

This example is a 4-way guest with 8 cores on each socket, for a total of 32 cores, running on a 2x10-core system, so 12 of the cores run in HT. It shows the penalty you pay for accessing memory located on another node; Coreinfo has deemed accessing local memory the fastest. I have, however, seen some systems where Node 0 to Node 0 was slower than Node 1 to Node 1.

In normal environments this might not really be a problem, as access to storage or networking is far slower than accessing memory on another CPU. But in high-performance clusters it is something you should consider when building your physical infrastructure. I've spoken to one customer who had seen up to a 35% degradation in overall performance from crossing a NUMA boundary; because of that, they only advise VMs that can fit into one physical CPU and its memory.

Understand your hardware configuration

One thing you really need to pay attention to is your hardware configuration. Setting the wrong socket and core configuration in VMware compared to your physical hardware can decrease performance a lot. Mark Achtemichuk has a nice blog post showing how much performance can vary depending on the Number of virtual sockets and Number of cores per socket settings in vSphere.

On top of that, your hardware vendor might not be aware that the config they're selling you crosses NUMA boundaries. For example, I've bought 2-CPU blades from Dell that were built on 4-way motherboards, so I could add more CPUs later. To keep the cost down, Dell used 4 GB memory modules and distributed them evenly across all the banks on the motherboard, meaning half my memory can now only be accessed with a penalty. If memory performance is needed in your environment, the cost of using 8 GB memory modules might be worth it.

Microsoft and NUMA support

Windows Server has been NUMA-aware since the Windows Server 2003 Enterprise and Datacenter editions, meaning the OS can schedule its threads so that processes accessing the same areas of memory are placed on the same NUMA node, which helps reduce the penalty of crossing a NUMA boundary. Microsoft SQL Server has had NUMA support since 2005, though it seems it wasn't fully supported until 2008 R2. For SQL Server it means that each database engine is started on its own NUMA node. Even if there is only one database engine, it will attempt to start that engine on the second node; this is because the OS allocates its own memory on the first node, so NODE 0 has less memory available for other applications than the other nodes.

For SharePoint farms, Microsoft's best practice actually advises you NOT to cross NUMA boundaries, but to adopt a scale-out approach instead. Of course that means selling more SharePoint licenses, but make sure you test how your performance is impacted when crossing a NUMA boundary on a SharePoint farm.