
A gathering of links, part 1

As I’ve mentioned previously, I’m not a big fan of regurgitating content from other blogs, as I assume that you already subscribe to posts from Microsoft in particular, as well as other blogs of note.

Still, this blog is one way of providing my colleagues with ongoing information about Hyper-V, Windows Server and System Center (or at least those components I find interesting), which means that from time to time I will post a round-up of links that I’ve collected.

There will be no real order or grouping to these links, at least not for now.

All hosts receive a zero rating when you try to place the VM on a Hyper-V cluster in Virtual Machine Manager – if you have a setup where you use VLAN isolation on one logical network and no isolation on another, you could happen upon this known issue in VMM 2012 SP1.

Ben Armstrong posts about a nifty little trick that lets you set your VM’s resolution to something other than the standard ones. Useful when you run client Hyper-V. There is also a cmdlet-based route worth knowing about, sketched below.
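
For comparison, more recent Hyper-V PowerShell modules expose a Set-VMVideo cmdlet for this. A minimal sketch, assuming that cmdlet is available on your host (this may well be a different route than the one Ben describes), with a hypothetical VM name:

    # Pin the VM's synthetic display to a single custom resolution
    # ("Win8Client" is a hypothetical VM name).
    Set-VMVideo -VMName 'Win8Client' `
        -ResolutionType Single `
        -HorizontalResolution 1600 `
        -VerticalResolution 900

    # Check the result:
    Get-VMVideo -VMName 'Win8Client'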

Neela Syam Kolli, Program Manager for DPM, posted a really interesting article called “How to plan for protecting VMs in private cloud deployments?”. There’s quite a bit of useful information in the post, including some solid advice on how to plan your VM-to-CSV ratio – a question I’ve been asked by customers and colleagues alike, so let me go into a bit more detail on it.

From the article (my emphasis):

Whenever a snapshot is taken on a VM, snapshot is triggered on the whole CSV volume. As you may know, snapshot means that the data content at that point in time are preserved till the lifetime of the snapshot. DPM keeps snapshot on the volume on PS till the backup data transfer is complete. Once the backup data transfer is done, DPM removes that snapshot. In order to keep the content alive, any subsequent writes to that snapshot will cause volsnap to read old content, write to a new location and write the new location to the target location. For ex., if block 10 is being written to a volume where there is a live snapshot, VolSnap will read current block 10 and write it to a new location and write new content to the 10th block. This means that as long as the snapshot is active, each write will lead to one read and two writes. This kind of operation is called Copy On Write (COW). Even though the snapshot is taken on a VM, actual snapshot is happening on whole volume. So, all VMs that are residing in that CSV will have the IO penalty due to COW. So, it is advisable to have as less number of CSVs as possible to reduce the impact of backup of this VM on other VMs.  Also, as a VM snapshot will include snapshot across all CSVs that has this VM’s VHDs, the less the number of CSVs used for a VM the better in terms of backup performance.

Despite the language being a bit on the rough side, the way I interpret this is that it’s preferable to have as few VMs as possible per CSV, due to the COW impact on all VHDs on that CSV. Additionally, keep all the VHDs for a VM on the same CSV, as every CSV that hosts a VHD for the VM you’re protecting will suffer the COW performance hit.

The response I’ve gotten from Microsoft and the community regarding the number of VMs per CSV has been “as many as possible until your storage can’t handle the load”, which is perfectly logical but also very vague. If you use DPM to protect your VMs, you now have a bit more to lean on when sizing the environment. There is of course a reason that we use CSVs rather than a 1:1 ratio of volumes to VHDs, so don’t go overboard with this recommendation.
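
To see how this plays out in your own environment, something like the following maps each VM’s virtual disks to the CSV they live on. A minimal sketch, assuming the Hyper-V PowerShell module on the host and the default C:\ClusterStorage\VolumeN path convention:

    # List each VM's virtual disks together with the CSV they live on,
    # to spot VMs whose disks are spread across several CSVs.
    Get-VM | Get-VMHardDiskDrive | ForEach-Object {
        [PSCustomObject]@{
            VM   = $_.VMName
            CSV  = ($_.Path -split '\\')[0..2] -join '\'   # e.g. C:\ClusterStorage\Volume1
            Disk = $_.Path
        }
    } | Sort-Object VM, CSV | Format-Table -AutoSize

VMs that show up under more than one CSV are the ones the quoted advice applies to.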

Moving on.

This one is pretty hardcore but interesting nonetheless: How to live debug a VM in Hyper-V. If nothing else, it shows that there’s still a use for COM ports in a VM.
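
For reference, the moving parts involved look roughly like this. A sketch only, assuming a Windows guest and the standard Hyper-V and bcdedit tooling – the VM name and pipe name are examples, and the linked post has the authoritative walk-through:

    # Expose the VM's COM1 as a named pipe on the host so a kernel
    # debugger can attach ("DebugVM" is a hypothetical VM name).
    Set-VMComPort -VMName 'DebugVM' -Number 1 -Path '\\.\pipe\DebugVM'

    # Inside the guest, enable serial kernel debugging and reboot:
    #   bcdedit /debug on
    #   bcdedit /dbgsettings serial debugport:1 baudrate:115200

    # On the host, attach WinDbg to the pipe:
    #   windbg -k com:pipe,port=\\.\pipe\DebugVM,resets=0,reconnect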

vNiklas is, as always, blogging relentlessly about how to use PowerShell for pretty much everything in life. This one is about how to resize a VHDX while it’s still online in the VM.
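
I won’t repeat his post, but the core of it looks something like the following. A sketch, assuming a Hyper-V version that supports online resize, a VHDX attached to a SCSI controller, and example paths and sizes:

    # Grow the VHDX from the host while the VM is running.
    Resize-VHD -Path 'C:\ClusterStorage\Volume1\VM01\Data.vhdx' -SizeBytes 200GB

    # Inside the guest, extend the volume into the new space:
    #   $max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
    #   Resize-Partition -DriveLetter D -Size $max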

If you want to know more about how NIC teaming in Windows Server 2012 works, Didier van Hoye takes a look at how Live Migration uses various configurations of the NIC teaming feature. A very informative post with a surprising result!
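
As a point of reference, a team like the ones he tests can be stood up with the built-in NetLbfo cmdlets in Windows Server 2012. A sketch with example NIC and team names – the post compares several combinations of teaming mode and load-balancing algorithm:

    # Create a switch-independent team using the Hyper-V port
    # load-balancing algorithm (one of the configurations compared).
    New-NetLbfoTeam -Name 'LMTeam' `
        -TeamMembers '10GbE-1','10GbE-2' `
        -TeamingMode SwitchIndependent `
        -LoadBalancingAlgorithm HyperVPort

    # Verify team state and member NICs:
    Get-NetLbfoTeam -Name 'LMTeam'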

Another post from Ben Armstrong: this time he does some troubleshooting when his VMs hover longer than expected at 2% when starting up. It turns out to be due to a name resolution issue.

Microsoft and Cisco have produced a white paper on the new Cisco N1000V Hyper-V switch extension. It’s not very technical, but it is informative if you need to know a little bit more about how the N1000V and the Hyper-V switch relate to each other.

The Hyper-V Management Pack Extensions 2012 for System Center Operations Manager 2012/2012 SP1 gives you even more options when monitoring Hyper-V with SCOM.

I mentioned that the post about how to live debug a VM was pretty hardcore, but I’d like to revise that statement. Compared to this document, called “Hypervisor Top-Level Functional Specification 3.0a: Windows Server 2012”, that post is as hardcore as non-fat milk. Let me quote a select piece of text from the specification:

The guest reads CPUID leaf 0x40000000 to determine the maximum hypervisor CPUID leaf (returned in register EAX) and CPUID leaf 0x40000001 to determine the interface signature (returned in register EAX). It verifies that the maximum leaf value is at least 0x40000005 and that the interface signature is equal to “Hv#1”. This signature implies that HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX are implemented.

My thoughts exactly! And the specification is 418 pages long, so it should last you all through your vacation.

Finally (as I’m out of time and starting to get to stuff that’s a bit old), Peter Noorderijk writes about problems with Broadcom and Emulex 10 Gbit NICs. They resolved the issue by adding Intel NICs, but another workaround is to turn off checksum offloading. Updated Emulex and Broadcom drivers should be expected.
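
If you need the workaround in the meantime, the NetAdapter cmdlets in Windows Server 2012 can toggle it. A sketch, with an example adapter name – remember to re-enable offloading once fixed drivers arrive:

    # Inspect the current checksum offload settings, then disable them.
    Get-NetAdapterChecksumOffload -Name '10GbE-1'
    Disable-NetAdapterChecksumOffload -Name '10GbE-1'

    # Once updated drivers are in place:
    #   Enable-NetAdapterChecksumOffload -Name '10GbE-1'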