Archive for the ‘network’ Tag

A gathering of links, part 3


Sorry for the lack of content. I have something I can write about, I think, but work is getting in the way.

For now, a gathering of links instead.

Cripes! I need to do these more often; this took me forever.

If you find these useful, please rate this blog post or leave a comment. There’s really no need for me to spend my morning doing this if no one’s going to read it. 🙂

Real world scenario issues with VMQ


Last week Microsoft published part 1 of an article series about VMQ, detailing how VMQ works and clearing up some misconceptions about the technology.

It’s well worth the read, but the main reason I mention it is that a colleague of mine ran into an issue that’s very much related to VMQ.

The customer is a large hosting provider, and they experienced poor network performance when doing backups and live migrations over their 10 Gbit infrastructure. It’s important to note, though, that they use a virtual switch in Hyper-V to provide vNICs for backup and Live Migration.

As Microsoft’s article states, you won’t get 10 Gbit out of a Hyper-V switch:

Many people have reported that with the creation of a vSwitch they experience a drop in networking traffic from line rate on a 10Gbps card to ~3.5Gbps. This is by design. With RSS you have the benefit of using multiple queues for a single host so you can interrupt multiple processors. The downside of VMQ is that the host and every guest on that system is now limited to a single queue and therefore one CPU to do their network processing in the host. On server-grade systems today, about 3.5Gbps is the amount of traffic a single core can handle.

Their bandwidth was somewhat lower, around 3 Gbps, but that’s most likely due to older hardware.
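To get a feel for why one queue means a hard ceiling, here’s a back-of-envelope model. The per-packet cycle cost and clock speed below are made-up illustrative numbers, not measurements, but they show the shape of the problem: throughput scales with the number of cores doing the packet processing.

```python
# Back-of-envelope model: one queue = one core = a throughput ceiling.
# All numbers are illustrative assumptions, not measurements.

def max_throughput_gbps(cores, cycles_per_packet, cpu_ghz, packet_bytes):
    """Throughput ceiling when `cores` CPUs share the packet processing."""
    packets_per_sec = cores * (cpu_ghz * 1e9) / cycles_per_packet
    return packets_per_sec * packet_bytes * 8 / 1e9

# A single 3 GHz core spending ~10,000 cycles per 1,500-byte packet:
single = max_throughput_gbps(cores=1, cycles_per_packet=10_000,
                             cpu_ghz=3.0, packet_bytes=1_500)

# RSS spreading the same load over 4 cores quadruples the ceiling:
rss = max_throughput_gbps(cores=4, cycles_per_packet=10_000,
                          cpu_ghz=3.0, packet_bytes=1_500)

print(f"1 core: {single:.1f} Gbps, 4 cores: {rss:.1f} Gbps")
# → 1 core: 3.6 Gbps, 4 cores: 14.4 Gbps
```

With those assumed numbers a single core lands right around the ~3.5 Gbps figure Microsoft quotes, which is the whole point: behind a vSwitch, host traffic gets one queue and therefore one core, no matter how fast the NIC is.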

I’m not sure how they’re going to resolve this, but my suggestion was to use separate physical NICs for backup and, if needed, for Live Migration.

As I don’t have enough information about how they’ve designed their Hyper-V environment, I’m not sure if they’ve scaled up or out. If they scale out, bandwidth for Live Migration should be less of an issue as fewer VMs live on each host, but at a certain point you’re still going to need the bandwidth (unless you plan on patching your hosts continuously).

The takeaway from this is that when designing high-end environments, it pays to know the nuts and bolts of the technology you’re using.

Posted 20 September, 2013 by martinnr5 in FYI, Technical


A gathering of links, part 2


I’m not sure that I should keep using the “part n” moniker when naming these posts, but for now that’s the best I’ve got. We’ll see what happens further down the road.

During my vacation and the couple of weeks I’ve been working I’ve collected quite a few interesting links:

  • Steven Ekren describes in detail a new feature in Hyper-V 2012 (that I’ve missed) that Live Migrates a VM if a critical VM network fails.
  • Jose Barreto explains how to manage the new SMB3 features in Windows Server 2012 R2 through PowerShell.
  • Thomas Maurer has a step-by-step post on how to set up the new network virtualization gateway in Hyper-V 2012 R2.
  • Ben Armstrong details how to use PowerShell to keep your Hyper-V replicas up to date with the source VM.
  • If you’re interested in Dell’s new PowerEdge VRTX cluster in a box, check out this article on the Microsoft Storage team blog.
  • Over at Hyper-V.nu Marc van Eijk has a really interesting article series on how to set up Hyper-V hosts using bare metal deployment in Windows Server 2012 R2 and System Center VMM 2012 R2. So far part 1 and part 2 have been posted.
  • Didier Van Hoye takes a thorough look at RDMA and not only talks the talk, he walks the walk as well. Funny and informative, as always.
  • vNiklas rescues missing VMs after a storage migration job went haywire. Strangely enough, he didn’t use a single line of PowerShell. 🙂
  • Ben Armstrong shows how to import VMs to a new Hyper-V server without encountering incompatibility issues by using some clever PowerShell.
  • If you still need more RDMA, check out Thomas Maurer’s post on the subject.
  • If you want to use PowerShell to copy files to a VM using the new RDP over VMBus functionality in 2012 R2, vNiklas has got you covered.
  • Another post from Thomas Maurer; this time he explains the features of the Cisco UCS Manager version 1.0.1 add-in for VMM and how to install it.
  • Finally, a post about CSV cache from Elden Christensen over at the Failover Clustering and Network Load Balancing Team Blog.

Posted 13 September, 2013 by martinnr5 in A gathering of links, Documentation, Elsewhere, FYI, Technical


A gathering of links, part 1


As I’ve mentioned previously, I’m not a big fan of regurgitating content from other blogs as I gather that you already subscribe to posts by Microsoft in particular but also posts on other blogs of note.

Still, this blog is one way of providing my colleagues with ongoing information about Hyper-V, Windows Server and System Center (or at least those components I find interesting), which means that from time to time I’ll post a round-up of links that I’ve collected.

There will be no real order or grouping to these links, at least not for now.

All hosts receive a zero rating when you try to place the VM on a Hyper-V cluster in Virtual Machine Manager – If you have a setup where you use VLAN isolation on one logical network and no isolation on another you could happen upon this known issue in VMM 2012 SP1.

Ben Armstrong posts about a nifty little trick that lets you set your VM’s resolution to something other than the standard ones. Useful when you run client Hyper-V.

Neela Syam Kolli, Program Manager for DPM, posted a really interesting article called “How to plan for protecting VMs in private cloud deployments?” There’s quite a bit of useful information in this post, but also some solid advice on how to plan your VM to CSV ratio, a question that I’ve been asked by customers and colleagues alike, so let me go into a bit more detail on this.

From the article (my emphasis):

Whenever a snapshot is taken on a VM, snapshot is triggered on the whole CSV volume. As you may know, snapshot means that the data content at that point in time are preserved till the lifetime of the snapshot. DPM keeps snapshot on the volume on PS till the backup data transfer is complete. Once the backup data transfer is done, DPM removes that snapshot. In order to keep the content alive, any subsequent writes to that snapshot will cause volsnap to read old content, write to a new location and write the new location to the target location. For ex., if block 10 is being written to a volume where there is a live snapshot, VolSnap will read current block 10 and write it to a new location and write new content to the 10th block. This means that as long as the snapshot is active, each write will lead to one read and two writes. This kind of operation is called Copy On Write (COW). Even though the snapshot is taken on a VM, actual snapshot is happening on whole volume. So, all VMs that are residing in that CSV will have the IO penalty due to COW. So, it is advisable to have as less number of CSVs as possible to reduce the impact of backup of this VM on other VMs.  Also, as a VM snapshot will include snapshot across all CSVs that has this VM’s VHDs, the less the number of CSVs used for a VM the better in terms of backup performance.

Despite the language being a bit on the rough side, the way I interpret this is that it’s preferable to have as few VMs as possible per CSV, due to the COW impact on all VHDs on a CSV. Additionally, keep all the VHDs for a VM on the same CSV, as all CSVs that host a VHD for the VM you’re protecting will suffer the COW performance hit.
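The COW mechanics the article describes can be sketched in a few lines of code. This is a toy model, not how volsnap is actually implemented, but it shows where the “one read and two writes” amplification comes from: the first write to each block while a snapshot is live has to preserve the old content first.

```python
# Toy model of copy-on-write (COW) snapshot behavior, to illustrate the
# I/O amplification described in the DPM article. A simplification, not
# how volsnap actually works.

class CowVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshot = None          # block index -> preserved old content
        self.reads = self.writes = 0

    def take_snapshot(self):
        self.snapshot = {}

    def release_snapshot(self):
        self.snapshot = None

    def write(self, index, data):
        if self.snapshot is not None and index not in self.snapshot:
            self.reads += 1                    # read the old content...
            self.snapshot[index] = self.blocks[index]
            self.writes += 1                   # ...copy it aside...
        self.writes += 1                       # ...then write the new data
        self.blocks[index] = data

vol = CowVolume(["old"] * 4)
vol.take_snapshot()
vol.write(2, "new")            # first write to a block while snapshotted
print(vol.reads, vol.writes)   # → 1 2 (one read, two writes)
```

Note that the penalty only hits the first write to each block; once a block’s old content is preserved, later writes to it cost a single write again. The real pain is that the snapshot covers the whole CSV, so every VM on the volume pays this tax during a backup.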

The response I’ve gotten from Microsoft and the community regarding the number of VMs per CSV is “as many as possible until your storage can’t handle the load”, which is perfectly logical but also very vague. If you use DPM to protect your VMs you now have a bit more to lean on when sizing the environment. There is of course a reason that we use CSV and not a 1:1 ratio of volumes to VHDs, so don’t go overboard with this recommendation.

Moving on.

This one is pretty hardcore but interesting nonetheless: how to live-debug a VM in Hyper-V. If nothing else, it shows that there’s still a use for COM ports in a VM.

vNiklas is, as always, blogging relentlessly about how to use PowerShell for pretty much everything in life. This one is about how to resize a VHDX while it’s still online in the VM.

If you want to know more about how NIC teaming in Windows Server 2012 works, Didier Van Hoye takes a look at how Live Migration uses various configurations of the NIC teaming feature. A very informative post with a surprising result!

Another post from Ben Armstrong; this time he does some troubleshooting when his VMs hover longer than expected at 2% when starting up. Turns out it’s due to a name resolution issue.

Microsoft and Cisco have produced a white paper on the new Cisco N1000V Hyper-V switch extension. It’s not very technical, but informative if you need to know a little bit more about how the N1000V and the Hyper-V switch relate to each other.

The Hyper-V Management Pack Extensions 2012 for System Center Operations Manager 2012/2012 SP1 gives you even more options when monitoring Hyper-V with SCOM.

I mentioned that the post about how to live-debug a VM was pretty hardcore, but I’d like to revise that statement. Compared to this document, called “Hypervisor Top-Level Functional Specification 3.0a: Windows Server 2012”, that post is as hardcore as non-fat milk. Let me quote a select piece of text from the specification:

The guest reads CPUID leaf 0x40000000 to determine the maximum hypervisor CPUID leaf (returned in register EAX) and CPUID leaf 0x40000001 to determine the interface signature (returned in register EAX). It verifies that the maximum leaf value is at least 0x40000005 and that the interface signature is equal to “Hv#1”. This signature implies that HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX are implemented.

My thoughts exactly! And the specification is 418 pages long so it should last you all through your vacation.
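If you squint, though, the quoted paragraph boils down to two register comparisons. Here’s a sketch of that detection logic with hard-coded values standing in for real CPUID results (Python obviously can’t issue the CPUID instruction itself, so treat the inputs as mocked registers):

```python
# Sketch of the hypervisor detection logic the specification describes.
# The register values are hard-coded stand-ins for actual CPUID results.

def signature_string(reg):
    """Decode a 32-bit register into its 4 ASCII bytes (little-endian)."""
    return reg.to_bytes(4, "little").decode("ascii", errors="replace")

def is_hyperv(leaf_40000000_eax, leaf_40000001_eax):
    """Check max leaf and interface signature, per the quoted paragraph."""
    max_leaf_ok = leaf_40000000_eax >= 0x40000005
    sig_ok = signature_string(leaf_40000001_eax) == "Hv#1"
    return max_leaf_ok and sig_ok

# "Hv#1" packed little-endian: 'H'=0x48, 'v'=0x76, '#'=0x23, '1'=0x31
HV_SIGNATURE = int.from_bytes(b"Hv#1", "little")  # 0x31237648

print(is_hyperv(0x40000006, HV_SIGNATURE))  # → True
print(is_hyperv(0x40000006, 0x0))           # → False (wrong signature)
```

A guest that sees both checks pass knows it can rely on the HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX MSRs being implemented.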

Finally (as I’m out of time and starting to get to stuff that’s a bit old), Peter Noorderijk writes about problems using Broadcom or Emulex 10 Gbit NICs. They resolved the issue by adding Intel NICs, but another workaround is to turn off checksum offloading. Updated Emulex and Broadcom drivers are expected.

TechEd Europe 2013 – Virtual Networking


This post is going to be the toughest one to write, as virtual networking, called Hyper-V Networking by Microsoft, is something that I haven’t had a lot of time to work with. It’s clear that Microsoft is heavily invested in Software Defined Networking (SDN), as quite a bit of work has been done in the R2 releases of Windows Server 2012 and System Center VMM 2012.

An overall theme in Microsoft’s work with SDN (and 2012 R2 in general, for that matter) is that of the three clouds: the Private Cloud, the Service Provider Cloud and Azure. They all need to work in harmony, requiring as little effort as possible from the customer to get up and running.

If we start with Windows Server 2012 R2, there’s a new teaming mode called “Dynamic” that chops up the network flow into what Microsoft calls flowlets, in order to distribute the load of one continuous flow over multiple team members.

Hyper-V Network Virtualization (HNV) is now supported on a network team.

NVGRE, the technique Windows Server uses to implement network virtualization, can now be offloaded to network cards that support it. Note that VMQ in combination with NVGRE will only work on NICs with the NVGRE task offload capability.

HNV is now a more integral part of the Hyper-V virtual switch, which allows third-party extensions to see traffic from both customer addresses (CA) and provider addresses (PA). HNV also dynamically learns new addresses in the CA space, which means that 2012 R2 supports both DHCP servers and guest clusters on a virtualized network.

On that note it’s worth mentioning that the basic version of Cisco’s extension of the Hyper-V switch, called 1000V, is completely free to download and use – essentially turning your Hyper-V switch into a Cisco switch.

The Hyper-V switch has improved ACLs, which can now be set on a port as well as on an IP address. In addition to this, they’re also stateful.

A virtual machine can now use RSS inside the actual VM. This is called vRSS.

Microsoft was keen to point out that a lot of work has been done to make SDN not only perform better but also easier to troubleshoot. There’s a new PowerShell cmdlet called Test-NetConnection that is ping, traceroute and nslookup all rolled into one, and 2012 R2 allows you to ping a PA by using “ping -p”.

In addition to this, there’s a new PowerShell cmdlet that allows you to test connectivity between VMs in the CA space.

2012 R2 boasts a replacement for the tried and true NetMon, called Message Analyzer. This new tool allows you, among a lot of other things, to monitor network traffic on a remote host in near real time.

There’s a built-in multi-tenant gateway between CA and PA in 2012 R2 that should be able to scale up to 100 tenants. IPAM in Windows Server 2012 R2 can now handle guest networks and can be integrated with VMM 2012 R2.

Continuing with VMM 2012 R2, there is now support for managing top-of-rack switches from within VMM through the OMI standard. So far only Arista has announced support for this, but other hardware vendors should follow. Look for the Microsoft certification logo.

This allows for a number of interesting features in VMM; one that was demonstrated was the ability to inspect the configuration of a switch and, if needed, remedy it from within VMM.

I’m not sure how many of my customers can use all these features, SDN in particular, but hopefully I’ll be able to experiment with this in my lab sometime this autumn to get some more experience with it.

Problems when modifying VM with invalid virtual network in SCVMM 2012


Ok, this is a break from the type of posts I’m planning for this blog, but since I ran into this just now I thought I’d publish it here in case someone else finds themselves in the same situation.

I’ve redesigned the lab at our local office and in the process removed a couple of virtual networks from the Hyper-V environment. Even though I wrote scripts to update the majority of our VMs, there were a couple of exceptions I needed to handle manually.

One was a template for an OS we never use, so I didn’t need it hooked up to any network at all, but whenever I tried to save the configuration I got an error from SCVMM:

This puzzled me as the VM wasn’t connected to any network at all:

What I needed to do was to set the network to a valid network:

And then, without applying or saving anything, disconnect the card:

After I’d done this I was able to update the VM without issues.