Archive for the ‘Operating system’ Category

A short note about AD detached clusters in Windows Server 2012 R2


Yesterday Microsoft posted an article about a new way to deploy clusters in Windows Server 2012 R2 called Active Directory Detached Clusters. As the name implies, this type of cluster does not rely on your AD in order to operate, instead using DNS for the Computer Name Object (CNO) and the Virtual Computer Objects (VCOs).
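Worth knowing is that an AD-detached cluster can only be created through PowerShell, not through Failover Cluster Manager. A minimal sketch (the node names, cluster name and IP address below are examples):

```powershell
# Create a cluster whose administrative access point lives in DNS only,
# skipping the AD computer object. Run from one of the nodes.
New-Cluster -Name "CLUSTER1" `
            -Node "NODE1", "NODE2" `
            -StaticAddress 10.0.0.50 `
            -NoStorage `
            -AdministrativeAccessPoint Dns
```

The -AdministrativeAccessPoint Dns switch is what makes the cluster AD-detached.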

This is great news as I’ve had several clusters acting up due to the domain controller not being reachable, but there is one important caveat with this mode:

The intra-cluster communication would continue to use Kerberos for authentication, however, the authentication of the CNO would be done using NT LM authentication. Thus, you need to remember that for all Cluster roles that need Kerberos Authentication use of AD-detached cluster is not recommended.

This means that Live Migration isn’t supported for a Hyper-V cluster, only Quick Migration.

More information here.

Posted 25 March, 2014 by martinnr5 in Documentation, Operating system, Technical


A gathering of links, part 3


Sorry for the lack of content. I have something I can write about, I think, but work is getting in the way.

For now, a gathering of links instead.

Cripes! I need to do these more often, this took me forever.

If you find these useful, please rate this blog post or leave a comment. There’s really no need for me to spend my morning doing this if no-one’s going to read it. 🙂

TechEd Europe 2013 – The rest


Here’s the rest of the stuff I picked up at TechEd Europe that I felt couldn’t fit into any of the other posts. It will not be arranged in any particular order, so apologies in advance for that. Also, this isn’t a very long post as most of the important stuff is included in my other TechEd posts.

One thing I’m really interested in is using the cloud – which in my case means Azure – to host your test and QA environments. I have one customer in particular that could really benefit from this as they A) have a huge amount of test and QA servers in their own (static) data center, which equates to a lot of wasted resources, and B) haven’t got enough test and QA environments, resulting in the dreaded “test in production” syndrome.

This customer is quite large though and as I mentioned their data center is anything but dynamic so introducing this model is going to be a huge uphill struggle, both from an economic standpoint as well as a political one.

It is very interesting though so I’ll work on a small and simple proposal to test the waters and see what their reaction is.

Speaking of Azure, the new Windows Azure Pack for your private cloud is another interesting subject, but I haven’t had the time to read up on it. I just wanted to mention that one idea Microsoft presented was to use it as a portal for VMM, but when I asked them why this was a better idea than using App Controller or Service Manager I couldn’t really get a straight answer.

As I said though, an interesting subject and I’ll try to get back to it later on.

Finally, a short note on the new quorum model in Windows Server 2012 R2. The model is very simple now: a vote majority where both the nodes and the witness disk get a vote. Dynamic quorum automatically calculates what gets a vote, though, so the best practice from Microsoft is to always add a witness disk and then let the quorum decide whether the disk should have a vote or not.
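A couple of cmdlets to inspect and configure this (the disk name below is an example):

```powershell
# Dynamic quorum is on by default in 2012 R2; 1 means enabled.
(Get-Cluster).DynamicQuorum

# Show the current quorum configuration and witness resource.
Get-ClusterQuorum | Format-List *

# Add a disk witness and let dynamic quorum manage its vote.
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
```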

The same dynamic quorum also adjusts for failures of nodes or the witness disk, so a cluster can withstand a lot more abuse now. When talking to Ben Armstrong about a customer that might scale up to roughly 30 hosts, he mentioned that one subject Microsoft needs to communicate better is the fact that large clusters aren’t a problem, especially not in 2012 R2.

And that wraps up my TechEd Europe 2013 notes. I probably missed something or talked about something more than once but so be it. If you should have any questions, feel free to ask away in the comments and I’ll do my best to answer them.

Thanks for reading.

A gathering of links, part 1


As I’ve mentioned previously, I’m not a big fan of regurgitating content from other blogs as I gather that you already subscribe to posts by Microsoft in particular but also posts on other blogs of note.

Still, this blog is one way of providing my colleagues with ongoing information about Hyper-V, Windows Server and System Center (or at least those components I find interesting) which means that I from time to time will post a round-up of links that I’ve collected.

There will be no real order or grouping to these links, at least not for now.

All hosts receive a zero rating when you try to place the VM on a Hyper-V cluster in Virtual Machine Manager – If you have a setup where you use VLAN isolation on one logical network and no isolation on another you could happen upon this known issue in VMM 2012 SP1.

Ben Armstrong posts about a nifty little trick allowing you to set your VM’s resolution to something other than the standard ones. Useful when you run client Hyper-V.

Neela Syam Kolli, Program Manager for DPM, posted a really interesting article called “How to plan for protecting VMs in private cloud deployments?” There’s quite a bit of useful information in this post, including some solid advice on how to plan your VM to CSV ratio – a question that I’ve been asked by customers and colleagues alike, so let me go into a bit more detail on this.

From the article (my emphasis):

Whenever a snapshot is taken on a VM, snapshot is triggered on the whole CSV volume. As you may know, snapshot means that the data content at that point in time are preserved till the lifetime of the snapshot. DPM keeps snapshot on the volume on PS till the backup data transfer is complete. Once the backup data transfer is done, DPM removes that snapshot. In order to keep the content alive, any subsequent writes to that snapshot will cause volsnap to read old content, write to a new location and write the new location to the target location. For ex., if block 10 is being written to a volume where there is a live snapshot, VolSnap will read current block 10 and write it to a new location and write new content to the 10th block. This means that as long as the snapshot is active, each write will lead to one read and two writes. This kind of operation is called Copy On Write (COW). Even though the snapshot is taken on a VM, actual snapshot is happening on whole volume. So, all VMs that are residing in that CSV will have the IO penalty due to COW. So, it is advisable to have as less number of CSVs as possible to reduce the impact of backup of this VM on other VMs.  Also, as a VM snapshot will include snapshot across all CSVs that has this VM’s VHDs, the less the number of CSVs used for a VM the better in terms of backup performance.

Despite the language being a bit on the rough side, the way I interpret this is that it’s preferable to have as few VMs as possible per CSV due to the COW impact on all VHDs on a CSV. Additionally, keep all the VHDs for a VM on the same CSV, as all CSVs that host a VHD for the VM you’re protecting will suffer the COW performance hit.

The response I’ve gotten from Microsoft and the community regarding number of VMs per CSV is “as many as possible until your storage can’t handle the load” which is perfectly logical but also very vague. If you use DPM to protect your VMs you now have a bit more to lean on when sizing the environment. There is of course a reason that we use CSV and not a 1:1 ratio of volumes to VHDs so don’t go overboard with this recommendation.

Moving on.

This one is pretty hardcore but interesting nonetheless: How to live debug a VM in Hyper-V. If nothing else, it shows that there’s still a use for COM ports in a VM.

vNiklas is as always blogging relentlessly about how to use PowerShell for pretty much everything in life. This one is about how to re-size a VHDX while it’s still online in the VM.
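In case you just want the one-liner, the online resize boils down to Resize-VHD (path and size below are examples; in 2012 R2 the disk has to be attached to a SCSI controller for this to work online):

```powershell
# Grow the VHDX while the VM keeps running.
Resize-VHD -Path "C:\VMs\data01.vhdx" -SizeBytes 200GB

# The extra space then has to be claimed inside the guest,
# e.g. with Resize-Partition or Disk Management.
```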

If you want to know more about how NIC teaming in Windows Server 2012 works then Didier van Hoye takes a look at how Live Migration uses various configurations of the NIC teaming feature. A very informative post with a surprising result!

Another post from Ben Armstrong, this time he does some troubleshooting when his VMs hover longer than expected at 2 % when starting up. Turns out that it’s due to a name resolution issue.

Microsoft and Cisco have produced a white paper on the new Cisco N1000V Hyper-V switch extension. It’s not very technical, although informative if you need to know a little bit more about how the N1000V and the Hyper-V switch relate to each other.

The Hyper-V Management Pack Extensions 2012 for System Center Operations Manager 2012/2012 SP1 gives you even more options when monitoring Hyper-V with SCOM.

I mentioned that the post about how to live debug a VM was pretty hardcore but I’d like to revise that statement. Compared to this document called “Hypervisor Top-Level Functional Specification 3.0a: Windows Server 2012” that post is as hardcore as non-fat milk. Let me quote a select piece of text from the specification:

The guest reads CPUID leaf 0x40000000 to determine the maximum hypervisor CPUID leaf (returned in register EAX) and CPUID leaf 0x40000001 to determine the interface signature (returned in register EAX). It verifies that the maximum leaf value is at least 0x40000005 and that the interface signature is equal to “Hv#1”. This signature implies that HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX are implemented.

My thoughts exactly! And the specification is 418 pages long so it should last you all through your vacation.

Finally (as I’m out of time and starting to get to stuff that’s a bit old), Peter Noorderijk writes about problems using Broadcom or Emulex 10 Gbit NICs. They resolved the issue by adding Intel NICs, but another workaround is to turn off checksum offloading. Updated Emulex and Broadcom drivers should be expected.

TechEd Europe 2013 – Virtual Networking


This post is going to be the toughest one to write as virtual networking, called Hyper-V Network Virtualization by Microsoft, is something that I haven’t had a lot of time to work with. It’s clear that Microsoft is heavily invested in Software Defined Networking (SDN) as quite a bit of work has been done in the R2 releases of Windows Server 2012 and System Center VMM 2012.

An overall theme to Microsoft’s work with SDN (and 2012 R2 in general, for that matter) is that of the three clouds; the Private Cloud, the Service Provider Cloud and Azure. They all need to work in harmony, requiring as little effort from the customer as possible to get up and running.

If we start with Windows Server 2012 R2 there’s a new teaming mode called “Dynamic” that chops up the network flow into what Microsoft calls flowlets in order to be able to distribute the load of one continuous flow over multiple team members.

Hyper-V Network Virtualization (HNV) is now supported on a network team.

NVGRE, the technique used in Windows Server to implement network virtualization, can now be offloaded to network cards that support the NVGRE task offload capability. Note that VMQ in combination with NVGRE will only work on NICs with this capability.

HNV is now a more integral part of a Hyper-V virtual switch which allows third-party extensions to see traffic flow from both consumer addresses (CA) as well as provider addresses (PA). HNV now also dynamically learns about new addresses in CA which means that 2012 R2 supports both DHCP servers as well as guest clusters on a virtualized network.

On that note it’s worth mentioning that the basic version of Cisco’s extension of the Hyper-V switch, called 1000V, is completely free to download and use – essentially turning your Hyper-V switch into a Cisco switch.

The Hyper-V switch has improved ACLs, which can now be set on port as well as on IP address. In addition to this, they are also stateful.

A virtual machine can now use RSS inside the actual VM. This is called vRSS.

Microsoft was keen to point out that a lot of work had been done in order to make SDN not only perform better but also easier to troubleshoot. There’s a new PowerShell cmdlet called Test-NetConnection that is ping, traceroute and nslookup all rolled into one, and 2012 R2 allows you to ping a PA by using “ping -p”.

In addition to this there’s a new Powershell cmdlet that allows you to test connectivity between VMs on the CA space.
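A quick sketch of the troubleshooting cmdlet (the host name and port below are examples):

```powershell
# Ping and traceroute in one go.
Test-NetConnection -ComputerName "HV01" -TraceRoute

# It also doubles as a TCP port test, here for RDP.
Test-NetConnection -ComputerName "HV01" -Port 3389
```

If I recall correctly, the CA-space cmdlet is Test-VMNetworkAdapter, but I haven’t verified its exact syntax yet.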

2012 R2 boasts a replacement for the tried and true NetMon, called Message Analyzer. This new tool allows you, among a lot of other things, to monitor network traffic on a remote host in near real time.

There’s a built-in multi-tenant gateway between CA and PA in 2012 R2 that should be able to scale up to 100 tenants. IPAM in Windows Server 2012 R2 can now handle guest networks and can be integrated with VMM 2012 R2.

Continuing on with VMM 2012 R2 there is now support for managing top-of-rack switches from within VMM through the OMI standard. So far only Arista has announced support for this but other hardware vendors should follow. Look for the Microsoft certification logo.

This allows for a number of interesting features in VMM, one that was demonstrated was the ability to inspect the configuration of a switch and if needed remedy it from within VMM.

I’m not sure how many of my customers can use all these features, and SDN in particular, but hopefully I’ll be able to experiment with this in my lab sometime this autumn in order to get some more experience with it.

TechEd Europe 2013 – Storage


[Image: Microsoft on storage]

Onwards and upwards (or, if using a logical depiction of infrastructure, downwards) to storage.

It should be obvious by now that Microsoft is very serious about storage and they’re using the term Software Defined Storage throughout their sessions. Windows Server 2012 introduced a number of great storage features and 2012 R2 expands on them.

The main point driven home by the sessions, and pretty much everything else Microsoft communicates, is that they want to enable you to build scalable enterprise solutions based on standardized, commodity hardware. The image to the right, taken from these slides, illustrates this vision.

Unlike the Hyper-V post I’m not quite sure on how to divide this post into sections so I’ll just rattle through the features to the best of my abilities.

The fact that Windows Server and System Center now have a synchronized release schedule means that VMM 2012 R2 is able to do a lot more when it comes to storage.

One of the bigger items is that it now can manage the entire Fibre Channel stack, from virtual HBAs in a VM to configuring a Cisco or Brocade SAN switch.

VMM 2012 R2 and Windows Server 2012 R2 use a new management API called SM-API that is not only a lot faster but also covers SMI-S, Storage Spaces and older devices. This means that VMM 2012 R2 also manages the entire Storage Spaces stack instead of just simple management of shares, as in VMM 2012 SP1.

VMM 2012 R2 uses ODX, if possible, to deploy a VM from the library to a host but not for anything else (we’ll see what happens before the product is released though).

VMM 2012 can bare-metal deploy Hyper-V hosts, and in 2012 R2 that functionality is extended to Scale Out File Servers as well. It sets up the entire thing for you, totally automated.

In VMM 2012 R2 you can classify storage on a per-share level if you wish. A volume can be Silver, for instance, but a share can have additional features that increase the classification to Gold. These classifications are user configurable. Beyond this, the actual fabric can now also be classified.

As mentioned in my previous post Windows Server 2012 R2 allows you to set a per VM Quality of Service for IOPS and the VM also has a new set of metrics (that follows the VM around) that should make it a lot easier to design a solution based on facts instead of more or less qualified guesses.

Also mentioned previously is the shared VHDX (I’ll abbreviate that to sVHDX from now on) but I’d like to expand a bit on the feature. As with previous guest clustering methods you can’t back up a guest cluster using host level backup, even with sVHDX.

Something else that doesn’t work with sVHDX is Hyper-V Replica and neither does Live Storage Migration. When I mentioned this to Ben Armstrong it was quite clear that Microsoft is very well aware of these limitations. Reading between the lines; working as hard as possible to remove them.

VMM can only create a sVHDX by using templates but the Hyper-V manager exposes this functionality as well if needed.

A VM can have a Write Back Cache in 2012 R2, also persistent with the machine. Differencing disks are also cached, leading to much faster deployment of VMs. The CSV cache can now be up to 80% of the RAM.

And, just to include it, you can now expand a VHDX online.

On to the actual file server in Windows Server 2012 R2.

A lot of work has been put into enhancing performance and demos showed 1.000.000+ IOPS with randomly read 8 KB packets. Even with 32 KB packets 2012 R2 delivers 500.000+ IOPS but, most importantly, 16+ GB/second of data transfer. Note that this is with Storage Spaces on standard hardware. SMB Direct has seen a performance boost overall – especially over networks faster than 10 Gbit.

De-duplication has been improved as well and now supports CSV volumes and live de-dupe of VHDX in a VDI scenario. Counterintuitively, the VMs actually boot faster in this scenario, thanks to efficient caching. If your Hyper-V host is directly attached to the storage you should never activate de-dupe though. Save that CPU for your VMs. Also, don’t enable de-dupe of VHDX for anything other than VDI scenarios if you want support from Microsoft.
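For the record, the VDI scenario gets its own de-dupe usage type in 2012 R2 (the volume path below is an example):

```powershell
# Enable de-duplication on a CSV volume hosting VDI VHDX files.
# The HyperV usage type is new in 2012 R2 and is what makes
# live de-dupe of open VHDX files a supported scenario.
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume1" -UsageType HyperV

# Check the savings once the optimization jobs have run.
Get-DedupStatus
```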

The de-duplication is handled by the node that owns the CSV volume where the VHDX resides, and demos showed that it is possible to put 50 VDI VMs on a low-cost, commodity SSD, giving them great boot performance.

One feature that I really like is the automatic tiering in Storage Spaces between SSD disks and mechanical disks (no other distinction is made) that makes sure that hot bits are migrated to SSD according to schedule, or manually if you so wish. Killer feature.
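A sketch of what setting up a tiered space looks like (the pool name, tier names and sizes below are examples):

```powershell
# Define one tier per media type in an existing storage pool.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
                           -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
                           -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored virtual disk that spans both tiers.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" `
                -StorageTiers $ssdTier, $hddTier `
                -StorageTierSizes 100GB, 900GB `
                -ResiliencySettingName Mirror
```

Hot data is then moved to the SSD tier on a schedule; files can also be pinned manually with Set-FileStorageTier.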

2012 R2 includes a SMB Bandwidth Manager that differentiates between traffic for Live Migration, VM storage and everything else. Similar to existing QoS but for SMB.

A Scale Out File Server cluster in 2012 R2 automatically balances the ownership and access of both CSV volumes as well as shares. This in conjunction with the fact that clients now connect to a share, instead of a host, means that a guest can leverage the SOFS cluster capacity much more efficiently.

There’s a new instance in a 2012 R2 SOFS cluster that is dedicated to managing CSV traffic, improving reliability of the cluster.

If you install the iSCSI role on a 2012 R2 server you get the SMI-S provider for iSCSI as well, instead of having to install it from VMM media as it is now.

When chatting briefly with Hans Vredevoort he mentioned that NetApp has a feature that converts VMDK (VMware hard drives) to VHDX in 30 seconds by simply changing the metadata for the disk, leaving the actual data alone. Sounds amazing and I’ll try to get some more information on this.

Finally, on a parting note, I’d like to mention that when I asked José Barreto if it’d be worth the effort to convince a colleague that works solely with traditional enterprise storage to come to TechEd 2014 he thought for a while and then said that yes, it’d be worth the effort.

To echo my opening statement: it should be obvious by now that Microsoft is serious about owning the storage layer as well, and based on the hint that José gave I’m sure that, if nothing else, next year will be even more interesting.

For more on SMB3 and Windows Server 2012 R2 storage, visit José Barreto’s post on the subject.

Posted 30 June, 2013 by martinnr5 in Elsewhere, FYI, Operating system, Technical


TechEd Europe 2013 – Hyper-V


I just counted and I have over 11 pages of handwritten notes1 from the sessions I went to so it’ll take me some time to compile them all into something coherent. This, in addition to the fact that my current theme of the blog doesn’t lend itself very well to long blog posts (though I normally try to go for quality over quantity), means that I’ll chunk the posts into a number of categories; Hyper-V, Networking and Storage as these are the main areas I focused on. Most likely I’ll end up with a “catch-all” post as well.

As the title of the post implies, let’s start with Hyper-V. Now, I know that there are numerous blog posts that cover what I’m about to cover but I’m summarizing the event for both colleagues and customers that weren’t able to attend so bear with me.

Hyper-V Replica

In 2012 R2 Hyper-V Replica has support for a third step in the replication process. The official name is Extended Hyper-V Replica and according to Ben Armstrong it was mainly service providers who asked for this to be implemented although I can see a number of my customers benefitting from this as well.

In order to manage large environments that implement Hyper-V Replica Microsoft developed Hyper-V Replica Manager (HRM), an Azure service that connects to your VMM servers and then provides Disaster Recovery protection through Hyper-V Replica on a VMM cloud level.

This requires a small agent to be installed on all VMM servers that are to be managed by the service. The VMM servers then configure the hosts, including adding the Replica functionality if needed (even the Replica Broker on clusters). After adding this agent you can add DR functionality in VM Templates in VMM.

Using HRM you can easily configure your DR protection and also orchestrate a failover including, among other things, the order in which you start up your VMs and manual steps if needed. The service can be managed from your smartphone, and there are no plans to allow organizations to deploy HRM internally.

Only the metadata used to coordinate the protection is ever communicated to the cloud, using certificates. All actual replication of VMs is strictly site to site, between your data centers.

If you manage to screw up your Hyper-V Replica configuration on the host level you need to manually sync with HRM to restore the settings. At least for now; R2 isn’t released yet, so who knows what’ll change before then.

Finally: you are now allowed a bit more freedom when it comes to replication intervals: 30 seconds, 5 minutes or 15 minutes. Since there hasn’t been enough time to test, Microsoft won’t allow arbitrary replication intervals.
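In PowerShell terms this is just one parameter (the VM name below is an example):

```powershell
# Frequency is given in seconds; only 30, 300 and 900 are accepted.
Set-VMReplication -VMName "SQL01" -ReplicationFrequencySec 30
```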

Linux

Linux guests now enjoy the same Dynamic Memory as Windows guests, and Linux is now backed up using a file system freeze that provides VSS-like functionality. Finally, the video driver used when connecting to a Linux guest via VM Connect is new and vastly better than the old one.

I never got any information on which distros will be supported, but my guess is that all guests with the “R2” integration components should be good to go.

Live Migration

One big thing in 2012 R2 is that Live Migration has seen some major performance improvements. By using compression (leveraging spare host CPU cycles) or SMB/RDMA Microsoft have seen consistent performance improvements of at least 40%, in some cases 200 to 300%.

The general rule of thumb is to activate compression for networks up to 10 Gbit and use SMB/RDMA on anything faster.
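This is configured per host (it can also be set on the Live Migrations page in Hyper-V settings); a sketch:

```powershell
# Compression is the recommendation for networks up to 10 Gbit.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# On RDMA-capable networks faster than 10 Gbit, switch to SMB.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```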

Speaking of Live Migration, I’d like to mention a chat I had with Ben Armstrong about Live Storage Migration performance, as I have a customer who sees issues with this. When doing an LSM of a VM you should expect 90% of the performance you get when doing an unbuffered copy to/from the same storage your VMs reside on. I might do a separate post on this just to elaborate.

Storage

In 2012 R2 you can now set Quality of Service for IOPS on a per-VM level. Why no QoS for bandwidth? R2 isn’t finished, so there’s still a possibility that it might show up.
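Strictly speaking the QoS is set per virtual hard disk; a sketch (the VM name, controller position and IOPS values below are examples):

```powershell
# IOPS are normalized to 8 KB; 0 means unlimited.
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI `
                    -ControllerNumber 0 -ControllerLocation 0 `
                    -MaximumIOPS 500 -MinimumIOPS 100
```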

One big feature, that should have been in 2012 RTM if you ask me (and numerous others), is that you can now expand a VHDX when the VM is online.

Another big feature (properly big this time) is guest clustering through a shared VHDX, in effect acting as virtual SAS storage inside your VM (using the latest, R2, integration services in your VM). More on this in my storage post though.
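Attaching a shared VHDX by hand looks something like this (the VM name and path below are examples; the VHDX has to sit on a CSV or an SOFS share):

```powershell
# Run once per guest-cluster node, pointing at the same VHDX.
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
                    -Path "C:\ClusterStorage\Volume1\shared.vhdx" `
                    -SupportPersistentReservations
```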

One more highly anticipated feature is active de-duplication of VHD/VHDX files when used in a VDI scenario. Why only VDI? Because Microsoft haven’t done enough testing. Feel free to de-dupe any VHDX you like but if things break and it’s not a VDI deployment, don’t call Microsoft. More on this in my storage post as well.

The rest

Ben Armstrong opened his session with the reflection that almost no customer is using all the features that Hyper-V 2012 offers. To me, that says a lot about the rich feature set of Hyper-V, a feature set that only gets richer in 2012 R2.

One really neat feature that shows how beneficial it is to own the entire ecosystem the way that Microsoft does is that all Windows Server 2012 R2 guests will automatically be activated if the host is running an activated data center edition of Windows Server 2012 R2. There are no plans to port this functionality to older OS, neither for guests nor for hosts.
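Automatic VM Activation (AVMA) is driven from inside the guest with slmgr. Microsoft publishes generic AVMA keys per edition; the key below is a placeholder, not a real key:

```powershell
# Install the edition-specific AVMA key inside the guest...
slmgr /ipk <AVMA-key>

# ...and verify the detailed licensing status afterwards.
slmgr /dlv
```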

In R2 you now get the full RDP experience, including USB redirection and audio/video, over VMBus. This means that you can connect to a VM and copy files, text, etc. even if you do not have network connectivity to the VM.

2012 R2 supports generation 2 VMs. Essentially a much slicker, simpler and more secure VM that has a couple of quirks. One being that only Windows Server 2012/2012 R2 and Windows 8/8.1 are supported as guest OSes as of now.

The seamless migration from 2012 to 2012 R2 is simply a Live Migration. The only other option is the Copy Cluster Roles wizard (new name in R2) which incurs downtime.

You can export or clone a VM while it’s running in 2012 R2.

Ben was asked whether there’d ever be the option to hot-add a vCPU to a VM, and the reply was that there really is no need for that feature. The reason being that Hyper-V has a very small penalty for having multiple vCPUs assigned to a VM. This is different from the best practices given by VMware, where additional vCPUs do incur a penalty if they’re not used. The take-away is that you should always deploy Hyper-V VMs with more than one vCPU.

During the “Meet the Experts” session I had the chance to sit down with Ben Armstrong and wax philosophical about Hyper-V. When I asked him about the continued innovation of Hyper-V he said that there’s another decade’s worth of features to implement. Makes me feel all warm inside.

Conclusion

As there hasn’t been a lot of time between Windows Server 2012 and the upcoming release of 2012 R2 (roughly a year), the new or improved features might not be all that impressive when it comes to Hyper-V, but I honestly think they’re very useful and I already have a number of customer scenarios in mind where they’ll be of great use.

Not to mention that Hyper-V is only a small slice of the pie of new features in R2. I’ll be writing about some of the other slices tomorrow.

Most importantly though, and this was stressed by a fair number of speakers, Windows Server and System Center finally, for the first time ever, have a properly synchronized release schedule, allowing Microsoft to deliver on the promise of a fully functional solution for private clouds.

I agree with this notion as it’s been quite frustrating to have to wait for System Center to play catch-up with the OS, or vice versa.

With that, I bid you adieu. See you tomorrow.

1. Yeah, I prefer taking notes by hand as it allows me to be a lot more flexible, not to mention that I’m faster this way. I took advantage of the offer at this TechEd and bought both the Surface RT and Pro though so who knows, the next time I might use one of those devices.