Archive for the ‘Elsewhere’ Category

A gathering of links, part 3


Sorry for the lack of content. I have something I can write about, I think, but work is getting in the way.

For now, a gathering of links instead.

Cripes! I need to do these more often; this took me forever.

If you find these useful, please rate this blog post or leave a comment. There's really no need for me to spend my morning doing this if no one's going to read it. 🙂

A gathering of links, part 2


I'm not sure that I should keep using the "part n" moniker when naming these posts, but for now that's the best I've got. We'll see what happens further down the road.

During my vacation and the couple of weeks I’ve been working I’ve collected quite a few interesting links:

  • Steven Ekren describes in detail a new feature in Hyper-V 2012 (that I’ve missed) that Live Migrates a VM if a critical VM network fails.
  • Jose Barreto explains how to manage the new SMB3 features in Windows Server 2012 R2 through PowerShell.
  • Thomas Maurer has a step-by-step post on how to set up the new network virtualization gateway in Hyper-V 2012 R2.
  • Ben Armstrong details how to use PowerShell to keep your Hyper-V replicas up to date with the source VM.
  • If you’re interested in Dell’s new PowerEdge VRTX cluster in a box, check out this article on the Microsoft Storage team blog.
  • Over at Hyper-V.nu Marc van Eijk has a really interesting article series on how to set up Hyper-V hosts using bare metal deployment in Windows Server 2012 R2 and System Center VMM 2012 R2. So far part 1 and part 2 have been posted.
  • Didier Van Hoye takes a thorough look at RDMA and not only talks the talk, he walks the walk as well. Funny and informative, as always.
  • vNiklas rescues missing VMs after a storage migration job went haywire. Strangely enough, he didn't use a single line of PowerShell. 🙂
  • Ben Armstrong shows how to import VMs to a new Hyper-V server without encountering issues due to incompatibilities by using some clever PowerShell; see the first sketch after this list.
  • If you still need more RDMA, check out Thomas Maurer's post on the subject.
  • If you want to use PowerShell to copy files to a VM using the new file copy over VMBus functionality in 2012 R2, vNiklas has you covered; see the second sketch after this list.
  • Another post from Thomas Maurer; this time he explains the features of the Cisco UCS Manager version 1.0.1 add-in for VMM and how to install it.
  • Finally, a post about CSV cache from Elden Christensen over at the Failover Clustering and Network Load Balancing Team Blog.
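A taste of the import trick Ben describes, as I understand it: a minimal sketch built on Compare-VM, where the export path, switch name and message ID handling are my own assumptions, so verify against his post.

    # Generate a compatibility report instead of importing blindly
    # (the export path below is a placeholder)
    $report = Compare-VM -Path 'C:\Export\MyVM\Virtual Machines\MyVM.xml'

    # See what the new host can't satisfy (missing switches, CPU features, etc.)
    $report.Incompatibilities | Format-Table MessageId, Message -AutoSize

    # Example fix: reconnect the VM's network adapter to a switch that
    # exists on this host (33012 should be the "switch not found" message ID)
    $report.Incompatibilities | Where-Object { $_.MessageId -eq 33012 } |
        ForEach-Object { $_.Source | Connect-VMNetworkAdapter -SwitchName 'NewSwitch' }

    # Import using the now-fixed report
    Import-VM -CompatibilityReport $report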
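And the file copy itself is pretty much a one-liner; the VM name and paths here are made up, and note that the Guest Service Interface integration component has to be enabled first:

    # The Guest Service Interface is off by default
    Enable-VMIntegrationService -VMName 'SRV01' -Name 'Guest Service Interface'

    # Copy a file from the host into the running VM over VMBus (no network needed)
    Copy-VMFile -VMName 'SRV01' -SourcePath 'C:\Temp\tools.zip' `
        -DestinationPath 'C:\Temp\tools.zip' -FileSource Host -CreateFullPath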

Posted 13 September, 2013 by martinnr5 in A gathering of links, Documentation, Elsewhere, FYI, Technical


DPM storage calculations


I’m posting this as I couldn’t find a good single point of reference for how to calculate how much storage your DPM implementation might need.

First off: there are documented formulas that DPM uses to calculate its default storage allocations, in case you want to do the math yourself. Sometimes that's the fastest way if you only need a rough estimate for a simple workload. These formulas don't take data growth into consideration, though.
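To give you an idea of the math, here's a quick sketch using the default allocation formulas for file data as I remember them from TechNet; double-check against the documentation for your DPM version before relying on it:

    # DPM default allocations for protected file data (from TechNet):
    #   replica volume        = data size * 3 / 2
    #   recovery point volume = data size * retention (days) * 2% + 1600 MB
    $dataGB    = 500   # size of the protected data, in GB
    $retention = 14    # retention range, in days

    $replicaGB       = $dataGB * 3 / 2
    $recoveryPointGB = $dataGB * $retention * 0.02 + 1.6   # 1600 MB is ~1.6 GB

    "Replica volume:        {0:N0} GB" -f $replicaGB
    "Recovery point volume: {0:N0} GB" -f $recoveryPointGB
    "Total:                 {0:N0} GB" -f ($replicaGB + $recoveryPointGB)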

If you need more complex calculations there are a couple of calculators available for DPM; I'll go through them below.

None of these calculators are built for DPM 2012, but if all you need is a solid estimate of how much storage a DPM implementation will use, they'll do just fine.

Something else worth mentioning is that they don't take the new limits of Hyper-V in Windows Server 2012 into consideration. Then again, if you need to protect a cluster larger than 16 nodes you'd probably want to do the math on your own just to be sure anyhow. 🙂

The first calculator is the most detailed but only covers Exchange up to version 2007. I never use this myself.

The DPM Volume Sizing Tool is actually a set of scripts and Excel sheets that you use to gather actual data from your environment if you want to, along with a couple of Word documents on how to get the ball rolling.

The latest versions of the standalone calculators, for DPM 2010, are more detailed than the DPM Volume Sizing Tool, but the Exchange calculator is not as detailed as the older one for Exchange and DPM 2007. In addition, these only cover a few of the workloads that DPM can protect.

Personally, I do the math myself. If I need a calculator I manually enter values into the Excel calculator from the DPM Volume Sizing Tool, as it both handles all workloads that DPM can protect and gives me a good summary of the storage needed.

It'd be nice to see Microsoft develop a single Excel calculator covering all workloads and DPM 2012, but that doesn't seem likely, so we'll make do with what we've got.

Posted 12 September, 2013 by martinnr5 in Documentation, Elsewhere, Technical, Tools


TechEd Europe 2013 – The rest


Here’s the rest of the stuff I picked up at TechEd Europe that I felt that I couldn’t fit into any of the other posts. This will not be arranged in any particular order so apologies in advance for that. Also, this isn’t a very long post as most of the important stuff is included in my other TechEd posts.

One thing I'm really interested in is using the cloud – which in my case means Azure – to host your test and QA environments. I have one customer in particular that could really benefit from this, as they A) have a huge number of test and QA servers in their own (static) data center, which equates to a lot of wasted resources, and B) haven't got enough test and QA environments, resulting in the dreaded "test in production" syndrome.

This customer is quite large though and as I mentioned their data center is anything but dynamic so introducing this model is going to be a huge uphill struggle, both from an economic standpoint as well as a political one.

It is very interesting though so I’ll work on a small and simple proposal to test the waters and see what their reaction is.

Speaking of Azure, the new Windows Azure Pack for your private cloud is another interesting subject, but I haven't had the time to read up on it. I just wanted to mention that one idea Microsoft presented was to use it as a portal for VMM, but when I asked why this was a better idea than using App Controller or Service Manager I couldn't really get a straight answer.

As I said though, an interesting subject and I’ll try to get back to it later on.

Finally, a short note on the new quorum model in Windows Server 2012 R2. The model is very simple now: a straight vote majority where both the nodes and the witness disk get a vote. The dynamic quorum model automatically calculates what gets a vote, though, so the best practice from Microsoft is to always add a witness disk and then let the cluster decide whether the disk should have a vote or not.

The same dynamic also adjusts for failures in nodes or the witness disk, so a cluster can withstand a lot more abuse now. When I talked to Ben Armstrong about a customer that might scale up to roughly 30 hosts, he mentioned that one thing Microsoft needs to communicate better is that large clusters aren't a problem, especially not in 2012 R2.
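If you want to peek at the dynamic part yourself, the cluster exposes its vote bookkeeping through PowerShell. A quick sketch (the witness disk name is a placeholder):

    # Configure a disk witness and let dynamic quorum manage its vote
    Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'

    # NodeWeight is what you assigned; DynamicWeight is what the cluster
    # is actually counting right now
    Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight -AutoSize

    # 2012 R2 also adjusts the witness vote dynamically
    (Get-Cluster).WitnessDynamicWeight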

And that wraps up my TechEd Europe 2013 notes. I probably missed something or talked about something more than once but so be it. If you should have any questions, feel free to ask away in the comments and I’ll do my best to answer them.

Thanks for reading.

A gathering of links, part 1


As I’ve mentioned previously, I’m not a big fan of regurgitating content from other blogs as I gather that you already subscribe to posts by Microsoft in particular but also posts on other blogs of note.

Still, this blog is one way of providing my colleagues with ongoing information about Hyper-V, Windows Server and System Center (or at least those components I find interesting), which means that from time to time I'll post a round-up of links that I've collected.

There will be no real order or grouping to these links, at least not for now.

All hosts receive a zero rating when you try to place the VM on a Hyper-V cluster in Virtual Machine Manager – If you have a setup where you use VLAN isolation on one logical network and no isolation on another you could happen upon this known issue in VMM 2012 SP1.

Ben Armstrong posts about a nifty little trick that allows you to set your VM's resolution to something other than the standard ones. Useful when you run client Hyper-V.

Neela Syam Kolli, Program Manager for DPM, posted a really interesting article called "How to plan for protecting VMs in private cloud deployments?". There's quite a bit of useful information in the post, including some solid advice on how to plan your VM to CSV ratio, a question that I've been asked by customers and colleagues alike, so let me go into a bit more detail on this.

From the article (my emphasis):

Whenever a snapshot is taken on a VM, snapshot is triggered on the whole CSV volume. As you may know, snapshot means that the data content at that point in time are preserved till the lifetime of the snapshot. DPM keeps snapshot on the volume on PS till the backup data transfer is complete. Once the backup data transfer is done, DPM removes that snapshot. In order to keep the content alive, any subsequent writes to that snapshot will cause volsnap to read old content, write to a new location and write the new location to the target location. For ex., if block 10 is being written to a volume where there is a live snapshot, VolSnap will read current block 10 and write it to a new location and write new content to the 10th block. This means that as long as the snapshot is active, each write will lead to one read and two writes. This kind of operation is called Copy On Write (COW). Even though the snapshot is taken on a VM, actual snapshot is happening on whole volume. So, all VMs that are residing in that CSV will have the IO penalty due to COW. So, it is advisable to have as less number of CSVs as possible to reduce the impact of backup of this VM on other VMs.  Also, as a VM snapshot will include snapshot across all CSVs that has this VM’s VHDs, the less the number of CSVs used for a VM the better in terms of backup performance.

Despite the language being a bit on the rough side, the way I interpret this is that it's preferable to have as few VMs as possible per CSV, due to the COW impact on all VHDs on a CSV. Additionally, keep all the VHDs for a VM on the same CSV, as every CSV that hosts a VHD for the VM you're protecting will suffer the COW performance hit.

The response I've gotten from Microsoft and the community regarding the number of VMs per CSV is "as many as possible until your storage can't handle the load", which is perfectly logical but also very vague. If you use DPM to protect your VMs you now have a bit more to lean on when sizing the environment. There is of course a reason that we use CSVs and not a 1:1 ratio of volumes to VHDs, so don't go overboard with this recommendation.

Moving on.

This one is pretty hardcore but interesting nonetheless: How to live debug a VM in Hyper-V. If nothing else, it shows that there's still a use for COM ports in a VM.
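The gist of it, as a rough sketch; the pipe and VM names are mine, and the guest-side settings are the standard kernel debug ones:

    # Host: bind the VM's virtual COM1 to a named pipe
    Set-VMComPort -VMName 'DebugVM' -Number 1 -Path '\\.\pipe\debugvm'

    # Guest (elevated prompt), then reboot:
    #   bcdedit /dbgsettings serial debugport:1 baudrate:115200
    #   bcdedit /debug on

    # Host: attach the debugger to the pipe
    #   windbg -k com:pipe,port=\\.\pipe\debugvm,resets=0,reconnect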

vNiklas is, as always, blogging relentlessly about how to use PowerShell for pretty much everything in life. This one is about how to resize a VHDX while it's still attached to a running VM.
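The core of it is a one-liner; just note that online resize only works for VHDX files attached to a virtual SCSI controller (the path is a placeholder):

    # Grow a VHDX while the VM is running (2012 R2, SCSI-attached VHDX only)
    Resize-VHD -Path 'C:\VMs\SRV01\data.vhdx' -SizeBytes 200GB
    # Then extend the partition inside the guest to use the new space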

If you want to know more about how NIC teaming in Windows Server 2012 works, Didier Van Hoye takes a look at how Live Migration uses various configurations of the NIC teaming feature. A very informative post with a surprising result!

Another post from Ben Armstrong; this time he does some troubleshooting when his VMs hover longer than expected at 2% while starting up. It turns out to be due to a name resolution issue.

Microsoft and Cisco have produced a white paper on the new Cisco Nexus 1000V Hyper-V switch extension. It's not very technical, although informative if you need to know a little bit more about how the Nexus 1000V and the Hyper-V switch relate to each other.

The Hyper-V Management Pack Extensions 2012 for System Center Operations Manager 2012/2012 SP1 gives you even more options when monitoring Hyper-V with SCOM.

I mentioned that the post about how to live debug a VM was pretty hardcore, but I'd like to revise that statement. Compared to this document, called "Hypervisor Top-Level Functional Specification 3.0a: Windows Server 2012", that post is as hardcore as non-fat milk. Let me quote a selected piece of text from the specification:

The guest reads CPUID leaf 0x40000000 to determine the maximum hypervisor CPUID leaf (returned in register EAX) and CPUID leaf 0x40000001 to determine the interface signature (returned in register EAX). It verifies that the maximum leaf value is at least 0x40000005 and that the interface signature is equal to “Hv#1”. This signature implies that HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX are implemented.

My thoughts exactly! And the specification is 418 pages long so it should last you all through your vacation.

Finally (as I'm out of time and starting to get to stuff that's a bit old), Peter Noorderijk writes about problems with Broadcom and Emulex 10 Gbit NICs. They resolved the issue by adding Intel NICs, but another workaround is to turn off checksum offloading; see the sketch below. Updated Emulex and Broadcom drivers should be expected.
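If you're bitten by this, the workaround is a one-liner per NIC (the adapter name is an assumption; test it outside production first):

    # Turn off checksum offloading on the affected 10 GbE adapter
    Disable-NetAdapterChecksumOffload -Name 'Ethernet 2'
    # Undo with: Enable-NetAdapterChecksumOffload -Name 'Ethernet 2'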

TechEd Europe 2013 – Virtual Networking


This post is going to be the toughest one to write, as virtual networking, or Hyper-V Network Virtualization as Microsoft calls it, is something that I haven't had a lot of time to work with. It's clear that Microsoft is heavily invested in Software Defined Networking (SDN), as quite a bit of work has been done in the R2 releases of Windows Server 2012 and System Center VMM 2012.

An overall theme to Microsoft's work with SDN (and 2012 R2 in general, for that matter) is that of the three clouds: the Private Cloud, the Service Provider Cloud and Azure. They all need to work in harmony, requiring as little effort from the customer as possible to get up and running.

If we start with Windows Server 2012 R2, there's a new load balancing mode for NIC teaming called "Dynamic" that chops up the network flow into what Microsoft calls flowlets, in order to distribute the load of one continuous flow over multiple team members.
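Creating such a team is a one-liner; the NIC names here are assumptions:

    # Switch-independent team using the new Dynamic (flowlet-based) distribution
    New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1','NIC2' `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic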

Hyper-V Network Virtualization (HNV) is now supported on a network team.

NVGRE, the technique used in Windows Server to implement network virtualization, can now be offloaded to network cards that support NVGRE task offload. Note that VMQ in combination with NVGRE will only work on NICs with this capability.
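You can check whether a NIC advertises the capability from PowerShell; a sketch with an assumed adapter name:

    # Is NVGRE task offload supported/enabled on this adapter?
    Get-NetAdapterEncapsulatedPacketTaskOffload -Name 'Ethernet 3'

    # Enable it if the hardware supports it
    Enable-NetAdapterEncapsulatedPacketTaskOffload -Name 'Ethernet 3'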

HNV is now a more integral part of the Hyper-V virtual switch, which allows third-party extensions to see traffic from both customer addresses (CA) and provider addresses (PA). HNV also dynamically learns about new addresses in the CA space, which means that 2012 R2 supports both DHCP servers and guest clusters on a virtualized network.

On that note it's worth mentioning that the basic version of Cisco's extension for the Hyper-V switch, the Nexus 1000V, is completely free to download and use, essentially turning your Hyper-V switch into a Cisco switch.

The Hyper-V switch has improved ACLs that can now be set on port as well as on IP. In addition to this, they're also stateful.

A virtual machine can now use RSS inside the actual VM. This is called vRSS.

Microsoft was keen to point out that a lot of work has been done to make SDN not only perform better but also easier to troubleshoot. There's a new PowerShell cmdlet called Test-NetConnection that is ping, traceroute and nslookup all rolled into one, and 2012 R2 allows you to ping a PA by using "ping -p".
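A few examples of what that looks like (the host name and PA are placeholders):

    # Ping, port probe and traceroute rolled into one cmdlet
    Test-NetConnection -ComputerName 'sofs01' -Port 445
    Test-NetConnection -ComputerName 'sofs01' -TraceRoute

    # Ping a provider address from an HNV host
    #   ping -p 10.10.1.21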

In addition to this there's a new PowerShell cmdlet that allows you to test connectivity between VMs in the CA space.
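If memory serves this is Test-VMNetworkAdapter, which injects and verifies a test packet between two VM NICs in the CA space. A rough sketch from memory, so double-check the parameters before relying on it (all names and addresses are mine):

    # Arm the receiver, then send a test packet from the sender VM
    Test-VMNetworkAdapter -VMName 'VM02' -Receiver -SenderIPAddress 10.0.0.10 `
        -ReceiverIPAddress 10.0.0.20 -SequenceNumber 100
    Test-VMNetworkAdapter -VMName 'VM01' -Sender -SenderIPAddress 10.0.0.10 `
        -ReceiverIPAddress 10.0.0.20 -NextHopMacAddress '00-15-5D-00-00-20' `
        -SequenceNumber 100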

2012 R2 boasts a replacement for the tried and true NetMon, called Message Analyzer. This new tool allows you, among a lot of other things, to monitor network traffic on a remote host in near real time.

There’s a built-in multi-tenant gateway between CA and PA in 2012 R2 that should be able to scale up to 100 tenants. IPAM in Windows Server 2012 R2 can now handle guest networks and can be integrated with VMM 2012 R2.

Continuing on with VMM 2012 R2, there is now support for managing top-of-rack switches from within VMM through the OMI standard. So far only Arista has announced support for this, but other hardware vendors should follow. Look for the Microsoft certification logo.

This allows for a number of interesting features in VMM; one that was demonstrated was the ability to inspect the configuration of a switch and, if needed, remediate it from within VMM.

I'm not sure how many of my customers can use all of these features, SDN in particular, but hopefully I'll be able to experiment with this in my lab sometime this autumn to get some more experience with it.

TechEd Europe 2013 – Storage


Onwards and upwards (or, if using a logical depiction of infrastructure, downwards) to storage.

It should be obvious by now that Microsoft is very serious about storage and they’re using the term Software Defined Storage throughout their sessions. Windows Server 2012 introduced a number of great storage features and 2012 R2 expands on them.

The main point driven home by the sessions, and pretty much everything else Microsoft communicates, is that they want to enable you to build scalable enterprise solutions based on standardized, commodity hardware. The image to the right, taken from these slides, illustrates this vision.

Unlike the Hyper-V post I'm not quite sure how to divide this post into sections, so I'll just rattle through the features to the best of my abilities.

The fact that Windows Server and System Center now have a synchronized release schedule means that VMM 2012 R2 is able to do a lot more when it comes to storage.

One of the bigger items is that it can now manage the entire Fibre Channel stack, from virtual HBAs in a VM to configuring a Cisco or Brocade SAN switch.

VMM 2012 R2 and Windows Server 2012 R2 use a new management API called SM-API that is not only a lot faster but also covers SMI-S, Storage Spaces and older devices. This means that VMM 2012 R2 manages the entire Storage Spaces stack, instead of just the simple management of shares we got in VMM 2012 SP1.

VMM 2012 R2 uses ODX, if possible, to deploy a VM from the library to a host, but not for anything else (we'll see what happens before the product is released, though).

VMM 2012 can bare metal deploy Hyper-V hosts, and in 2012 R2 that functionality is extended to Scale-Out File Servers as well. It sets up the entire thing for you, totally automated.

In VMM 2012 R2 you can classify storage on a per-share level if you wish. A volume can be Silver, for instance, but a share on it can have additional features that increase the classification to Gold. These classifications are user configurable. Beyond this, the actual fabric can now also be classified.

As mentioned in my previous post, Windows Server 2012 R2 allows you to set a per-VM Quality of Service for IOPS, and the VM also gets a new set of metrics (that follow the VM around) that should make it a lot easier to design a solution based on facts instead of more or less qualified guesses.
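Setting the QoS is done per virtual disk; a sketch with assumed names (the IOPS figures are normalized to 8 KB increments, as far as I know):

    # Cap a VM's data disk at 1000 IOPS and reserve a floor of 100
    Set-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI `
        -ControllerNumber 0 -ControllerLocation 1 `
        -MaximumIOPS 1000 -MinimumIOPS 100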

Also mentioned previously is the shared VHDX (I'll abbreviate it to sVHDX from now on), but I'd like to expand a bit on the feature. As with previous guest clustering methods, you can't back up a guest cluster using host-level backup, even with sVHDX.

Something else that doesn't work with sVHDX is Hyper-V Replica, and neither does live storage migration. When I mentioned this to Ben Armstrong it was quite clear that Microsoft is very well aware of these limitations. Reading between the lines: they're working as hard as possible to remove them.

VMM can only create an sVHDX by using templates, but Hyper-V Manager exposes this functionality as well if needed; see the sketch below.
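If you do wire it up by hand, it's a flag on the disk attach. A sketch (names assumed; the VHDX has to sit on a CSV or an SMB 3.0 share):

    # Attach the same VHDX to both guest cluster nodes as a shared disk
    'Node1','Node2' | ForEach-Object {
        Add-VMHardDiskDrive -VMName $_ -ControllerType SCSI `
            -Path 'C:\ClusterStorage\Volume1\shared.vhdx' `
            -SupportPersistentReservations
    }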

A VM can have a write-back cache in 2012 R2, which is also persistent with the machine. Differencing disks are cached as well, leading to much faster deployment of VMs. The CSV cache can now be up to 80% of the RAM.
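For reference, the CSV cache in 2012 R2 is a single cluster property, sized in MB (2 GB here is just an example):

    # Allocate 2 GB of RAM per node to the CSV block cache
    (Get-Cluster).BlockCacheSize = 2048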

And, just to include it, you can now expand a VHDX online.

On to the actual file server in Windows Server 2012 R2.

A lot of work has been put into enhancing performance, and demos showed 1,000,000+ IOPS with random 8 KB reads. Even with 32 KB reads 2012 R2 delivers 500,000+ IOPS but, most importantly, 16+ GB/second of data transfer. Note that this is with Storage Spaces on standard hardware. SMB Direct has seen a performance boost overall, especially on networks faster than 10 Gbit.

De-duplication has been improved as well and now supports CSV volumes and live de-dupe of VHDX in a VDI scenario. Counterintuitively, the VMs actually boot faster in this scenario, thanks to efficient caching. If your Hyper-V host is directly attached to the storage you should never activate de-dupe, though; save that CPU for your VMs. Also, don't enable de-dupe of VHDX for anything other than VDI scenarios if you want support from Microsoft.

The de-duplication is handled by the node that owns the CSV volume where the VHDX resides, and demos showed that it's possible to put 50 VDI VMs on a low-cost, commodity SSD, giving them great boot performance.
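Enabling it for the supported VDI scenario boils down to a couple of cmdlets (the volume path is assumed):

    # Install the feature and enable de-dupe on a CSV used for VDI (2012 R2)
    Add-WindowsFeature FS-Data-Deduplication
    Enable-DedupVolume -Volume 'C:\ClusterStorage\Volume1' -UsageType HyperV
    Get-DedupStatus -Volume 'C:\ClusterStorage\Volume1'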

One feature that I really like is the automatic tiering in Storage Spaces between SSDs and mechanical disks (no other distinction is made), which makes sure that hot data is migrated to SSD on a schedule, or manually if you so wish. Killer feature.
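Setting up a tiered space boils down to defining the two tiers and handing them to the new virtual disk; a sketch with an assumed pool name and sizes:

    # Define an SSD tier and an HDD tier in an existing storage pool
    $ssd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'SSDTier' -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'HDDTier' -MediaType HDD

    # Create a tiered virtual disk; hot data ends up on the SSD portion
    New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'TieredDisk' `
        -ResiliencySettingName Mirror -StorageTiers $ssd,$hdd `
        -StorageTierSizes 100GB,900GB
    # Then initialize, partition and format it as usual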

2012 R2 includes an SMB Bandwidth Manager that differentiates between traffic for Live Migration, VM storage and everything else. It's similar to existing QoS, but for SMB.
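It ships as its own feature and is managed per traffic category; a sketch (the 1 GB cap is just an example):

    # Install the SMB bandwidth limit feature, then cap Live Migration traffic
    Add-WindowsFeature FS-SMBBW
    Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 1GB
    Get-SmbBandwidthLimit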

A Scale-Out File Server cluster in 2012 R2 automatically balances the ownership and access of both CSV volumes and shares. This, in conjunction with the fact that clients now connect to a share instead of a host, means that a guest can leverage the SOFS cluster capacity much more efficiently.

There’s a new instance in a 2012 R2 SOFS cluster that is dedicated to managing CSV traffic, improving reliability of the cluster.

If you install the iSCSI role on a 2012 R2 server you get the SMI-S provider for iSCSI as well, instead of having to install it from VMM media as it is now.

When chatting briefly with Hans Vredevoort he mentioned that NetApp has a feature that converts VMDK (VMware hard drives) to VHDX in 30 seconds by simply changing the metadata for the disk, leaving the actual data alone. Sounds amazing and I’ll try to get some more information on this.

Finally, on a parting note, I'd like to mention that when I asked José Barreto if it'd be worth the effort to convince a colleague who works solely with traditional enterprise storage to come to TechEd 2014, he thought for a while and then said that yes, it'd be worth the effort.

To echo my opening statement: it should be obvious by now that Microsoft is serious about owning the storage layer as well, and based on the hint that José gave I'm sure that next year will be even more interesting, if nothing else.

For more on SMB3 and Windows Server 2012 R2 storage, visit José Barreto's post on the subject.

Posted 30 June, 2013 by martinnr5 in Elsewhere, FYI, Operating system, Technical


A small mention about the Windows Server 2012 R2/Windows 8.1 Preview


Most likely you’ve all seen this post by Hans Vredevoort on how to create a bootable VHDX in order to test WS 2012 R2/Windows 8.1 Preview without jeopardizing your current installation.

If not, please read it – it’s the most painless method by far.
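For reference, the rough shape of the native VHDX boot approach, from memory; Hans's post has the real walkthrough, and every path and size below is a placeholder:

    # Create and attach the VHDX (elevated prompt, Hyper-V module present)
    New-VHD -Path 'C:\VHD\Win81.vhdx' -SizeBytes 60GB -Dynamic
    Mount-VHD -Path 'C:\VHD\Win81.vhdx'
    # ...initialize, partition and format it (say it ends up as V:), then:

    # Apply the install image and add a native-boot entry
    #   dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 /ApplyDir:V:\
    #   bcdboot V:\Windows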

Another small note: if you are using a base language on your RT device other than the 13 officially supported languages, you will get an error when trying to install the 8.1 Preview. Please see this post for more info.

TechEd Europe 2013, Prologue


So, TechEd Europe 2013. My first TechEd ever, I might add, so my recap of the week is not going to be similar to what other blogs might post. I'm pretty sure it'd be different even if this had been my 10th TechEd, but I digress.

Tomorrow I'll try to go into a little more detail about the sessions I've attended and the talks I've had with people. Right now I just want to grab a bite to eat and after that take a bath, finish the book I'm reading and then have an early night (far too many late nights, as it were).

Going into TEE 2013 I wasn’t sure what to expect and right now, immediately after it’s over, I’m having mixed feelings about the event. Without a doubt these will coalesce into something else over time, perhaps by tomorrow already, so take them with a grain of salt.

The event itself was very well-organized, as should be expected, and IFEMA is a great conference hall (although their toilets make an awful racket when being flushed – seriously, prolonged exposure will give you impaired hearing). While on the subject of toilets, it was very disheartening as well as disgusting to see delegates leave a toilet stall without washing their hands.

Listen, I don’t care what you do at home but in public I expect you to behave as an adult.

The sessions are what give me mixed feelings, as most of them were quite good, some were great and some not all that impressive. I guess I was hoping for the level 300 and 400 sessions to be really technical, but it turns out that I got the most useful and interesting information when talking to Microsoft, their partners and other delegates.

Talking to Microsoft, and especially to "rock stars" like Ben Armstrong, José Barreto, Mark Russinovich, Jeffrey Woolsey and their ilk, wasn't all that easy though, as a lot of delegates didn't care that others also wanted a chance to ask questions and instead kept flapping their gums forever.

I understand why this happens, I really do, and I was probably guilty of it myself when chatting with Ben Armstrong during the final minutes of Ask the Experts but to my defense the guy who wanted to chime in was sitting behind me, quiet as a mouse.

Still, it's annoying as hell having to rush off to the next session after waiting, to no avail, for the guy in front of me to stop yapping.

Would I recommend TEE to my colleagues, customers and others living in Europe? Absolutely. I know the "big one" is in North America, but there are a number of pros to going to TEE instead.

First of all, it's a much shorter (not to mention cheaper) trip. You can spend that time doing something a lot more inspiring than sitting on a plane. Second, all the big names are at TEE as well. There were a couple of speakers missing, perhaps because Build was held the same week, but as I mentioned above, all the headlining names were present and accounted for.

Last, but not least: as TechEd North America is held before TEE, all the kinks in the keynotes, sessions, hands-on labs, demos, etc. have a chance to be worked out in time for TEE. Because of the scheduling you also have the chance to look through the comments and feedback on the material posted on Channel 9, and even check out some of the recorded sessions if you want to make sure that you attend the right ones.

Ok, enough of this – I'm off to find me some tapas and a beer or two. I'll be spending the weekend in Madrid and it'd be a shame to let this lovely weather go to waste.

Hasta luego!

Posted 28 June, 2013 by martinnr5 in Elsewhere, Opinion


A short notice on TechEd


I'll try to post something in-depth about TechEd tomorrow; there hasn't been time to do it so far.

Posted 26 June, 2013 by martinnr5 in Elsewhere, FYI
