Archive for the ‘TechEd’ Tag

TechEd Europe 2013 – The rest

Here’s the rest of the stuff I picked up at TechEd Europe that didn’t fit into any of the other posts. It’s not arranged in any particular order, so apologies in advance for that. It’s also not a very long post, as most of the important material is covered in my other TechEd posts.

One thing I’m really interested in is using the cloud – which in my case means Azure – to host test and QA environments. I have one customer in particular that could really benefit from this as they A) have a huge number of test and QA servers in their own (static) data center, which equates to a lot of wasted resources, and B) don’t have enough test and QA environments, resulting in the dreaded “test in production” syndrome.

This customer is quite large though, and as I mentioned their data center is anything but dynamic, so introducing this model is going to be a huge uphill struggle, from an economic standpoint as well as a political one.

It is very interesting though so I’ll work on a small and simple proposal to test the waters and see what their reaction is.

Speaking of Azure, the new Windows Azure Pack for your private cloud is another interesting subject, but I haven’t had the time to read up on it. I just wanted to mention that one idea Microsoft presented was to use it as a portal for VMM, but when I asked them why this was a better idea than using App Controller or Service Manager I couldn’t really get a straight answer.

As I said though, an interesting subject and I’ll try to get back to it later on.

Finally, a short note on the new quorum model in Windows Server 2012 R2. The model is very simple now: a straight vote majority where nodes and the witness disk each get a vote. Dynamic quorum automatically calculates what gets a vote, though, so the best practice from Microsoft is to always add a witness disk and then let the cluster decide whether the disk should have a vote or not.

The same dynamic behavior also adjusts for failures of nodes or the witness disk, so a cluster can withstand a lot more abuse now. When talking to Ben Armstrong about a customer that might scale up to roughly 30 hosts, he mentioned that one thing Microsoft needs to communicate better is that large clusters aren’t a problem, especially not in 2012 R2.
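If you want to see what dynamic quorum actually decided, the failover clustering cmdlets expose it. A quick sketch – the cluster and disk names are made up:

```powershell
# Is dynamic quorum enabled? (1 = yes, the default in 2012 and later)
(Get-Cluster -Name "HVCluster").DynamicQuorum

# DynamicWeight shows whether each node currently holds a vote;
# NodeWeight is the configured baseline
Get-ClusterNode -Cluster "HVCluster" |
    Format-Table Name, State, NodeWeight, DynamicWeight

# Per the best practice above: always configure a witness and let
# the cluster decide if it should get a vote
Set-ClusterQuorum -Cluster "HVCluster" -NodeAndDiskMajority "Cluster Disk 1"
```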

And that wraps up my TechEd Europe 2013 notes. I probably missed something or talked about something more than once but so be it. If you should have any questions, feel free to ask away in the comments and I’ll do my best to answer them.

Thanks for reading.

TechEd Europe 2013 – Virtual Networking

This post is going to be the toughest one to write as virtual networking – Hyper-V Network Virtualization in Microsoft parlance – is something that I haven’t had a lot of time to work with. It’s clear that Microsoft is heavily invested in Software Defined Networking (SDN), as quite a bit of work has been done in the R2 releases of Windows Server 2012 and System Center VMM 2012.

An overall theme in Microsoft’s work with SDN (and 2012 R2 in general, for that matter) is that of the three clouds: the Private Cloud, the Service Provider Cloud and Azure. They all need to work in harmony, requiring as little effort from the customer as possible to get up and running.

If we start with Windows Server 2012 R2, there’s a new teaming mode called “Dynamic” that chops up the network flow into what Microsoft calls flowlets, in order to distribute the load of one continuous flow over multiple team members.

Hyper-V Network Virtualization (HNV) is now supported on a network team.

NVGRE, the technique Windows Server uses to implement network virtualization, can now be offloaded to network cards that support it. Note that VMQ in combination with NVGRE will only work on NICs with the NVGRE task offload capability.

HNV is now a more integral part of the Hyper-V virtual switch, which allows third-party extensions to see traffic from both customer addresses (CA) and provider addresses (PA). HNV also dynamically learns about new addresses in the CA space, which means that 2012 R2 supports both DHCP servers and guest clusters on a virtualized network.

On that note it’s worth mentioning that the basic version of Cisco’s extension for the Hyper-V switch, the Nexus 1000V, is completely free to download and use – essentially turning your Hyper-V switch into a Cisco switch.

The Hyper-V switch has improved ACLs, which can now be set on a port as well as on an IP address. In addition to this they’re also stateful.

Receive Side Scaling (RSS) can now be used inside the VM itself. This is called vRSS.

Microsoft was keen to point out that a lot of work has been done to make SDN not only perform better but also easier to troubleshoot. There’s a new PowerShell cmdlet called Test-NetConnection that is ping, traceroute and nslookup all rolled into one, and 2012 R2 allows you to ping a PA by using “ping -p”.
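As a quick illustration of what this looks like in practice (host names and the provider address below are made up):

```powershell
# Ping, traceroute and name lookup rolled into one
Test-NetConnection -ComputerName "srv01.contoso.local" -TraceRoute

# It can also test a specific TCP port, something plain ping can't
Test-NetConnection -ComputerName "srv01.contoso.local" -Port 445

# And on a host running network virtualization you can ping a
# provider address directly
ping -p 10.20.0.11
```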

In addition to this there’s a new PowerShell cmdlet that lets you test connectivity between VMs in the CA space.

2012 R2 boasts a replacement for the tried and true NetMon, called Message Analyzer. Among a lot of other things, this new tool lets you monitor network traffic on a remote host in near real time.

There’s a built-in multi-tenant gateway between CA and PA in 2012 R2 that should be able to scale up to 100 tenants. IPAM in Windows Server 2012 R2 can now handle guest networks and can be integrated with VMM 2012 R2.

Continuing on with VMM 2012 R2 there is now support for managing top-of-rack switches from within VMM through the OMI standard. So far only Arista has announced support for this but other hardware vendors should follow. Look for the Microsoft certification logo.

This allows for a number of interesting features in VMM; one that was demonstrated was the ability to inspect the configuration of a switch and, if needed, remedy it from within VMM.

I’m not sure how many of my customers can use all these features, SDN in particular, but hopefully I’ll be able to experiment with this in my lab sometime this autumn to get some more experience with it.

TechEd 2013 Europe – An interlude

Before I get into my post about virtual networking I realized that I need to clarify one particular piece of information that might not be clear to those that aren’t closely following the Microsoft information stream.

I’ve been pretty flippant about the possibility of online resizing of a VHDX in a VM, mostly just stating that it’s about time this got added. What I, and a lot of others, keep forgetting (or perhaps in some cases neglecting) to mention is that this works for a VHDX (only) that is attached to a SCSI controller (only).

The VHDX requirement is no biggie – you should be using VHDX anyhow – but the SCSI controller requirement is a big one. Why? Because you can’t boot a Hyper-V VM off of a SCSI controller, which means that you still can’t grow your boot disks online.
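For data disks that do sit on a SCSI controller, the online resize is a one-liner. A sketch with a made-up path:

```powershell
# Grow a data VHDX while the VM is running - this works because
# the disk is attached to a SCSI controller
Resize-VHD -Path "C:\VMs\app01\data.vhdx" -SizeBytes 200GB

# The extra space then has to be claimed inside the guest,
# e.g. with Resize-Partition
```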

With that said, I’m immediately going to do a 180 and mention that the new Generation 2 VMs can boot off of a SCSI disk. This generation only supports Windows 8/8.1 and Windows Server 2012/2012 R2 as guests though, limiting your options quite a bit.

Sure, you should be deploying WS 2012 anyhow, but the fact of the matter is that a lot of companies haven’t even moved to 2008 R2 yet. Some of my customers are still killing off their old Windows 2000 servers.

As an aside I’d like to point out that if you have a working private cloud infrastructure you shouldn’t ever have to resize your boot disk. Just make it, say, 200 GB, set it to dynamic and make sure that you’re monitoring your storage as well as your VMs.
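In other words, something along these lines when creating the disk (the path is made up):

```powershell
# A dynamic VHDX only consumes physical space as data is written,
# so a generously sized boot disk costs (almost) nothing up front
New-VHD -Path "C:\VMs\app01\boot.vhdx" -SizeBytes 200GB -Dynamic
```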

The post about virtual networking will hopefully be up later today but no promises as I’m catching a flight back to Sweden later.

TechEd Europe 2013 – Storage

Onwards and upwards (or, if using a logical depiction of infrastructure, downwards) to storage.

It should be obvious by now that Microsoft is very serious about storage and they’re using the term Software Defined Storage throughout their sessions. Windows Server 2012 introduced a number of great storage features and 2012 R2 expands on them.

The main point driven home by the sessions, and pretty much everything else Microsoft communicates, is that they want to enable you to build scalable enterprise solutions on standardized, commodity hardware. The image to the right, taken from these slides, illustrates this vision.

Unlike the Hyper-V post I’m not quite sure on how to divide this post into sections so I’ll just rattle through the features to the best of my abilities.

The fact that Windows Server and System Center now have a synchronized release schedule means that VMM 2012 R2 is able to do a lot more when it comes to storage.

One of the bigger items is that it now can manage the entire Fibre Channel stack, from virtual HBAs in a VM to configuring a Cisco or Brocade SAN switch.

VMM 2012 R2 and Windows Server 2012 R2 use a new management API called SM-API that is not only a lot faster but also covers SMI-S, Storage Spaces and older devices. This means that VMM 2012 R2 now manages the entire Storage Spaces stack, instead of just simple share management as in VMM 2012 SP1.

VMM 2012 R2 uses ODX, if possible, to deploy a VM from the library to a host, but not for anything else (we’ll see what happens before the product is released though).

VMM 2012 could already bare-metal deploy Hyper-V hosts, and in 2012 R2 that functionality is extended to Scale-Out File Servers as well. It sets up the entire thing for you, totally automated.

In VMM 2012 R2 you can classify storage on a per-share level if you wish. A volume can be Silver, for instance, but a share can have additional features that increase the classification to Gold. These classifications are user configurable. Beyond this, the actual fabric can now also be classified.

As mentioned in my previous post, Windows Server 2012 R2 allows you to set a per-VM Quality of Service for IOPS, and the VM also has a new set of metrics (that follow the VM around) that should make it a lot easier to design a solution based on facts instead of more or less qualified guesses.

Also mentioned previously is the shared VHDX (I’ll abbreviate it to sVHDX from now on), but I’d like to expand a bit on the feature. As with previous guest clustering methods, you can’t back up a guest cluster using host-level backup, even with sVHDX.

Something else that doesn’t work with sVHDX is Hyper-V Replica, and neither does Live Storage Migration. When I mentioned this to Ben Armstrong it was quite clear that Microsoft is very well aware of these limitations and, reading between the lines, working as hard as possible to remove them.

VMM can only create an sVHDX through templates, but Hyper-V Manager exposes this functionality as well if needed.

A VM can have a Write-Back Cache in 2012 R2 that is persistent with the machine. Differencing disks are also cached, leading to much faster deployment of VMs. The CSV cache can now use up to 80% of RAM.

And, just to include it, you can now expand a VHDX online.

On to the actual file server in Windows Server 2012 R2.

A lot of work has been put into enhancing performance, and demos showed 1,000,000+ IOPS with random 8 KB reads. Even with 32 KB packets 2012 R2 delivers 500,000+ IOPS and, most importantly, 16+ GB/second of data transfer. Note that this is with Storage Spaces on standard hardware. SMB Direct has seen an overall performance boost – especially on networks faster than 10 Gbit.

De-duplication has been improved as well and now supports CSV volumes and live de-dupe of VHDX in a VDI scenario. Counterintuitively, the VMs actually boot faster in this scenario, thanks to efficient caching. If your Hyper-V host is directly attached to the storage you should never activate de-dupe though – save that CPU for your VMs. Also, don’t enable de-dupe of VHDX for anything other than VDI scenarios if you want support from Microsoft.
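Enabling this on a VDI volume is straightforward – a sketch with a made-up drive letter:

```powershell
# Install the de-duplication feature on the file server
Install-WindowsFeature FS-Data-Deduplication

# Enable de-dupe for a volume holding VDI VHDX files; the HyperV
# usage type is what's new in 2012 R2
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Check the space savings once the optimization jobs have run
Get-DedupStatus -Volume "E:"
```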

De-duplication is handled by the node that owns the CSV volume where the VHDX resides, and demos showed that it is possible to put 50 VDI VMs on a low-cost, commodity SSD, giving them great boot performance.

One feature that I really like is the automatic tiering in Storage Spaces between SSDs and mechanical disks (no other distinction is made) that makes sure hot data is migrated to SSD on a schedule, or manually if you so wish. Killer feature.
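Setting up a tiered space looks roughly like this – pool, friendly names and sizes below are all made up:

```powershell
# Define one tier per media type in an existing storage pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered space: 50 GB on SSD, 500 GB on spinning disk;
# hot data then gets moved to the SSD tier on a schedule
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMs" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 50GB, 500GB `
    -ResiliencySettingName Mirror
```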

2012 R2 includes an SMB Bandwidth Manager that differentiates between traffic for Live Migration, VM storage and everything else. Similar to existing QoS, but for SMB.

A Scale-Out File Server cluster in 2012 R2 automatically balances the ownership and access of both CSV volumes and shares. This, in conjunction with the fact that clients now connect to a share instead of a host, means that a guest can leverage the SOFS cluster capacity much more efficiently.

There’s a new instance in a 2012 R2 SOFS cluster that is dedicated to managing CSV traffic, improving reliability of the cluster.

If you install the iSCSI role on a 2012 R2 server you get the SMI-S provider for iSCSI as well, instead of having to install it from the VMM media as you do today.

When chatting briefly with Hans Vredevoort he mentioned that NetApp has a feature that converts VMDK (VMware hard drives) to VHDX in 30 seconds by simply changing the metadata for the disk, leaving the actual data alone. Sounds amazing and I’ll try to get some more information on this.

Finally, on a parting note, I’d like to mention that when I asked José Barreto if it’d be worth the effort to convince a colleague that works solely with traditional enterprise storage to come to TechEd 2014 he thought for a while and then said that yes, it’d be worth the effort.

To echo my opening statement: it should be obvious by now that Microsoft is serious about owning the storage layer as well, and based on the hint José gave I’m sure that, if nothing else, next year will be even more interesting.

For more on SMB3 and Windows Server 2012 R2 storage, visit José Barreto’s post on the subject.

Posted 30 June, 2013 by martinnr5 in Elsewhere, FYI, Operating system, Technical


TechEd Europe 2013 – Hyper-V

I just counted and I have over 11 pages of handwritten notes[1] from the sessions I went to, so it’ll take me some time to compile them into something coherent. This, in addition to the fact that the current theme of my blog doesn’t lend itself very well to long posts (though I normally try to go for quality over quantity), means that I’ll chunk the posts into a number of categories – Hyper-V, Networking and Storage – as these are the main areas I focused on. Most likely I’ll end up with a “catch-all” post as well.

As the title of the post implies, let’s start with Hyper-V. Now, I know that there are numerous blog posts that cover what I’m about to cover but I’m summarizing the event for both colleagues and customers that weren’t able to attend so bear with me.

Hyper-V Replica

In 2012 R2 Hyper-V Replica has support for a third step in the replication process. The official name is Extended Hyper-V Replica and according to Ben Armstrong it was mainly service providers who asked for this to be implemented although I can see a number of my customers benefitting from this as well.

In order to manage large environments that implement Hyper-V Replica, Microsoft developed Hyper-V Recovery Manager (HRM), an Azure service that connects to your VMM servers and provides Disaster Recovery protection through Hyper-V Replica at the VMM cloud level.

This requires a small agent to be installed on all VMM servers that are to be managed by the service. The VMM servers then configure the hosts, including adding the Replica functionality if needed (even the Replica Broker on clusters). After adding this agent you can add DR functionality in VM Templates in VMM.

Using HRM you can easily configure your DR protection and also orchestrate a failover, including, among other things, the order in which your VMs start up and manual steps if needed. The service can be managed from your smartphone, and there are no plans to let organizations deploy HRM internally.

Only the metadata used to coordinate the protection is ever communicated to the cloud, secured using certificates. All actual VM replication is strictly site to site, between your data centers.

If you manage to screw up your Hyper-V Replica configuration on the host level you need to manually sync with HRM to restore the settings. At least for now – R2 isn’t released yet, so who knows what’ll change before then.

Finally: you’re now allowed a bit more freedom when it comes to replication intervals – 30 seconds, 5 minutes or 15 minutes. Microsoft won’t allow arbitrary replication intervals since there hasn’t been enough time to test them.
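Which, in cmdlet form, looks something like this (the VM name is made up):

```powershell
# Only the three tested intervals are accepted: 30, 300 or 900 seconds
Set-VMReplication -VMName "sql01" -ReplicationFrequencySec 30
```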


Linux

Linux guests now enjoy the same Dynamic Memory features as Windows guests. Linux backups now use a file system freeze that provides VSS-like functionality. Finally, the video driver used when connecting to a Linux guest through VM Connect is new and vastly better than the old one.

I never got any information on which distros will be supported, but my guess is that all guests with the “R2” integration components should be good to go.

Live Migration

One big thing in 2012 R2 is that Live Migration has seen major performance improvements. By using compression (leveraging spare host CPU cycles) or SMB/RDMA, Microsoft has seen consistent performance improvements of at least 40%, in some cases 200 to 300%.

The general rule of thumb is to activate compression for networks up to 10 Gbit and use SMB/RDMA on anything faster.
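That rule of thumb translates to a single host setting – a sketch, to be run on each host:

```powershell
# 10 Gbit or slower: compress the memory transfer using spare CPU
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Faster than 10 Gbit (ideally with RDMA-capable NICs): use SMB
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```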

Speaking of Live Migration, I’d like to mention a chat I had with Ben Armstrong about Live Storage Migration performance, as I have a customer who sees issues with this. When doing an LSM of a VM you should expect 90% of the performance you get when doing an unbuffered copy to/from the same storage your VMs reside on. I might do a separate post on this just to elaborate.


Storage

In 2012 R2 you can now set Quality of Service for IOPS on a per-VM level. Why no QoS for bandwidth? R2 isn’t finished, so there’s still a possibility that it might show up.
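The QoS setting lives on the virtual disk itself. A sketch – the VM name and controller layout are made up:

```powershell
# Cap a disk at 500 normalized (8 KB) IOPS and reserve 100
Set-VMHardDiskDrive -VMName "sql01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 1 `
    -MaximumIOPS 500 -MinimumIOPS 100
```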

One big feature, that should have been in 2012 RTM if you ask me (and numerous others), is that you can now expand a VHDX when the VM is online.

Another big feature (properly big this time) is guest clustering through a shared VHDX, which in effect acts as virtual SAS storage inside your VMs (provided they run the latest, R2, integration services). More on this in my storage post though.

One more highly anticipated feature is live de-duplication of VHD/VHDX files when used in a VDI scenario. Why only VDI? Because Microsoft hasn’t done enough testing. Feel free to de-dupe any VHDX you like, but if things break and it’s not a VDI deployment, don’t call Microsoft. More on this in my storage post as well.

The rest

Ben Armstrong opened his session with the reflection that almost no customer is using all the features that Hyper-V 2012 offers. To me, that says a lot about the rich feature set of Hyper-V, a feature set that only gets richer in 2012 R2.

One really neat feature that shows how beneficial it is to own the entire ecosystem the way Microsoft does: all Windows Server 2012 R2 guests are automatically activated if the host runs an activated Datacenter edition of Windows Server 2012 R2. There are no plans to port this functionality to older OSes, neither for guests nor for hosts.

In R2 you now get the full RDP experience, including USB redirection and audio/video, over VMBus. This means that you can connect to a VM and copy files, text, etc. even if you don’t have network connectivity to the VM.

2012 R2 supports Generation 2 VMs – essentially a much slicker, simpler and more secure VM that has a couple of quirks. One being that only Windows Server 2012/2012 R2 and Windows 8/8.1 are supported as guest OSes as of now.

The seamless migration path from 2012 to 2012 R2 is simply a Live Migration. The only other option is the Copy Cluster Roles wizard (a new name in R2), which incurs downtime.

You can export or clone a VM while it’s running in 2012 R2.

Ben was asked whether there’d ever be an option to hot-add a vCPU to a VM, and the reply was that there really is no need for that feature, the reason being that Hyper-V incurs a very small penalty for having multiple vCPUs assigned to a VM. This differs from the best practices given by VMware, where additional vCPUs do incur a penalty if they’re not used. The take-away is that you should always deploy Hyper-V VMs with more than one vCPU.

During the “Meet the Experts” session I had the chance to sit down with Ben Armstrong and wax philosophical about Hyper-V. When I asked him about the continued innovation of Hyper-V he said that there’s another decade’s worth of features to implement. Makes me feel all warm inside.


As there hasn’t been a lot of time between Windows Server 2012 and the upcoming release of 2012 R2 (roughly a year), the new and improved features might not seem all that impressive when it comes to Hyper-V, but I honestly think they’re very useful and I already have a number of customer scenarios in mind where they’ll be of great use.

Not to mention that Hyper-V is only a small slice of the pie of new features in R2. I’ll be writing about some of the other slices tomorrow.

Most importantly though – and this was stressed by a fair number of speakers – Windows Server and System Center finally, for the first time ever, have a properly synchronized release schedule, allowing Microsoft to deliver on the promise of a fully functional solution for Private Clouds.

I agree with this notion as it’s been quite frustrating having to wait for System Center to play catch-up with the OS, or vice versa.

With that, I bid you adieu. See you tomorrow.

1. Yeah, I prefer taking notes by hand as it allows me to be a lot more flexible, not to mention that I’m faster this way. I did take advantage of the offer at this TechEd and bought both the Surface RT and the Pro though, so who knows – next time I might use one of those devices.

TechEd Europe 2013, Prologue

So, TechEd Europe 2013. My first TechEd ever, I might add, so my recap of the week is not going to be similar to what other blogs might post. I’m pretty sure it’d be different even if this had been my 10th TechEd, but I digress.

Tomorrow I’ll try to go into a little more detail on the sessions I’ve attended and the talks I’ve had with people; right now I just want to grab a bite to eat, take a bath, finish the book I’m reading and then have an early night (far too many late nights, as it were).

Going into TEE 2013 I wasn’t sure what to expect and right now, immediately after it’s over, I’m having mixed feelings about the event. Without a doubt these will coalesce into something else over time, perhaps by tomorrow already, so take them with a grain of salt.

The event itself was very well-organized, as should be expected, and IFEMA is a great conference hall (although their toilets make an awful racket when being flushed – seriously, prolonged exposure will give you impaired hearing). While on the subject of toilets, it was very disheartening as well as disgusting to see delegates leave a toilet stall without washing their hands.

Listen, I don’t care what you do at home but in public I expect you to behave as an adult.

The sessions are what give me mixed feelings, as most of them were quite good, some were great and some not all that impressive. I guess I was hoping for the level 300 and 400 sessions to be really technical, but it turns out that I got the most useful and interesting information from talking to Microsoft, their partners and other delegates.

Talking to Microsoft, and especially to “rock stars” like Ben Armstrong, José Barreto, Mark Russinovich, Jeffrey Woolsey and their ilk, wasn’t all that easy though, as a lot of delegates didn’t care that others besides them wanted a chance to ask questions and instead kept flapping their gums forever.

I understand why this happens, I really do, and I was probably guilty of it myself when chatting with Ben Armstrong during the final minutes of Ask the Experts, but in my defense, the guy who wanted to chime in was sitting behind me, quiet as a mouse.

Still, it’s annoying as hell having to rush off to the next session after waiting, to no avail, for the guy in front of me to stop yapping.

Would I recommend TEE to my colleagues, customers and others living in Europe? Absolutely. I know the “big one” is in North America, but there are a number of pros to going to TEE instead.

First of all, it’s a much shorter (not to mention cheaper) trip – you can spend that time doing something a lot more inspiring than sitting on a plane. Second, all the big names are at TEE as well. A couple of speakers were missing, perhaps because Build was held the same week, but as I mentioned above, all the headlining names were present and accounted for.

Last, but not least: as TechEd North America is held before TEE, all the kinks in the keynotes, sessions, hands-on labs, demos, etc. have a chance to be worked out in time for TEE. The scheduling also gives you the chance to look through the comments and feedback on the material posted on Channel 9, and even check out some of the recorded sessions if you want to make sure you attend the right ones.

Ok, enough of this – I’m off to find me some tapas and a beer or two. I’ll be spending the weekend in Madrid and it’d be a shame to let this lovely weather go to waste.

Hasta luego!

Posted 28 June, 2013 by martinnr5 in Elsewhere, Opinion


A short notice on TechEd

I’ll try to post something in-depth about TechEd tomorrow; there hasn’t been time to do it so far.

Posted 26 June, 2013 by martinnr5 in Elsewhere, FYI

