Archive for the ‘Documentation’ Category

A short note about AD detached clusters in Windows Server 2012 R2


Yesterday Microsoft posted an article about a new way to deploy clusters in Windows Server 2012 R2 called Active Directory-detached clusters. As the name implies, this type of cluster does not rely on Active Directory in order to operate, instead using DNS for the Computer Name Object and the Virtual Computer Objects.

This is great news as I’ve had several clusters acting up due to the domain controller not being reachable, but there is one important caveat with this mode:

The intra-cluster communication would continue to use Kerberos for authentication, however, the authentication of the CNO would be done using NT LM authentication. Thus, you need to remember that for all Cluster roles that need Kerberos Authentication use of AD-detached cluster is not recommended.

This means that Live Migration isn’t supported for a Hyper-V cluster, only Quick Migration.
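For reference, this is roughly what creating an AD-detached cluster looks like in PowerShell. A minimal sketch: the node names and the static address are placeholders, and the FailoverClusters module is assumed to be available on the node you run it from.

    # Minimal sketch of creating an AD-detached cluster in Windows Server 2012 R2.
    # Node names and the static address are placeholders.
    Import-Module FailoverClusters

    New-Cluster -Name "HVCLU01" -Node "HVNODE01", "HVNODE02" `
        -StaticAddress 10.0.0.50 -NoStorage `
        -AdministrativeAccessPoint Dns   # the CNO is registered in DNS only, not in AD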

More information here.

Posted 25 March, 2014 by martinnr5 in Documentation, Operating system, Technical


A gathering of links, part 3


Sorry for the lack of content. I have something I can write about, I think, but work is getting in the way.

For now, a gathering of links instead.

Cripes! I need to do these more often; this took me forever.

If you find these useful, please rate this blog post or leave a comment. There’s really no need for me to spend my morning doing this if no-one’s going to read it. 🙂

A gathering of links, part 2


I’m not sure that I should keep using the “part n” moniker when naming these posts but for now that’s the best I got. We’ll see what happens further down the road.

During my vacation and the couple of weeks I’ve been working I’ve collected quite a few interesting links:

  • Steven Ekren describes in detail a new feature in Hyper-V 2012 (that I’ve missed) that Live Migrates a VM if a critical VM network fails.
  • Jose Barreto explains how to manage the new SMB3 features in Windows Server 2012 R2 through PowerShell.
  • Thomas Maurer has a step-by-step post on how to set up the new network virtualization gateway in Hyper-V 2012 R2.
  • Ben Armstrong details how to use PowerShell in order to keep your Hyper-V replicas up to date with the source VM.
  • If you’re interested in Dell’s new PowerEdge VRTX cluster in a box, check out this article on the Microsoft Storage team blog.
  • Over at Hyper-V.nu Marc van Eijk has a really interesting article series on how to set up Hyper-V hosts using bare metal deployment in Windows Server 2012 R2 and System Center VMM 2012 R2. So far part 1 and part 2 have been posted.
  • Didier Van Hoye takes a thorough look at RDMA and not only talks the talk, he walks the walk as well. Funny and informative, as always.
  • vNiklas rescues missing VMs after a storage migration job went haywire. Strangely enough, he didn’t use a single line of PowerShell. 🙂
  • Ben Armstrong shows how to import VMs to a new Hyper-V server without encountering issues due to incompatibilities by using some clever PowerShell.
  • If you still need more RDMA, check out Thomas Maurer’s post on the subject.
  • If you want to use PowerShell to copy files to a VM using the new guest file copy functionality in 2012 R2, vNiklas has you covered (see the sketch after this list).
  • Another post from Thomas Maurer, this time he explains the features of the Cisco UCS Manager version 1.0.1 add-in for VMM and how to install it.
  • Finally, a post about CSV cache from Elden Christensen over at the Failover Clustering and Network Load Balancing Team Blog.
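Apropos the file copy item above: a minimal sketch of what that host-to-guest copy looks like, assuming it refers to the Copy-VMFile cmdlet in 2012 R2. The VM name and paths are made up, and the “Guest Service Interface” integration service has to be enabled on the VM first.

    # Minimal sketch: copy a file from the host into a running VM with Copy-VMFile (2012 R2).
    # VM name and paths are placeholders.
    Enable-VMIntegrationService -VMName "TESTVM01" -Name "Guest Service Interface"
    Copy-VMFile -Name "TESTVM01" -SourcePath "C:\Temp\agent.msi" `
        -DestinationPath "C:\Temp\agent.msi" -FileSource Host -CreateFullPath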

Posted 13 September, 2013 by martinnr5 in A gathering of links, Documentation, Elsewhere, FYI, Technical


DPM storage calculations


I’m posting this as I couldn’t find a good single point of reference for how to calculate how much storage your DPM implementation might need.

First off: here are the formulas that DPM uses to calculate default storage space, if you want to do the math yourself. Sometimes this is the fastest way if you only need a rough estimate for a simple workload. These don’t take data growth into consideration, though.
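If you want a quick starting figure for that math, the total amount of virtual disk data on a Hyper-V host is easy to get hold of. A minimal sketch, assuming the Hyper-V PowerShell module and that you run it on the host itself:

    # Rough input for the DPM storage math: total size of all VHD/VHDX files on this host.
    Get-VM | Get-VMHardDiskDrive | Where-Object Path |
        ForEach-Object { Get-VHD -Path $_.Path } |
        Measure-Object -Property FileSize -Sum |
        ForEach-Object { "{0:N0} GB of virtual disk data to protect" -f ($_.Sum / 1GB) }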

If you need more complex calculations there are a couple of calculators for DPM:

As mentioned above, none of these are for DPM 2012, but if all you need is an accurate estimate of how much storage a DPM implementation will use, they’ll do just fine.

Something else worth mentioning is that they don’t take the new limits of Hyper-V in Windows Server 2012 into consideration, but if you need to protect a cluster larger than 16 nodes you’d probably want to do the math on your own just to be sure anyhow. 🙂

The first calculator is the most detailed but only covers Exchange up to version 2007. I never use this myself.

The DPM Volume Sizing Tool is actually a set of scripts and Excel sheets that you use to gather actual data from your environment if you want to, along with a couple of Word documents on how to get the ball rolling.

The latest versions of the stand-alone calculators for DPM 2010 are more detailed than the DPM Volume Sizing Tool, but the Exchange calculator is not as detailed as the older one for Exchange and DPM 2007. In addition, these only cover a few of the workloads that DPM can protect.

Personally I do the math myself, and if I need to use a calculator I manually enter values into the Excel calculator from the DPM Volume Sizing Tool, as it both handles all workloads that DPM can protect and gives me a good summary of the storage needed.

It’d be nice to see Microsoft develop a single Excel calculator for all workloads and DPM 2012, but that doesn’t seem likely, so we’ll make do with what we’ve got.

Posted 12 September, 2013 by martinnr5 in Documentation, Elsewhere, Technical, Tools


A gathering of links, part 1


As I’ve mentioned previously, I’m not a big fan of regurgitating content from other blogs, as I gather that you already subscribe to posts from Microsoft in particular, as well as other blogs of note.

Still, this blog is one way of providing my colleagues with ongoing information about Hyper-V, Windows Server and System Center (or at least those components I find interesting), which means that from time to time I will post a round-up of links that I’ve collected.

There will be no real order or grouping to these links, at least not for now.

All hosts receive a zero rating when you try to place the VM on a Hyper-V cluster in Virtual Machine Manager – If you have a setup where you use VLAN isolation on one logical network and no isolation on another you could happen upon this known issue in VMM 2012 SP1.

Ben Armstrong posts about a nifty little trick allowing you to set your VM’s resolution to something other than the standard ones. Useful when you run client Hyper-V.

Neela Syam Kolli, Program Manager for DPM, posted a really interesting article called “How to plan for protecting VMs in private cloud deployments?” There’s quite a bit of useful information in this post, including some solid advice on how to plan your VM-to-CSV ratio, a question that I’ve been asked by customers and colleagues alike, so let me go into a bit more detail on this.

From the article (my emphasis):

Whenever a snapshot is taken on a VM, snapshot is triggered on the whole CSV volume. As you may know, snapshot means that the data content at that point in time are preserved till the lifetime of the snapshot. DPM keeps snapshot on the volume on PS till the backup data transfer is complete. Once the backup data transfer is done, DPM removes that snapshot. In order to keep the content alive, any subsequent writes to that snapshot will cause volsnap to read old content, write to a new location and write the new location to the target location. For ex., if block 10 is being written to a volume where there is a live snapshot, VolSnap will read current block 10 and write it to a new location and write new content to the 10th block. This means that as long as the snapshot is active, each write will lead to one read and two writes. This kind of operation is called Copy On Write (COW). Even though the snapshot is taken on a VM, actual snapshot is happening on whole volume. So, all VMs that are residing in that CSV will have the IO penalty due to COW. So, it is advisable to have as less number of CSVs as possible to reduce the impact of backup of this VM on other VMs.  Also, as a VM snapshot will include snapshot across all CSVs that has this VM’s VHDs, the less the number of CSVs used for a VM the better in terms of backup performance.

Despite the language being a bit on the rough side, the way I interpret this is that it’s preferable to have as few VMs as possible per CSV due to the COW impact on all VHDs on a CSV. Additionally, keep all the VHDs for a VM on the same CSV, as all CSVs that host a VHD for the VM you’re protecting will suffer the COW performance hit.

The response I’ve gotten from Microsoft and the community regarding number of VMs per CSV is “as many as possible until your storage can’t handle the load” which is perfectly logical but also very vague. If you use DPM to protect your VMs you now have a bit more to lean on when sizing the environment. There is of course a reason that we use CSV and not a 1:1 ratio of volumes to VHDs so don’t go overboard with this recommendation.
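If you want to check how your own VMs map to CSVs, something like this lists every virtual disk per VM together with the CSV root it lives on, so you can spot VMs whose VHDs are spread across several CSVs. A minimal sketch, assuming the Hyper-V PowerShell module on a cluster node:

    # List each VM's virtual disks and the CSV root they live on,
    # to spot VMs with VHDs spread across more than one CSV.
    Get-VM | Get-VMHardDiskDrive | Where-Object Path |
        Select-Object VMName, Path,
            @{ Name = 'CsvRoot'; Expression = { ($_.Path -split '\\')[0..2] -join '\' } } |
        Sort-Object VMName, CsvRoot | Format-Table -AutoSize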

Moving on.

This one is pretty hardcore but interesting nonetheless: How to live debug a VM in Hyper-V. If nothing else, it shows that there’s still a use for COM ports in a VM.

vNiklas is, as always, blogging relentlessly about how to use PowerShell for pretty much everything in life. This one is about how to resize a VHDX while it’s still online in the VM.

If you want to know more about how NIC teaming in Windows Server 2012 works then Didier van Hoye takes a look at how Live Migration uses various configurations of the NIC teaming feature. A very informative post with a surprising result!

Another post from Ben Armstrong, this time doing some troubleshooting when his VMs hover at 2% longer than expected when starting up. Turns out it’s due to a name resolution issue.

Microsoft and Cisco have produced a white paper on the new Cisco N1000V Hyper-V switch extension. It’s not very technical, although informative if you need to know a little bit more about how the N1000V and the Hyper-V switch relate to each other.

The Hyper-V Management Pack Extensions 2012 for System Center Operations Manager 2012/2012 SP1 gives you even more options when monitoring Hyper-V with SCOM.

I mentioned that the post about how to live debug a VM was pretty hardcore but I’d like to revise that statement. Compared to this document called “Hypervisor Top-Level Functional Specification 3.0a: Windows Server 2012” that post is as hardcore as non-fat milk. Let me quote a select piece of text from the specification:

The guest reads CPUID leaf 0x40000000 to determine the maximum hypervisor CPUID leaf (returned in register EAX) and CPUID leaf 0x40000001 to determine the interface signature (returned in register EAX). It verifies that the maximum leaf value is at least 0x40000005 and that the interface signature is equal to “Hv#1”. This signature implies that HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL and HV_X64_MSR_VP_INDEX are implemented.

My thoughts exactly! And the specification is 418 pages long so it should last you all through your vacation.

Finally (as I’m out of time and starting to get to stuff that’s a bit old), Peter Noorderijk writes about problems using Broadcom or Emulex 10 Gbit NICs. They resolved the issue by adding Intel NICs, but another workaround is to turn off checksum offloading. Updated Emulex and Broadcom drivers should be expected.

TechEd 2013 Europe – An interlude


Before I get into my post about virtual networking I realized that I need to clarify one particular piece of information that might not be clear to those that aren’t closely following the Microsoft information stream.

I’ve been pretty flippant about the possibility to resize a VHDX online in a VM, mostly just stating that it’s about time this got added. What I, and a lot of others, are forgetting (or perhaps in some cases neglecting) to mention is that this is for a VHDX (only) that is attached to a SCSI controller (only).

The requirement for VHDX is no biggie, you should be using VHDX anyhow, but the requirement for a SCSI controller is a big one. Why? Because you can’t boot a Hyper-V VM off of a SCSI controller, which means that you still can’t increase the size of your boot disks online.
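The resize itself is a one-liner once the VHDX sits on a virtual SCSI controller. A minimal sketch, with a made-up path and size; the guest still has to extend its partition afterwards:

    # Grow a VHDX attached to a SCSI controller while the VM is running (2012 R2).
    # Path and size are placeholders.
    Resize-VHD -Path "C:\ClusterStorage\Volume1\VM01\data.vhdx" -SizeBytes 200GB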

With this said I’m immediately going to do a 180 and mention that in the new Generation 2 VMs you can boot off of a SCSI disk. This generation only supports Windows 8/8.1 and Windows Server 2012/2012 R2 though, limiting your options quite a bit.

Sure, you should be deploying WS 2012 anyhow, but the fact of the matter is that a lot of companies haven’t even moved on to 2008 R2 yet. Some of my customers are still killing off their old Windows 2000 servers.

As an aside I’d like to point out that if you have a working private cloud infrastructure then you shouldn’t have to resize your boot disk, ever. Just make it, say, 200 GB, set it to dynamic and make sure that you’re monitoring your storage as well as your VMs.
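In other words, something like this when the boot disk is created. A minimal sketch; the path is a placeholder:

    # Create a large, dynamically expanding boot disk up front so it never needs resizing.
    New-VHD -Path "C:\ClusterStorage\Volume1\VM01\boot.vhdx" -SizeBytes 200GB -Dynamic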

The post about virtual networking will hopefully be up later today but no promises as I’m catching a flight back to Sweden later.

Some thoughts on cost in the Private Cloud


The Microsoft Reference Architecture for Private Cloud lists – among a lot of other very useful and interesting things – a couple of examples of business drivers related to the agility (previously known as time), cost and quality axes:

Agility

  • Reduce Time to Market: Implement new business solutions more quickly so revenue comes in faster.
  • Better Enable the Solution Development Life Cycle: Speed up business solutions through better facilitation for development and testing and overall faster paths to production.
  • Respond to Business Change: New requirements of existing business solutions are met more quickly.

Cost

  • Reduce Operational Costs: Lower daily operational costs for basic needs such as people, power, and space.
  • Reduce Capital Costs or Move to Annuity-Based Operational Costs: Reduced IT physical assets by using more pay-per-use services.
  • Transparency of IT Costs: Customers are more aware of what they get for their money.

Quality

  • Consistently Deliver to Better-Defined Service Levels: Leads to increased customer satisfaction.
  • Provide Better Continuity of Service: Minimize service interruptions.
  • Regulatory Compliance: Meeting or exceeding mandatory requirements, which may grow more complex with online services.

The cost examples reminded me of a discussion I had with a colleague who insisted that the best way to make money off a customer when it comes to the private cloud (the customer being “the business”) is to charge for capacity even if the customer doesn’t use it.

This is in my opinion the total opposite of what the private cloud is about.

By being totally transparent with the amount of resources the customer is using, they can in turn fine-tune their demands and needs accordingly.

If we charge the customer up front for 32 GB of RAM, 500 GB of disk and 4 vCPUs even though they only use a fraction of it, then there is no real way of knowing what IT actually costs the business.

It might also prevent them from requesting the VM to begin with, instead perhaps re-using an existing VM or finding another – often unsupported and trouble prone – solution.

This means that you should always charge the customer per MB of RAM, GB of disk and MHz of vCPU utilized. This is one aspect of the measured service characteristic of NIST’s private cloud definition.
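Hyper-V’s built-in resource metering is one way to get hold of those utilization numbers. A minimal sketch; the VM name is a placeholder, and the figures would still need to be fed into whatever chargeback report you use:

    # Turn on resource metering for a VM, then read back average CPU, RAM and disk usage
    # as input for a per-MB/GB/MHz chargeback report.
    Enable-VMResourceMetering -VMName "CUSTVM01"
    # ...let it collect data for a while, then:
    Measure-VM -VMName "CUSTVM01" | Format-List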

Make no mistake, you should still make a profit on those MB, GB and MHz of course, but the business should be able to see exactly how much their VMs cost them at any given time.

The Private Cloud Principles, Patterns and Concepts documentation also has a section about this.

One very interesting point that this documentation makes is that by providing the business with ongoing reports on how much their servers actually cost per month, there’s (hopefully) an incentive to actually phase out old systems and services in order to reduce cost.

Transparency and honesty are always the best way to create a reliable long-term relationship with your customers, especially when it comes to costs.

Three MSPC Fast Track guides released


Three Microsoft Private Cloud Fast Track guides have been released.

The Microsoft Private Cloud Fast Track program is:

… a joint effort between Microsoft and its hardware partners. The goal of the program is to help organizations decrease the time, complexity, and risk of implementing private clouds. The program provides:

  • Reference implementation guidance: Lab-tested and validated guidance for implementing multiple Microsoft products and technologies with hardware that meets specific, minimum, hardware vendor-agnostic requirements. Customers can use this guidance to implement a private cloud solution with hardware they already own, or that they purchase.
  • Reference implementations: Microsoft hardware partners define physical architectures with computing, network, storage, and value-added software components that meet (or exceed) the minimum hardware requirements defined in the reference implementation guidance. Each implementation is then validated with Microsoft and made available for purchase to customers. Further details can be found by reading the information at Private Cloud How To Buy.

These guides detail the architecture of a Fast Track solution, the operations you perform on a daily basis in your cloud, as well as how to actually deploy a private cloud according to the Fast Track program. Together they comprise the Private Cloud Fast Track Reference Implementation Guidance Set.

They’re quite hefty, so it’ll take me some time to go through them, but on the other hand they’ll make for excellent summer reading.


Posted 25 July, 2012 by martinnr5 in Documentation, The cloud


Infrastructure Planning and Design Guide for System Center 2012 – Virtual Machine Manager now available


The Infrastructure Planning and Design Guide for System Center 2012 – Virtual Machine Manager is now available for download.

I haven’t taken a look at it myself but hope to be able to do so tomorrow.

Here’s a short description of it from the official announcement:

This guide outlines the elements that are crucial to an optimized design of Virtual Machine Manager. It leads you through a process of identifying the business and technical requirements for managing virtualization, designing integration with Operations Manager if required, and then determining the number, size, and placement of the VMM servers. This guide helps you to confidently plan for the centralized administration of physical and virtual machines.

Posted 24 July, 2012 by martinnr5 in Documentation, Tools


New document from Microsoft IT on SharePoint virtualization


Microsoft IT has released a new document on how and why they internally virtualized SharePoint.

Microsoft IT piloted the deployment of a virtualized Microsoft SharePoint 2010 environment using the Compute Service, which provides high density compute servers and virtual machines (VMs) to the business. The SharePoint team saw the collaboration as a way to reduce operational costs and complexity, and the Compute Service team viewed it as an opportunity to significantly enhance its infrastructure capabilities. In addition to driving down costs, the partnership helped mature the Compute Service, because it allowed the team to identify and address a business gap by upgrading its physical infrastructure.

It’s not very in-depth or technical, but it’s still worth a read. Grab it here.

Posted 11 June, 2011 by martinnr5 in Documentation
