Archive for the ‘storage’ Tag

A gathering of links, part 3


Sorry for the lack of content. I have something I can write about, I think, but work is getting in the way.

For now, a gathering of links instead.

Cripes! I need to do these more often; this took me forever.

If you find these useful, please rate this blog post or leave a comment. There’s really no need for me to spend my morning doing this if no-one’s going to read it. 🙂

DPM storage calculations


I’m posting this as I couldn’t find a good single point of reference for how to calculate how much storage your DPM implementation might need.

First off, here are the formulas that DPM uses to calculate default storage space, if you want to do the math yourself. Sometimes this is the fastest way if you only need a rough estimate for a simple workload. These don’t take data growth into consideration though.
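To make the math concrete, here’s a minimal sketch for a file-server workload. The 3/2 replica factor, the 2% daily change rate and the 1600 MB overhead are the TechNet defaults as I remember them, so treat them as assumptions and double-check against the documentation for your DPM version:

```python
# Hedged sketch: rough DPM sizing for a file-server data source, using the
# commonly cited defaults (assumptions, not gospel):
#   replica volume        = data size * 3 / 2
#   recovery point volume = data size * retention days * change rate + 1600 MB
# No data growth is factored in, matching the caveat above.

def dpm_file_storage_mb(data_mb, retention_days, daily_change=0.02):
    """Return (replica_mb, recovery_point_mb) for a file-server data source."""
    replica = data_mb * 3 / 2
    recovery_points = data_mb * retention_days * daily_change + 1600
    return replica, recovery_points

# Example: 500 GB of file data, 14-day retention, default 2% daily change.
replica, rp = dpm_file_storage_mb(500 * 1024, 14)
print(f"Replica volume:        {replica / 1024:.0f} GB")
print(f"Recovery point volume: {rp / 1024:.0f} GB")
```

That gives roughly 750 GB for the replica volume and 142 GB for recovery points; good enough for a quick sanity check, but use the calculators below for anything serious.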

If you need more complex calculations there are a couple of calculators for DPM:

As mentioned above, none of these are for DPM 2012, but if all you need is an accurate estimate of how much storage a DPM implementation will use, they’ll do just fine.

Something else worth mentioning is that they don’t take into consideration the new limits of Hyper-V in Windows Server 2012 but if you need to protect a cluster larger than 16 nodes you’d probably want to do the math on your own just to be sure anyhow. 🙂

The first calculator is the most detailed but only covers Exchange up to version 2007. I never use this myself.

The DPM Volume Sizing Tool is actually a set of scripts and Excel sheets that you use to gather actual data from your environment if you want to, along with a couple of Word documents on how to get the ball rolling.

The latest versions of the standalone calculators for DPM 2010 are more detailed than the DPM Volume Sizing Tool, but the Exchange calculator is not as detailed as the older one for Exchange and DPM 2007. In addition, these only cover a few of the workloads that DPM can protect.

Personally I do the math myself, and if I need to use a calculator I manually enter values into the Excel calculator from the DPM Volume Sizing Tool, as it both handles all the workloads that DPM can protect and gives me a good summary of the storage needed.

It’d be nice to see Microsoft develop a single Excel calculator for all workloads and DPM 2012, but that doesn’t seem likely, so we’ll make do with what we’ve got.

Posted 12 September, 2013 by martinnr5 in Documentation, Elsewhere, Technical, Tools


TechEd 2013 Europe – An interlude


Before I get into my post about virtual networking I need to clarify one particular piece of information that might not be clear to those who aren’t closely following the Microsoft information stream.

I’ve been pretty flippant about the ability to resize a VHDX online in a VM, mostly just stating that it’s about time this got added. What I, and a lot of others, are forgetting (or perhaps in some cases neglecting) to mention is that this is for a VHDX (only) that is attached to a SCSI controller (only).

The requirement for VHDX is no biggie, you should be using VHDX anyhow, but the requirement for a SCSI controller is a big one. Why? Because you can’t boot a Hyper-V VM off of a SCSI controller, which means that you still can’t online increase the size of your boot disks.

With this said I’m immediately going to do a 180 and mention that in the new Generation 2 VMs you can boot off of a SCSI disk. This generation only supports Windows 8/8.1 and Windows Server 2012/2012 R2 though, limiting your options quite a bit.

Sure, you should be deploying WS 2012 anyhow, but the fact of the matter is that a lot of companies haven’t even moved on to 2008 R2 yet. Some of my customers are still killing off their old Windows 2000 servers.

As an aside I’d like to point out that if you have a working private cloud infrastructure then you shouldn’t have to re-size your boot disk, ever. Just make it, say, 200 GB, set it to dynamic and make sure that you’re monitoring your storage as well as your VMs.

The post about virtual networking will hopefully be up later today but no promises as I’m catching a flight back to Sweden later.

TechEd Europe 2013 – Storage


[Image: Microsoft on storage]

Onwards and upwards (or, if using a logical depiction of infrastructure, downwards) to storage.

It should be obvious by now that Microsoft is very serious about storage and they’re using the term Software Defined Storage throughout their sessions. Windows Server 2012 introduced a number of great storage features and 2012 R2 expands on them.

The main point driven home by the sessions, and pretty much everything else Microsoft communicates, is that they want to enable you to build scalable enterprise solutions based on standardized, commodity hardware. The image to the right, taken from these slides, illustrates this vision.

Unlike the Hyper-V post I’m not quite sure on how to divide this post into sections so I’ll just rattle through the features to the best of my abilities.

The fact that Windows Server and System Center now have a synchronized release schedule means that VMM 2012 R2 is able to do a lot more when it comes to storage.

One of the bigger items is that it now can manage the entire Fibre Channel stack, from virtual HBAs in a VM to configuring a Cisco or Brocade SAN switch.

VMM 2012 R2 and Windows Server 2012 R2 use a new management API called SM-API that is not only a lot faster but also covers SMI-S and Storage Spaces as well as older devices. This means that VMM 2012 R2 now manages the entire Storage Spaces stack instead of just the simple management of shares found in VMM 2012 SP1.

VMM 2012 R2 uses ODX, if possible, to deploy a VM from the library to a host but not for anything else (we’ll see what happens between now and release though).

VMM 2012 can bare metal deploy Hyper-V hosts and that functionality is extended to Scale Out File Servers now as well in 2012 R2. It sets up the entire thing for you, totally automated.

In VMM 2012 R2 you can classify storage on a per share level if you wish. A volume can be Silver, for instance, but a share can have additional features that increase the classification to Gold. These classifications are user configurable. Beyond this the actual fabric can now also be classified.

As mentioned in my previous post Windows Server 2012 R2 allows you to set a per VM Quality of Service for IOPS and the VM also has a new set of metrics (that follows the VM around) that should make it a lot easier to design a solution based on facts instead of more or less qualified guesses.

Also mentioned previously is the shared VHDX (I’ll abbreviate that to sVHDX from now on) but I’d like to expand a bit on the feature. As with previous guest clustering methods you can’t back up a guest cluster using host level backup, even with sVHDX.

Something else that doesn’t work with sVHDX is Hyper-V Replica and neither does Live Storage Migration. When I mentioned this to Ben Armstrong it was quite clear that Microsoft is very well aware of these limitations. Reading between the lines; working as hard as possible to remove them.

VMM can only create a sVHDX by using templates but the Hyper-V manager exposes this functionality as well if needed.

A VM can have a Write Back Cache in 2012 R2, also persistent with the machine. Differencing disks are also cached, leading to much faster deployment of VMs. The CSV cache can now be up to 80% of the RAM.

And, just to include it, you can now expand a VHDX online.

On to the actual file server in Windows Server 2012 R2.

A lot of work has been put into enhancing performance and demos showed 1,000,000+ IOPS with randomly read 8 KB packets. Even with 32 KB packets 2012 R2 delivers 500,000+ IOPS but, most importantly, 16+ GB/second of data transfer. Note that this is with Storage Spaces on standard hardware. SMB Direct has seen a performance boost overall – especially over networks faster than 10 Gbit.
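Just to sanity-check those demo figures (my arithmetic, nothing from the session):

```python
# Back-of-the-envelope check of the demo figures quoted above.
iops_8k = 1_000_000   # 8 KB random reads
iops_32k = 500_000    # 32 KB reads

gib = 1024 ** 3  # binary gigabytes

throughput_8k = iops_8k * 8 * 1024 / gib     # ≈ 7.6 GB/s
throughput_32k = iops_32k * 32 * 1024 / gib  # ≈ 15.3 GB/s binary, ≈ 16.4 GB/s decimal

print(f"8 KB  reads: {throughput_8k:.1f} GB/s")
print(f"32 KB reads: {throughput_32k:.1f} GB/s")
```

So the quoted 16+ GB/second lines up nicely with 500,000 IOPS at 32 KB if you count in decimal gigabytes.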

De-duplication has been improved as well and now supports CSV volumes and live de-dupe of VHDX in a VDI scenario. Counterintuitively, the VMs actually boot faster in this scenario, thanks to efficient caching. If your Hyper-V host is directly attached to the storage you should never activate de-dupe though. Save that CPU for your VMs. Also, don’t enable de-dupe of VHDX for anything other than VDI scenarios if you want support from Microsoft.

The de-duplication is handled by the node that owns the CSV volume where the VHDX resides, and demos showed that it is possible to put 50 VDI VMs on a low-cost, commodity SSD, giving them great boot performance.

One feature that I really like is the automatic tiering in Storage Spaces between SSD disks and mechanical disks (no other distinction is made) that makes sure that hot bits are migrated to SSD according to schedule, or manually if you so wish. Killer feature.

2012 R2 includes a SMB Bandwidth Manager that differentiates between traffic for Live Migration, VM storage and everything else. Similar to existing QoS but for SMB.

A Scale Out File Server cluster in 2012 R2 automatically balances the ownership and access of both CSV volumes as well as shares. This in conjunction with the fact that clients now connect to a share, instead of a host, means that a guest can leverage the SOFS cluster capacity much more efficiently.

There’s a new instance in a 2012 R2 SOFS cluster that is dedicated to managing CSV traffic, improving reliability of the cluster.

If you install the iSCSI role on a 2012 R2 server you get the SMI-S provider for iSCSI as well, instead of having to install it from VMM media as it is now.

When chatting briefly with Hans Vredevoort he mentioned that NetApp has a feature that converts VMDK (VMware hard drives) to VHDX in 30 seconds by simply changing the metadata for the disk, leaving the actual data alone. Sounds amazing and I’ll try to get some more information on this.

Finally, on a parting note, I’d like to mention that when I asked José Barreto if it’d be worth the effort to convince a colleague that works solely with traditional enterprise storage to come to TechEd 2014 he thought for a while and then said that yes, it’d be worth the effort.

To echo my opening statement: it should be obvious by now that Microsoft is serious about owning the storage layer as well, and based on the hint that José gave I’m sure that, if nothing else, next year will be even more interesting.

For more on SMB3 and Windows Server 2012 R2 storage, visit José Barreto’s post on the subject.

Posted 30 June, 2013 by martinnr5 in Elsewhere, FYI, Operating system, Technical


TechEd Europe 2013 – Hyper-V


I just counted and I have over 11 pages of handwritten notes[1] from the sessions I went to, so it’ll take me some time to compile them all into something coherent. This, in addition to the fact that my current theme of the blog doesn’t lend itself very well to long blog posts (though I normally try to go for quality over quantity), means that I’ll chunk the posts into a number of categories: Hyper-V, Networking and Storage, as these are the main areas I focused on. Most likely I’ll end up with a “catch-all” post as well.

As the title of the post implies, let’s start with Hyper-V. Now, I know that there are numerous blog posts that cover what I’m about to cover but I’m summarizing the event for both colleagues and customers that weren’t able to attend so bear with me.

Hyper-V Replica

In 2012 R2 Hyper-V Replica has support for a third step in the replication process. The official name is Extended Hyper-V Replica and according to Ben Armstrong it was mainly service providers who asked for this to be implemented although I can see a number of my customers benefitting from this as well.

In order to manage large environments that implement Hyper-V Replica, Microsoft developed Hyper-V Recovery Manager (HRM), an Azure service that connects to your VMM servers and then provides Disaster Recovery protection through Hyper-V Replica on a VMM cloud level.

This requires a small agent to be installed on all VMM servers that are to be managed by the service. The VMM servers then configure the hosts, including adding the Replica functionality if needed (even the Replica Broker on clusters). After adding this agent you can add DR functionality in VM Templates in VMM.

Using HRM you can easily configure your DR protection and also orchestrate a failover including, among other things, the order in which you start up your VMs and manual steps if needed. The service can be managed from your smartphone, and there are no plans to allow organizations to deploy HRM internally.

Only the metadata used to coordinate the protection is ever communicated to the cloud, secured using certificates. All actual replication of VMs is strictly site to site, between your data centers.

If you manage to screw up your Hyper-V Replica configuration on the host level you need to manually sync with HRM to restore the settings. At least for now, R2 isn’t released yet so who knows what’ll change until then.

Finally, you are now allowed a bit more freedom when it comes to replication intervals: 30 seconds, 5 minutes and 15 minutes. Since there hasn’t been enough time to test, Microsoft won’t allow arbitrary replication intervals.

Linux

Linux guests now enjoy the same Dynamic Memory as Windows guests. Linux is now backed up using a file system freeze that gives VSS-like functionality. Finally, the video driver used when connecting to a Linux guest via VM Connect is new and vastly better than the old one.

I never got any information on which distros will be supported, but my guess is that all guests with the “R2” integration components should be good to go.

Live Migration

One big thing in 2012 R2 is that Live Migration has seen some major performance improvements. By using compression (leveraging spare host CPU cycles) or SMB/RDMA Microsoft have seen consistent performance improvements of at least 40%, in some cases 200 to 300%.

The general rule of thumb is to activate compression for networks up to 10 Gbit and use SMB/RDMA on anything faster.

Speaking of Live Migration I’d like to mention a chat I had with Ben Armstrong about Live Storage Migration performance, as I have a customer who sees issues with this. When doing a LSM of a VM you should expect 90% of the performance you get when doing an unbuffered copy to/from the same storage your VMs reside on. I might do a separate post on this just to elaborate.

Storage

In 2012 R2 you can now set Quality of Service for IOPS on a per VM level. Why no QoS for bandwidth? R2 isn’t finished so there’s still a possibility that it might show up.

One big feature, that should have been in 2012 RTM if you ask me (and numerous others), is that you can now expand a VHDX when the VM is online.

Another big feature (properly big this time) is guest clustering through a shared VHDX, in effect acting as virtual SAS storage inside your VM (using the latest, R2, integration services in your VM). More on this in my storage post though.

One more highly anticipated feature is active de-duplication of VHD/VHDX files when used in a VDI scenario. Why only VDI? Because Microsoft haven’t done enough testing. Feel free to de-dupe any VHDX you like but if things break and it’s not a VDI deployment, don’t call Microsoft. More on this in my storage post as well.

The rest

Ben Armstrong opened his session with the reflection that almost no customer is using all the features that Hyper-V 2012 offers. To me, that says a lot about the rich feature set of Hyper-V, a feature set that only gets richer in 2012 R2.

One really neat feature that shows how beneficial it is to own the entire ecosystem the way that Microsoft does is that all Windows Server 2012 R2 guests will automatically be activated if the host is running an activated data center edition of Windows Server 2012 R2. There are no plans to port this functionality to older OS, neither for guests nor for hosts.

In R2 you now get the full RDP experience, including USB redirection and audio/video, over VMBus. This means that you can connect to a VM and copy files, text, etc. even if you do not have network connectivity to the VM.

2012 R2 supports generation 2 VMs. Essentially a much slicker, simpler and more secure VM that has a couple of quirks. One being that only Windows Server 2012/2012 R2 and Windows 8/8.1 are supported as guest OSes as of now.

The seamless migration from 2012 to 2012 R2 is simply a Live Migration. The only other option is the Copy Cluster Roles wizard (new name in R2) which incurs downtime.

You can export or clone a VM while it’s running in 2012 R2.

Ben was asked whether there’d ever be the option to hot-add a vCPU to a VM, and the reply was that there really is no need for that feature, the reason being that Hyper-V incurs a very small penalty for having multiple vCPUs assigned to a VM. This is different from the best practices given by VMware, where additional vCPUs do incur a penalty if they’re not used. The take-away is that you should always deploy Hyper-V VMs with more than one vCPU.

During the “Meet the Experts” session I had the chance to sit down with Ben Armstrong and wax philosophically about Hyper-V. When I asked him about the continued innovation of Hyper-V he said that there’s another decade’s worth of features to implement. Makes me feel all warm inside.

Conclusion

As there hasn’t been a lot of time between Windows Server 2012 and the upcoming release of 2012 R2 (roughly a year), the new or improved features might not seem all that impressive when it comes to Hyper-V, but I honestly think they’re very useful and I already have a number of customer scenarios in mind where they’ll be of great use.

Not to mention that Hyper-V is only a small slice of the pie of new features in R2. I’ll be writing about some of the other slices tomorrow.

Most importantly though, and this was stressed by a fair number of speakers: Windows Server and System Center finally, for the first time ever, have a properly synchronized release schedule, allowing Microsoft to deliver on the promise of a fully functional solution for Private Clouds.

I agree with this notion as it’s been quite frustrating to have to wait for System Center to play catch-up with the OS, or vice versa.

With that, I bid you adieu. See you tomorrow.

[1] Yeah, I prefer taking notes by hand as it allows me to be a lot more flexible, not to mention that I’m faster this way. I took advantage of the offer at this TechEd and bought both the Surface RT and Pro though, so who knows, the next time I might use one of those devices.

Disk Space Fan


Yesterday I helped a customer figure out why one of his virtual hard drives was full when Windows couldn’t find enough files to justify the fullness. This wasn’t due to anything related to Hyper-V; it instead turned out to be a 40 GB cache file that SAP had generated and not removed.

If you, as I once did, know how frustrating it can be to track down where a 40 GB cache file or those gazillion Java installation packages reside, I can heartily recommend Disk Space Fan.

It’s a great tool for analyzing how the space of your hard drive is used, and pretty to look at as well.

At least according to me.
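If you’d rather skip the GUI, the same idea can be roughed out in a few lines of Python — walk the tree, list the biggest files. The function name and the top-10 default are my own invention, not anything Disk Space Fan does:

```python
# A rough stand-in for a disk-space analyzer: walk a directory tree and
# report the largest files found under it.
import os

def biggest_files(root, top_n=10):
    """Return the top_n largest files under root as (size_bytes, path) tuples."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files we can't stat (locked, permissions, etc.)
    return sorted(sizes, reverse=True)[:top_n]

if __name__ == "__main__":
    for size, path in biggest_files(os.getcwd()):
        print(f"{size / 1024**3:8.2f} GB  {path}")
```

Point it at the root of the suspect volume and that mystery 40 GB cache file shows up at the top of the list. Not as pretty as Disk Space Fan, but it works anywhere Python does.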

Posted 3 May, 2011 by martinnr5 in Tools

