Archive for the ‘The cloud’ Category

TechEd Europe 2013 – The rest


Here’s the rest of the stuff I picked up at TechEd Europe that I felt didn’t fit into any of the other posts. It’s not arranged in any particular order, so apologies in advance for that. It’s also not a very long post, as most of the important stuff is covered in my other TechEd posts.

One thing I’m really interested in is using the cloud – which in my case means Azure – to host test and QA environments. I have one customer in particular that could really benefit from this, as they A) have a huge number of test and QA servers in their own (static) data center, which equates to a lot of wasted resources, and B) don’t have enough test and QA environments, resulting in the dreaded “test in production” syndrome.

This customer is quite large, though, and as I mentioned, their data center is anything but dynamic, so introducing this model is going to be a huge uphill struggle, both from an economic standpoint and a political one.

It is very interesting though so I’ll work on a small and simple proposal to test the waters and see what their reaction is.

Speaking of Azure, the new Windows Azure Pack for your private cloud is another interesting subject, but I haven’t had the time to read up on it yet. I just wanted to mention that one idea Microsoft presented was to use it as a portal for VMM, but when I asked why this was a better idea than using App Controller or Service Manager I couldn’t really get a straight answer.

As I said though, an interesting subject and I’ll try to get back to it later on.

Finally, a short note on the new quorum model in Windows Server 2012 R2. The model is very simple now: it’s just a vote majority where both the nodes and the witness disk get a vote. The dynamic quorum model automatically calculates what gets a vote, though, so the best practice from Microsoft is to always add a witness disk and then let the cluster decide whether the disk should have a vote or not.

The same dynamic also adjusts for failures in nodes or the witness disk, so a cluster can withstand a lot more abuse now. When talking to Ben Armstrong about a customer that might scale up to roughly 30 hosts, he mentioned that one thing Microsoft needs to communicate better is that large clusters aren’t a problem, especially not in 2012 R2.
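Just to wrap my head around the vote math in that new quorum model, here’s a minimal Python sketch of the idea as I understood it; the assumption that the witness only gets a vote when the node count is even (keeping the total odd) is my own reading of the dynamic calculation, not actual cluster service logic.

    # Toy model of dynamic quorum vote counting, purely for illustration.
    # Assumption: the witness only gets a vote when it keeps the total
    # number of votes odd, i.e. when the number of voting nodes is even.

    def quorum_votes(active_nodes: int, has_witness: bool) -> tuple[int, int]:
        """Return (total votes in play, votes needed for majority)."""
        witness_vote = 1 if has_witness and active_nodes % 2 == 0 else 0
        total = active_nodes + witness_vote
        return total, total // 2 + 1

    # 4-node cluster with a witness: 5 votes in play, 3 needed to stay up.
    print(quorum_votes(4, True))   # (5, 3)
    # 3-node cluster with a witness: the witness vote is withheld, 2 of 3 needed.
    print(quorum_votes(3, True))   # (3, 2)

The point of the sketch is really just that you never have to do this math yourself anymore: add the witness and let the cluster sort out the votes.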

And that wraps up my TechEd Europe 2013 notes. I probably missed something or talked about something more than once but so be it. If you should have any questions, feel free to ask away in the comments and I’ll do my best to answer them.

Thanks for reading.

TechEd Europe 2013 – Hyper-V


I just counted and I have over 11 pages of handwritten notes [1] from the sessions I went to, so it’ll take me some time to compile them all into something coherent. This, plus the fact that my blog’s current theme doesn’t lend itself very well to long posts (though I normally try to go for quality over quantity), means that I’ll split the posts into a number of categories: Hyper-V, Networking and Storage, as these are the main areas I focused on. Most likely I’ll end up with a “catch-all” post as well.

As the title of the post implies, let’s start with Hyper-V. Now, I know there are numerous blog posts that cover the same ground, but I’m summarizing the event for colleagues and customers who weren’t able to attend, so bear with me.

Hyper-V Replica

In 2012 R2, Hyper-V Replica supports a third step in the replication process. The official name is Extended Hyper-V Replica, and according to Ben Armstrong it was mainly service providers who asked for this to be implemented, although I can see a number of my customers benefitting from it as well.

To manage large environments that implement Hyper-V Replica, Microsoft developed Hyper-V Replica Manager (HRM), an Azure service that connects to your VMM servers and then provides disaster recovery protection through Hyper-V Replica at the VMM cloud level.

This requires a small agent to be installed on all VMM servers that are to be managed by the service. The VMM servers then configure the hosts, including adding the Replica functionality if needed (even the Replica Broker on clusters). After adding this agent you can add DR functionality to VM templates in VMM.

Using HRM you can easily configure your DR protection and also orchestrate a failover, including, among other things, the order in which your VMs start up and any manual steps if needed. The service can be managed from your smartphone, and there are no plans to allow organizations to deploy HRM internally.

Only the metadata used to coordinate the protection is ever communicated to the cloud, secured using certificates. All actual replication of VMs is strictly site to site, between your own data centers.

If you manage to screw up your Hyper-V Replica configuration at the host level you need to manually sync with HRM to restore the settings. At least for now; R2 isn’t released yet, so who knows what’ll change before then.

Finally, you now have a bit more freedom when it comes to replication intervals: 30 seconds, 5 minutes or 15 minutes. Since there hasn’t been enough time to test other values, Microsoft won’t allow arbitrary replication intervals.

Linux

Linux guests now enjoy the same Dynamic Memory as Windows guests. Linux guests are backed up using a file system freeze that gives VSS-like functionality. Finally, the video driver used when connecting to a Linux guest via VMConnect is new and vastly better than the old one.

I never got any information on which distros will be supported, but my guess is that all guests with the “R2” integration components should be good to go.

Live Migration

One big thing in 2012 R2 is that Live Migration has seen some major performance improvements. By using compression (leveraging spare host CPU cycles) or SMB/RDMA, Microsoft has seen consistent performance improvements of at least 40%, and in some cases 200 to 300%.

The general rule of thumb is to activate compression for networks up to 10 Gbit and use SMB/RDMA on anything faster.
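Translated into a trivial sketch (the 10 Gbit threshold is the guidance from the session; the function and the RDMA check are purely my own illustration):

    # Rule of thumb from the session: compress on links up to 10 Gbit/s,
    # prefer SMB with RDMA on anything faster (assuming RDMA-capable NICs).

    def live_migration_option(link_gbps: float, rdma_capable: bool) -> str:
        if link_gbps > 10 and rdma_capable:
            return "SMB/RDMA"
        return "Compression"

    print(live_migration_option(10, rdma_capable=False))  # Compression
    print(live_migration_option(40, rdma_capable=True))   # SMB/RDMA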

Speaking of Live Migration, I’d like to mention a chat I had with Ben Armstrong about Live Storage Migration performance, as I have a customer who sees issues with this. When doing an LSM of a VM you should expect roughly 90% of the performance you get when doing an unbuffered copy to/from the same storage your VMs reside on. I might do a separate post on this just to elaborate.

Storage

In 2012 R2 you can now set Quality of Service for IOPS on a per-VM level. Why no QoS for bandwidth? R2 isn’t finished, so there’s still a possibility that it might show up.

One big feature, which should have been in 2012 RTM if you ask me (and numerous others), is that you can now expand a VHDX while the VM is online.

Another big feature (a properly big one this time) is guest clustering through a shared VHDX, which in effect acts as virtual SAS storage inside your VM (provided you use the latest R2 integration services in the VM). More on this in my storage post, though.

One more highly anticipated feature is active de-duplication of VHD/VHDX files when used in a VDI scenario. Why only VDI? Because Microsoft hasn’t done enough testing. Feel free to de-dupe any VHDX you like, but if things break and it’s not a VDI deployment, don’t call Microsoft. More on this in my storage post as well.

The rest

Ben Armstrong opened his session with the reflection that almost no customer is using all the features that Hyper-V 2012 offers. To me, that says a lot about the rich feature set of Hyper-V, a feature set that only gets richer in 2012 R2.

One really neat feature, which shows how beneficial it is to own the entire ecosystem the way Microsoft does, is that all Windows Server 2012 R2 guests will automatically be activated if the host is running an activated Datacenter edition of Windows Server 2012 R2. There are no plans to port this functionality to older operating systems, neither for guests nor for hosts.

In R2 you now get the full RDP experience, including USB redirection and audio/video, over VMBus. This means that you can connect to a VM and copy files, text, etc. even if you don’t have network connectivity to the VM.

2012 R2 supports generation 2 VMs: essentially a much slicker, simpler and more secure VM that has a couple of quirks, one being that only Windows Server 2012/2012 R2 and Windows 8/8.1 are supported as guest OSes for now.

The seamless migration from 2012 to 2012 R2 is simply a Live Migration. The only other option is the Copy Cluster Roles wizard (a new name in R2), which incurs downtime.

You can export or clone a VM while it’s running in 2012 R2.

Ben was asked whether there’d ever be an option to hot-add a vCPU to a VM, and the reply was that there really is no need for that feature, the reason being that Hyper-V has a very small penalty for having multiple vCPUs assigned to a VM. This is different from the best practices given by VMware, where additional vCPUs do incur a penalty if they’re not used. The take-away is that you should always deploy Hyper-V VMs with more than one vCPU.

During the “Meet the Experts” session I had the chance to sit down with Ben Armstrong and wax philosophical about Hyper-V. When I asked him about the continued innovation of Hyper-V he said that there’s another decade’s worth of features to implement. Makes me feel all warm inside.

Conclusion

As there hasn’t been a lot of time between Windows Server 2012 and the upcoming release of 2012 R2 (roughly a year), the new or improved Hyper-V features might not seem all that impressive, but I honestly think they’re very useful and I already have a number of customer scenarios in mind where they’ll be of great use.

Not to mention that Hyper-V is only a small slice of the pie of new features in R2. I’ll be writing about some of the other slices tomorrow.

Most importantly, though (and this was stressed by a fair number of speakers), Windows Server and System Center finally have, for the first time ever, a properly synchronized release schedule, allowing Microsoft to deliver on the promise of a fully functional solution for Private Clouds.

I agree with this notion, as it’s been quite frustrating having to wait for System Center to play catch-up with the OS, or vice versa.

With that, I bid you adieu. See you tomorrow.

[1] Yeah, I prefer taking notes by hand as it allows me to be a lot more flexible, not to mention that I’m faster this way. I did take advantage of the offer at this TechEd and bought both the Surface RT and the Pro, though, so who knows, next time I might use one of those devices.

Some thoughts on cost in the Private Cloud


The Microsoft Reference Architecture for Private Cloud lists – among a lot of other very useful and interesting things – a couple of examples of business drivers related to the agility (previously known as time), cost and quality axes:

Agility

  • Reduce Time to Market: Implement new business solutions more quickly so revenue comes in faster.
  • Better Enable the Solution Development Life Cycle: Speed up business solutions through better facilitation for development and testing and overall faster paths to production.
  • Respond to Business Change: New requirements of existing business solutions are met more quickly.

Cost

  • Reduce Operational Costs: Lower daily operational costs for basic needs such as people, power, and space.
  • Reduce Capital Costs or Move to Annuity-Based Operational Costs: Reduced IT physical assets by using more pay-per-use services.
  • Transparency of IT Costs: Customers are more aware of what they get for their money.

Quality

  • Consistently Deliver to Better-Defined Service Levels: Leads to increased customer satisfaction.
  • Provide Better Continuity of Service:Minimize service interruptions.
  • Regulatory Compliance: Meeting or exceeding mandatory requirements, which may grow more complex with online services.

The cost examples reminded me of a discussion I had with a colleague who insisted that the best way to make money off a customer when it comes to the private cloud (the customer being “the business”) is to charge for capacity even if the customer doesn’t use it.

This is in my opinion the total opposite of what the private cloud is about.

By being totally transparent about the amount of resources the customer is using, they can in turn fine-tune their demands and needs accordingly.

If we charge the customer up front for 32 GB of RAM, 500 GB of disk and 4 vCPUs even though they use only a fraction of it, then there is no real way of knowing what IT actually costs the business.

It might also prevent them from requesting the VM in the first place, instead perhaps re-using an existing VM or finding another, often unsupported and trouble-prone, solution.

This means that you should always charge the customer per MB of RAM, GB of disk and MHz of vCPU utilized. This is one aspect of the measured service characteristic in NIST’s private cloud definition.

Make no mistake, you should of course still make a profit on those MB, GB and MHz, but the business should be able to see exactly how much their VMs cost them at any given time.
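To make that concrete, a back-of-the-envelope chargeback calculation could look something like the sketch below; every rate and usage figure in it is made up purely for illustration.

    # Toy usage-based chargeback model: the business pays for what a VM
    # actually consumes, at rates that already include our margin.
    # All rates and figures below are invented for illustration only.

    RATES = {
        "ram_mb_hour": 0.00002,   # per MB of RAM per hour
        "disk_gb_hour": 0.00010,  # per GB of disk per hour
        "cpu_mhz_hour": 0.00001,  # per MHz of vCPU actually used per hour
    }

    def monthly_cost(ram_mb: float, disk_gb: float, cpu_mhz: float, hours: int = 720) -> float:
        """Cost of one VM, given its average consumption over a month."""
        return hours * (ram_mb * RATES["ram_mb_hour"]
                        + disk_gb * RATES["disk_gb_hour"]
                        + cpu_mhz * RATES["cpu_mhz_hour"])

    # A VM averaging 4 GB of RAM, 120 GB of disk and 500 MHz of CPU:
    print(f"{monthly_cost(ram_mb=4096, disk_gb=120, cpu_mhz=500):.2f} per month")

The exact numbers are irrelevant; what matters is that the monthly report the business receives maps directly to what their VMs actually consumed.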

The Private Cloud Principles, Patterns and Concepts documentation also has a section about this.

One very interesting point that documentation makes is that by providing the business with ongoing reports on how much their servers actually cost per month, there’s (hopefully) an incentive to phase out old systems and services in order to reduce cost.

Transparency and honesty are always the best way to create a reliable long-term relationship with your customers, especially when it comes to costs.

Three MSPC Fast Track guides released


Three Microsoft Private Cloud Fast Track guides have been released.

The Microsoft Private Cloud Fast Track program is:

… a joint effort between Microsoft and its hardware partners. The goal of the program is to help organizations decrease the time, complexity, and risk of implementing private clouds. The program provides:

  • Reference implementation guidance: Lab-tested and validated guidance for implementing multiple Microsoft products and technologies with hardware that meets specific, minimum, hardware vendor-agnostic requirements. Customers can use this guidance to implement a private cloud solution with hardware they already own, or that they purchase.
  • Reference implementations: Microsoft hardware partners define physical architectures with computing, network, storage, and value-added software components that meet (or exceed) the minimum hardware requirements defined in the reference implementation guidance. Each implementation is then validated with Microsoft and made available for purchase to customers. Further details can be found by reading the information at Private Cloud How To Buy.

These guides detail the architecture of a Fast Track solution, the operations you perform on a daily basis in your cloud, as well as how to actually deploy a private cloud according to the Fast Track program. Together they comprise the Private Cloud Fast Track Reference Implementation Guidance Set.

They’re quite hefty so it’ll take me some time to go through them but on the other hand they’ll make for excellent summer reading.

 

Posted 25 July, 2012 by martinnr5 in Documentation, The cloud


Trying to find my voice


When I started this blog I was a bit unsure about where I wanted it to go and what I wanted to do with it, as can be seen by reading my very small archive of posts.

After a long hiatus I’ve decided to give it another try, this time focusing on how organisations should relate to the Microsoft Private Cloud (MSPC) when it comes to parts other than the Infrastructure Layer, where a majority of blogs already provide great information. I’ll instead be looking at the Service Delivery, Service Operations and Management Layers of MSPC.

I’ll also try to tackle other issues related to this side of the cloud. For instance, how an organisation should approach the cloud in order to make the most sense of it, what the immediate gains are, how to sell the cloud to both end users and technicians, and so on.

The brush strokes will be pretty wide and details relatively scarce, at least at first, but hopefully I’ll be able to get more specific as I progress. At the moment there isn’t a grand plan behind these posts, but as I keep posting I hope we all can try and make some sense of them, together.

I was thinking about changing the name of the blog but in the end it doesn’t matter what the name of the blog is as long as there’s quality content on a regular basis.

So, with that said, let’s see if I can manage to keep on updating for a bit longer this time, shall we?

Posted 23 July, 2012 by martinnr5 in Meta, The cloud


Windows Azure fastest cloud provider


Ars Technica reports that an independent vendor has benchmarked numerous cloud providers over the past 12 months and found that Windows Azure is the fastest among them:

The Windows Azure data center in Chicago completed the test in an average time of 6,072 milliseconds (a little over six seconds), compared to 6.45 seconds for second-place Google App Engine. Both improved steadily throughout the year, with Azure dipping to 5.52 seconds in July and Google to 5.97 seconds.

The uptime of the Azure cloud is also very respectable, with the Chicago location managing 99.93 percent uptime over the past month.

Posted 7 October, 2011 by martinnr5 in Elsewhere, The cloud

