Archive for the ‘Opinion’ Category

TechEd 2013 Europe – An interlude


Before I get into my post about virtual networking I need to clarify one piece of information that might not be obvious to those who aren't closely following the Microsoft information stream.

I've been pretty flippant about the possibility of resizing a VHDX online in a running VM, mostly just stating that it's about time this got added. What I, and a lot of others, keep forgetting (or perhaps in some cases neglecting) to mention is that this only works for a VHDX, and only when it's attached to a SCSI controller.

The VHDX requirement is no biggie (you should be using VHDX anyhow) but the SCSI controller requirement is a big one. Why? Because you can't boot a Hyper-V VM off of a SCSI controller, which means that you still can't resize your boot disks online.
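
To make it concrete, here's a minimal sketch of what the online resize looks like in PowerShell on Windows Server 2012 R2. The path, drive letter and target size are made-up examples; the disk has to be a VHDX attached to the running VM's SCSI controller, and you still have to extend the volume inside the guest afterwards.

```powershell
# On the Hyper-V host: grow a VHDX that's attached to a running VM's SCSI controller.
# Path and target size are hypothetical examples.
Resize-VHD -Path 'D:\VMs\APP01\data.vhdx' -SizeBytes 500GB

# Inside the guest: extend the volume into the newly added space
# (Storage cmdlets, Windows Server 2012 and later).
$max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
Resize-Partition -DriveLetter D -Size $max
```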

With that said, I'm immediately going to do a 180 and mention that the new Generation 2 VMs can boot off of a SCSI disk. This generation only supports Windows 8/8.1 and Windows Server 2012/2012 R2 as guests though, limiting your options quite a bit.
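
As an illustration, creating a Generation 2 VM from PowerShell on a Windows Server 2012 R2 host is a one-liner. The VM name, path, sizes and switch name below are placeholders, not anything you have to use.

```powershell
# Hypothetical names and paths. A Generation 2 VM boots from a SCSI-attached VHDX,
# which is exactly what makes its boot disk eligible for online resize.
New-VM -Name 'GEN2-TEST' -Generation 2 -MemoryStartupBytes 2GB `
       -NewVHDPath 'D:\VMs\GEN2-TEST\boot.vhdx' -NewVHDSizeBytes 60GB `
       -SwitchName 'External'
```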

Sure, you should be deploying Windows Server 2012 anyhow, but the fact of the matter is that a lot of companies haven't even moved on to 2008 R2 yet. Some of my customers are still killing off their old Windows 2000 servers.

As an aside I'd like to point out that if you have a working private cloud infrastructure then you shouldn't have to resize your boot disk, ever. Just make it, say, 200 GB, make it dynamically expanding and make sure that you're monitoring your storage as well as your VMs.
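
Something along these lines, as a sketch (the path and the 200 GB figure are just examples):

```powershell
# A dynamically expanding disk only consumes physical storage as data is written,
# so sizing it generously up front costs next to nothing.
New-VHD -Path 'D:\VMs\APP01\boot.vhdx' -SizeBytes 200GB -Dynamic

# Keep an eye on how much space the file actually takes up on the host.
Get-VHD -Path 'D:\VMs\APP01\boot.vhdx' | Select-Object Path, Size, FileSize
```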

The post about virtual networking will hopefully be up later today, but no promises as I'm catching a flight back to Sweden.

TechEd Europe 2013, Prologue


So, TechEd Europe 2013. My first TechEd ever, I might add, so my recap of the week is not going to be similar to what other blogs might post. I'm pretty sure it'd be different even if this had been my 10th TechEd, but I digress.

Tomorrow I'll try to go into a little more detail about the sessions I've attended and the talks I've had with people. Right now I just want to grab a bite to eat, take a bath, finish the book I'm reading and have an early night (far too many late nights, as it were).

Going into TEE 2013 I wasn't sure what to expect and right now, immediately after it's over, I have mixed feelings about the event. Without a doubt these will coalesce into something else over time, perhaps already by tomorrow, so take them with a grain of salt.

The event itself was very well-organized, as should be expected, and IFEMA is a great conference hall (although their toilets make an awful racket when being flushed – seriously, prolonged exposure will give you impaired hearing). While on the subject of toilets, it was very disheartening as well as disgusting to see delegates leave a toilet stall without washing their hands.

Listen, I don’t care what you do at home but in public I expect you to behave as an adult.

The sessions are what give me mixed feelings: most of them were quite good, some were great and some not all that impressive. I was hoping for the level 300 and 400 sessions to be really technical, but it turns out that I got the most useful and interesting information from talking to Microsoft, their partners and other delegates.

Talking to Microsoft, and especially to the “rock stars” like Ben Armstrong, José Barreto, Mark Russinovich, Jeffrey Woolsey and their ilk, wasn't all that easy though, as a lot of delegates didn't care that others also wanted a chance to ask questions and instead kept flapping their gums forever.

I understand why this happens, I really do, and I was probably guilty of it myself when chatting with Ben Armstrong during the final minutes of Ask the Experts, but in my defense the guy who wanted to chime in was sitting behind me, quiet as a mouse.

Still, it's annoying as hell to have to rush off to the next session after waiting in vain for the guy in front of me to stop yapping.

Would I recommend TEE to my colleagues, customers and others living in Europe? Absolutely. I know the “big one” is in North America, but there are a number of pros to going to TEE instead.

First of all, it's a much shorter (not to mention cheaper) trip, and you can spend that time doing something a lot more inspiring than sitting on a plane. Second, all the big names are at TEE as well. There were a couple of speakers missing, perhaps because Build was held the same week, but as I said, the headlining names were present and accounted for.

Last, but not least: as TechEd North America is held before TEE, all the kinks in the keynotes, sessions, hands-on labs, demos and so on have a chance to be worked out in time for TEE. Because of the scheduling you also get the chance to look through the comments and feedback on the material posted on Channel 9, and even check out some of the recorded sessions if you want to make sure that you attend the right ones.

Ok, enough of this – I'm off to find me some tapas and a beer or two. I'll be spending the weekend in Madrid, and it'd be a shame to let this lovely weather go to waste.

Hasta luego!

Posted 28 June, 2013 by martinnr5 in Elsewhere, Opinion


Some thoughts on cost in the Private Cloud


The Microsoft Reference Architecture for Private Cloud lists – among a lot of other very useful and interesting things – a couple of examples of business drivers related to the agility (previously known as time), cost and quality axes:

Agility

  • Reduce Time to Market: Implement new business solutions more quickly so revenue comes in faster.
  • Better Enable the Solution Development Life Cycle: Speed up business solutions through better facilitation for development and testing and overall faster paths to production.
  • Respond to Business Change: New requirements of existing business solutions are met more quickly.

Cost

  • Reduce Operational Costs: Lower daily operational costs for basic needs such as people, power, and space.
  • Reduce Capital Costs or Move to Annuity-Based Operational Costs: Reduced IT physical assets by using more pay-per-use services.
  • Transparency of IT Costs: Customers are more aware of what they get for their money.

Quality

  • Consistently Deliver to Better-Defined Service Levels: Leads to increased customer satisfaction.
  • Provide Better Continuity of Service: Minimize service interruptions.
  • Regulatory Compliance: Meeting or exceeding mandatory requirements, which may grow more complex with online services.

The cost examples reminded me of a discussion I had with a colleague who insisted that the best way to make money off a customer when it comes to the private cloud (the customer being “the business”) is to charge for capacity even if the customer doesn't use it.

This is in my opinion the total opposite of what the private cloud is about.

By being totally transparent about the amount of resources the customer is actually using, you let them fine-tune their demands and needs accordingly.

If we charge the customer up front for 32 GB of RAM, 500 GB of disk and 4 vCPUs even though they only use a fraction of it, then there is no real way of knowing what IT actually costs the business.

It might also prevent them from requesting the VM to begin with; instead they'll perhaps re-use an existing VM or find another – often unsupported and trouble-prone – solution.

This means that you should always charge the customer per MB of RAM, GB of disk and MHz of vCPU actually utilized. This is one aspect of the measured service characteristic of NIST's private cloud definition.

Make no mistake, you should of course still make a profit on those MB, GB and MHz, but the business should be able to see exactly how much their VMs cost them at any given time.
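
As a sketch of what measured, usage-based chargeback can look like with nothing but in-box tooling: Hyper-V resource metering (Windows Server 2012 and later) tracks average RAM, CPU and disk per VM. The VM name and the per-unit rates below are made up, and the report property names should be verified against your Hyper-V module version.

```powershell
# Turn metering on once per VM; the counters accumulate until reset.
Enable-VMResourceMetering -VMName 'APP01'

# At the end of the billing period, pull the measured usage.
$report = Measure-VM -VMName 'APP01'

# Hypothetical rates - substitute whatever pricing you agree on with the business.
$ratePerMBRam  = 0.002    # per MB of average RAM
$ratePerGBDisk = 0.10     # per GB of allocated disk
$ratePerMHzCpu = 0.001    # per MHz of average CPU

$cost = ($report.AvgRAM * $ratePerMBRam) +
        ($report.TotalDisk / 1024 * $ratePerGBDisk) +
        ($report.AvgCPU * $ratePerMHzCpu)

'{0}: {1:N2} per metering period' -f $report.VMName, $cost

# Start the next billing period from zero.
Reset-VMResourceMetering -VMName 'APP01'
```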

The Private Cloud Principles, Patterns and Concepts documentation also has a section about this.

One very interesting point that documentation makes is that by providing the business with ongoing reports on what their servers actually cost per month, you (hopefully) create an incentive to phase out old systems and services in order to reduce cost.

Transparency and honesty are always the best way to build a reliable long-term relationship with your customers, and especially so when it comes to costs.

Why System Center Orchestrator?


A customer of mine recently asked me why he needed System Center Orchestrator (I believe we're abbreviating it SCORCH, although I think SCOR is a better homonym (and why is abbreviate such a long word?)).

In reality he was most likely asking me why he needed automation at all, as he pointed out that if they really needed to automate anything they'd have done it four years ago with a bunch of PowerShell scripts and a custom-made web application. Instead of opening up that can of worms (I'll gladly do that later though) I'm going to compare SCORCH to his solution.

Based on my own rather limited knowledge of SCORCH, and the very in-depth knowledge of one of my colleagues, I'd say that there are a number of reasons why you should use SCORCH over a home-made automation engine.

It’s Microsoft

Yeah, I know a lot of you think this should be on the minus side of the equation but to be honest, Microsoft has gotten their act together quite a bit over the years and we all know that they’re in this for the long run.

Microsoft has a ton of resources – be it marketing, developers, money or industry clout – so when they set their collective hive mind on something, they make it happen (see the Xbox as a perfect example).

Another reason for this to be in the PRO camp is that Microsoft is responsible for the entire System Center suite, along with Hyper-V, which makes it a lot easier to get all the components to co-operate. And even though SCORCH isn't entirely in sync with the rest of the System Center products yet, it will be soon enough.

It hasn’t always been Microsoft

Thanks to SCORCH starting out as Opalis before being bought by Microsoft, there's a lot of functionality, and a lot of connectors, that Microsoft probably wouldn't have put in there if they'd written it from scratch.

They are, as I mentioned above, getting their act together and might have gotten around to it sooner or later, but in that case it'd be criminal of them not to focus on their own line-up of products first and only add support for competitors' products a lot further down the line.

It’s easy to get started

Easy being relative, of course, but compared to writing everything from scratch, SCORCH is a lot easier to get up and running thanks to the plethora of connectors and out-of-the-box functionality you get.

Powerful and flexible yet simple

SCORCH is powerful and flexible when you need it to be and simple when you don't. Even though you will be forced to write your own custom scripts from time to time, the graphical runbook designer in SCORCH is hard to beat when designing complex runbooks.
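
To give a feel for what those custom scripts look like, here's a hedged sketch of the kind of PowerShell you might drop into a Run .Net Script activity in a runbook. The service and server names are placeholders, and in a real runbook the hard-coded values would come from published data on the Orchestrator data bus.

```powershell
# Values that would normally arrive via Orchestrator published data.
$serviceName = 'Spooler'       # hypothetical service
$serverName  = 'APPSRV01'      # hypothetical target server

# Restart the service on the target machine and return its status,
# which the activity can publish for the next step in the runbook.
Invoke-Command -ComputerName $serverName -ScriptBlock {
    param($svc)
    Restart-Service -Name $svc -ErrorAction Stop
    (Get-Service -Name $svc).Status
} -ArgumentList $serviceName
```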

Self documenting

In a way, at least. Having a graphical overview of your runbook, with the scripts linked together, makes it a lot easier to understand what the heck is going on compared to a folder with a bunch of PowerShell scripts – no matter how intelligently you name them.

You’re not alone

There are thousands of SCORCH users who share knowledge and ideas through multiple forums, not least through Microsoft's own channels. Microsoft even provides professional support if you want to pay for it.

An off-the-shelf product

This is actually a very important point that ties in with the previous one. SCORCH being an off-the-shelf product means that any SCORCH consultant knows how it works. Sure, the actual functionality of the runbooks and custom scripts still needs to be figured out, but he or she will hit the ground running with SCORCH.

Not so with the home-built shack of scripts and web applications. On the contrary, I'm betting that the consultant who has to figure that solution out will break one or both legs when he or she hits the ground.

Future proof

SCORCH relies on a lot of the same technologies as a home-made solution does (PowerShell, IIS, etc.), so from that point of view the home-made solution isn't any less future-proof than SCORCH. I'd argue, though, that SCORCH as a whole is more future-proof.

Microsoft is going to keep evolving SCORCH, and in the process they'll make sure that as many of the dependencies surrounding it as possible evolve as well, or at least keep working with the latest version of SCORCH.

They will not care about your own home-brewed concoction of scripts and .NET pages.

It’s included in System Center

There are of course organisations that don't use System Center, but the majority of those that do would likely benefit from automation. With the new licensing model you get SCORCH “for free” when you buy System Center, so price isn't an issue either.

Are there any cons?

The only real negative aspect of running SCORCH is that you need another server for it, but as long as you adhere to the private cloud principles, adding another server shouldn't tax your IT department at all.

You already own a Datacenter license of Windows Server, so if you virtualize your SCORCH server there's no additional licensing cost either.

Conclusion

SCORCH is a very competent, flexible, well-designed, interoperable, powerful and reliable automation engine that works exceptionally well with the rest of System Center.

The new System Center licensing model gives you all the System Center products for one price, so most larger organisations probably already own SCORCH along with the other System Center applications.

Since it's a Microsoft product, there are a ton of consultants who can help you out when needed, as well as a large community and professional support options straight from Microsoft.

If you’re going to automate – and you are if you know what’s good for you – there’s a very good chance that SCORCH is the best solution for you.


Posted 24 July, 2012 by martinnr5 in Opinion, Tools


vtCommander vs 5nine Manager for Hyper-V


There's been quite a lot of talk about 5nine's Hyper-V Manager recently. In light of this I thought I'd put the spotlight on another tool I discovered a couple of weeks ago: vtCommander.

The problem is that they’re the same product.

Understandably confused, I shot an e-mail to 5nine support. Here's the reply:

5nine Software has OEM-ed (licensed) the vtCommander management components, while the Hyper-V monitoring was developed by, and is proprietary to, 5nine Software (VT Technology cross-licensed our Monitor).

Thus, from a Hyper-V management standpoint the current versions of vtCommander and 5nine Manager are basically identical, and this is the reason you get the ‘Registration Failed’ message. We control licensing to ensure that you do not get 2 instances of basically the same product in your environment.

The difference also is that 5nine has a ‘Free’ version of Hyper-V Manager, while vtCommander does not. Going forward 5nine Software will have other components that are about to be released – such as Enhanced V2V, Enhanced Monitoring, and others.

If you are buying 5nine Manager you will need to remove the trial version of vtCommander and install the ‘Paid’ or ‘Free’ version of 5nine Manager, while registering under a different e-mail.

At this time though there is no need, as the products are co-branded, unless you want to install the Free version of the Manager and for some reason prefer not to get the ‘Full’ version of either product (which has more features).

With this in mind I can heartily endorse both the 5nine Manager and vtCommander as they/it saved my bacon during the installation of a rather tricky Hyper-V cluster.

Posted 12 May, 2011 by martinnr5 in Opinion


DPM 2010


I'm on the last day of a two-day DPM 2010 class. My main reason for taking it was to learn more about how DPM can protect Hyper-V, but also because we need a better way to back up our SQL servers internally.

DPM 2007 wasn't especially great, but 2010 is a lot better and should be able to take care of the entire IT environment at a small to medium-sized business (10–250 users here in Sweden), and especially Hyper-V.

Hopefully I’ll get some time to try out a couple of scenarios with DPM 2010 and Hyper-V as the combo shows a lot of potential.

At least according to me.

Posted 29 April, 2011 by martinnr5 in Opinion


So, how about that Server Core, eh?


If there is one thing that divides the Hyper-V community it has to be how much VMware sucks: “a lot” or “a whole lot”. Also, whether a Server Core or a Full installation is the best way to go for your Hyper-V hosts.

In a majority of these blog posts I'll sound as if I, and I alone, hold the unquestionable truth in any and all matters pertinent to Hyper-V, but don't let that stop you from questioning me in the comments – I'll do my best to use simpler words and speak slowly.

Let me start by stating that Server Core can be quite useful – just not as a Hyper-V host. Let me follow up by explaining why I think it's a bad choice for that role.

First of all, if you really need a slimmed-down version of Windows to run Hyper-V on, why not go with Hyper-V Server 2008 R2 instead? It's free, it scales up to 16 nodes in a cluster and you manage it the same way you manage a Server Core installation. Yes, Hyper-V Server allows for “only” 1 TB of RAM and 8 physical CPUs, but that should be enough for most of your needs.

That you can’t use it for more than Hyper-V is a moot point as you really shouldn’t be running anything else but Hyper-V on a Hyper-V host anyway, at least not in production.

The only argument for a Server Core installation that holds water is that it requires a lot fewer updates and patches. Still not enough to warrant Server Core, though, in my opinion. If you follow Microsoft's best practices for securing a Hyper-V host and use common sense when managing it, then it should be just as secure as the rest of your servers (and some of those both could and should run Server Core).

That you need fewer resources for a Server Core installation than for a Full installation doesn't convince me either. You still need space on your system drive no matter what, and the extra gig of RAM that you might save with Server Core is not worth the management hassle.

Because this is where my main beef with running Hyper-V on a Server Core installation lies: management.

When reading other posts on the subject you hear that you have a lot of tools at your disposal that alleviate the need for a local GUI: the entire RSAT suite, sconfig, netsh and other included CLI-based tools, as well as third-party tools such as CoreConfig.

When scrutinized, these tools don’t hold water though.

RSAT is extremely handy and I use it daily for managing DNS, AD, Group Policy, DHCP and so on, but on the whole it's not all that useful for managing a Hyper-V host. Hyper-V Manager does a great job managing a working Hyper-V host, and Failover Cluster Manager handles clustered hosts quite nicely. If the cluster is operational, that is.

Allowing Server Manager to connect to a Server Core installation is a breeze (“winrm quickconfig”), but Server Manager is very limited in what it can modify on a remote server, so you still need to remote desktop to the Hyper-V host and get your hands dirty.
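
For reference, the remote-management plumbing itself is only a couple of commands. A minimal sketch, assuming a domain-joined Server Core host called HV01 (a hypothetical name) and a management workstation with PowerShell remoting available:

```powershell
# On the Server Core host (local console): open up for remote management.
winrm quickconfig
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes

# From the management workstation: when the remote GUIs fall short, a PowerShell
# remoting session gives you the same shell you'd have standing at the console.
Enter-PSSession -ComputerName HV01
```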

Sconfig, netsh and so on are very useful and can do a whole lot, but again, mostly on a server that's already functional.

CoreConfig is a great tool put together by some very talented people, but why install a CLI-based server only to manage it with a GUI from a third-party developer? If you go with the Full installation – which you've already paid for – you get the GUI as well, one developed by Microsoft no less.

Configuring a single Hyper-V host based on Server Core won't give you an ulcer, but when you need to juggle four or more hosts in a cluster with multiple VLANs, iSCSI-based storage and a couple of other networks on top of that, you quickly grow tired of the limitations of Server Core. Not to mention when you need to do some serious troubleshooting in such a cluster (which I, by the way, have done for the past couple of weeks).

There’s just so much overhead when it comes to getting all the components to fit together when all you have to work with is a limited subset of the whole toolbox.

Please use Server Core where your company's security policy dictates it, or for Domain Controllers, DNS, DHCP and similar single-purpose servers that require a minimum of work to manage or troubleshoot and where RSAT does a great job.

A Hyper-V host, though, can (and most often will) require complicated maintenance that's made even more complicated by the blunt tools that Server Core provides.

At least according to me.

Posted 29 April, 2011 by martinnr5 in Opinion

