I’d like to understand why companies feel that blade servers are worthwhile investments. I recognise the arguments for higher density computing within standard rack enclosures, and moving “profiles” between enclosures, etc. These arguments aren’t much different from those used to justify virtualisation – and for what it’s worth, I completely agree with those arguments.
Where I get puzzled with blades, though, is that I see them as essentially virtualisation at the wrong layer – at a deeper hardware layer than VMware or other hypervisor-style virtualisation. Looking at, say, ESX as an example, we virtualise on a host and then present a very generic hardware profile to the guests, and the guests share the hardware resources available.
Blades are essentially the same – but the sharing seems subtly different to me. Or to be more precise, the sharing seems to be less isolated. A blade can cock up and cause issues for other blades or for the blade chassis itself – a misbehaving NIC or FC interconnect, for instance. Within a virtualised environment this is much more difficult, from my observation at least. I can’t remember the last time I saw a single VM encounter an issue so significant that it took out an entire ESX server in an uncontrolled way.
Have I seen blades and blade chassis do that? Yes.
So here are my general questions:
(a) Are blade systems as “reliable” as virtualisation systems?
(b) Other than providing a higher compute density, do blade servers provide more functionality than virtualisation systems?
(c) For users of blade servers – would you consider them to be more or less reliable than server virtualisation?
Don’t get me wrong – I’m not dissing blade servers, and I remain open-minded about them. But I need to understand them a little better than I currently do, and I’m happy for experts to cite some “killer” examples of why my concerns are unfounded.