On the drive to efficiency, it's important to look in the rear-view mirror
Quocirca went to VMworld Europe this week, as a guest of virtualisation vendor VMware. In an effort to avoid hoisting a size 15 carbon footprint into the stratosphere, the journey was undertaken by Eurostar and TGV. It's extremely difficult to account accurately for whether the rail journey was actually greener than a flight would have been, but the 3:30am departure from home to get to St. Pancras felt awfully virtuous. Some eleven hours later, the TGV was just pulling out of Toulon, with still an hour to go to Cannes. As the assigned seat faced backwards, every moment afforded plenty of opportunity to examine where we'd just been; a look backwards even as we moved ahead.
We should do the same with virtualisation – the promised land is only reachable if we bear in mind the problems we have had to work through to get here.
Virtualisation is touted as a mechanism that allows data centres to exploit the latent processing power already deployed on the floor. This is the efficiency story that justifies the technology, based upon avoiding the purchase of hardware that you might otherwise have needed. The logic is that you already have this great pool of processing power that you're failing to exploit, so you may as well load up the servers with additional application workload, perhaps sidestepping the twin problems of power constraints and a shortfall of floorspace for new servers.
Of course, the typical data centre is chock full of servers that have spare capacity simply because we don't trust the operating systems we've been sold to adequately isolate the applications we might otherwise have loaded onto them. Enterprise software architects have, in the past, avoided placing too many applications on one machine due to stability and security concerns, on the expectation that a crash in one application will take down the entire operating system image, cascading the effects of a single problem. Architecturally, workloads have also been separated for performance reasons, and there is some evidence to suggest that virtualisation can help on that front too.
VMware reports higher scalability and throughput for applications running in virtualised environments, though it is fair to say that virtualisation is more likely to be adopted initially to address capacity concerns than as a first-choice route to performance improvements.
Server virtualisation brings with it the promise of being able to completely isolate the various running images. In theory at least, a catastrophic failure in one image will cause no problems at all in any other image sharing the same hardware. It is prudent, however, to assume that the images are not, in fact, impervious to one another's failures and compromises.
Serious research continues to be done, for example, into novel security attacks against the hypervisor itself and, from there, into the virtualised environments and images it hosts. Such research has already found flaws in the Windows OS, in the hardware instruction sets that support virtualisation, and in the operation of the software-based control layer. While such attacks are currently highly complex to mount, they are potentially devastating because they are almost completely stealthy. Expect further bad news on this front in the future.
Data centres also find themselves with spare processor capacity because of the need to cater for peak loads, combined with the simple fact that distributed computing... well, it distributes. The processing power that one application might need at peak load might well be available on another server, but it might as well be on the moon for all the good it is doing there. A well-designed hypervisor apportions physical resources to each running image according to a predetermined set of control criteria: the priority assigned to an image, the time of day or another calendar function, and so on.
Keep in mind, however, that virtualisation's balancing act on workload demands is still ultimately hardware constrained, and if several images are hungry for resources at the same time, bottlenecks will soon emerge. But plan ahead and allow for cyclical and other peak loading, and you'll still be able to drive up overall utilisation rates appreciably.
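To make the allocation idea concrete, here is a minimal sketch of how a share-based policy might divide a fixed pool of CPU among competing images. It is illustrative only: the class and field names are invented for the example and do not correspond to any real hypervisor's API, and real products layer reservations, limits and scheduling intervals on top of the basic proportional principle shown here.

```python
# Hypothetical sketch of share-based allocation; names are illustrative,
# not any real hypervisor's API.
from dataclasses import dataclass


@dataclass
class Image:
    name: str
    shares: int      # relative priority weight set by the administrator (assumed positive)
    demand_mhz: int  # CPU the image would like right now


def allocate(capacity_mhz: int, images: list[Image]) -> dict[str, int]:
    """Apportion physical CPU among images in proportion to their shares,
    never granting an image more than it actually demands."""
    allocation = {img.name: 0 for img in images}
    remaining = capacity_mhz
    active = [img for img in images if img.demand_mhz > 0]

    # Loop because capping a low-demand image frees capacity for the others.
    while active and remaining > 0:
        total_shares = sum(img.shares for img in active)
        still_hungry = []
        for img in active:
            fair_share = remaining * img.shares // total_shares
            grant = min(fair_share, img.demand_mhz - allocation[img.name])
            allocation[img.name] += grant
            if allocation[img.name] < img.demand_mhz:
                still_hungry.append(img)
        new_remaining = capacity_mhz - sum(allocation.values())
        if new_remaining == remaining:  # no progress (e.g. rounding); stop
            break
        remaining = new_remaining
        active = still_hungry

    return allocation
```

Run it with demands that exceed the physical pool and the lower-share images are the first to be squeezed, which is exactly the bottleneck warned about above.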
Assuming we trust the virtualisation technique to sufficiently isolate all the images, it is tempting to tackle the problem of hardware constraint once and for all by avoiding the splintering of the resource pool in the first place. Lump everything together on one large machine and let the hypervisor deal out the resources according to the controlling performance criteria.
It is worth considering whether the growing acceptance of virtualisation in the Windows/Unix/Linux market might usher in a new demand for mainframe-class systems, especially with the parallel emergence of cloud computing models. Assuming that you're happy to consume application services that are hosted elsewhere, why should you care what machine you're actually running on? Perhaps what you think is a Windows machine is actually a new-generation "big iron" box. Virtualisation, after all, is something that IBM was doing on 3090 hardware in the 1980s, and at least one of Quocirca's analysts recalls virtualisation being available under CP-40 in 1967.
So the road ahead is littered with the sins and successes of the past. Many of the problems virtualisation seeks to solve are unintended consequences of the architectural choices of the last 20 years, and at least a few of its claimed benefits are a return to the future, only without the smoking DeLorean and the lovable, crazy professor. Cannes fast approaches, and with it comes the promise of a virtually spotless new world.
Simon Perry, principal associate analyst, Quocirca