Research: Virtualising mission-critical apps - part 2

John Leonard examines the technical and institutional hurdles to the virtualisation of core business systems

In part one of this research we looked at how, as a consequence of their fundamental importance to the enterprise, mission-critical applications have tended to remain stubbornly attached to bare metal.

Applications such as ERP, financial suites and HR systems represent a sticking point for consolidation because, to those responsible for them, the risk of failure or degraded performance outweighs any benefits that might accrue from virtualising and then consolidating them.

As well as cost, space and management savings, the advantages of virtualisation include the ability to integrate heterogeneous enterprise data onto a unified virtual infrastructure, allowing for high-performance interoperability and rapid data access. Such integration over high-speed connections, accompanied by low-latency memory paging and input/output (I/O) access, allows the rapid creation of high-performance analytics, business intelligence, data mining and compliance solutions.

So, a simplified, agile, high-performance infrastructure is the promise. However, worries over performance and scalability are the main technical barriers to consolidating enterprise applications, according to a Computing survey of 150 IT decision makers (see figure 1).

[Figure 1: technical barriers to consolidating enterprise applications]

Performance problems

The premise of the virtual environment is that a hypervisor assumes control of the physical machine, allowing multiple operating systems to be run on a single physical infrastructure. Memory paging, network communications and storage I/O access are controlled by the hypervisor, but appear effectively native to each guest operating system (OS).
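
In practice the illusion is almost complete. From inside a guest, one of the few routine giveaways is the CPUID "hypervisor" bit; a minimal Python sketch, assuming a Linux guest on x86, shows how it can be read:

    # Minimal sketch, assuming a Linux guest on x86. The kernel surfaces
    # the CPUID "hypervisor" bit as a flag in /proc/cpuinfo, which is one
    # of the few routine giveaways that the OS is not on bare metal.

    def cpu_flags(path="/proc/cpuinfo"):
        """Return the set of CPU feature flags reported by the kernel."""
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    if "hypervisor" in cpu_flags():
        print("This OS is a guest running under a hypervisor")
    else:
        print("No hypervisor bit set - probably bare metal")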

Early software-based hypervisors had to cope with a previous generation of CPUs that lacked virtualisation support, meaning that precious processing power was spent fooling the OS into believing it was running directly on hardware rather than in a virtual machine (VM). Processor-hungry applications often ran slowly or unreliably on these early platforms, leading many IT directors to conclude that VMs were completely incompatible with enterprise applications.

The reluctance to virtualise mission-critical applications may be, in part, a hangover from that era. Times have changed. Modern Type 1 hypervisors (see below) take full advantage of the virtualisation technologies that chip makers such as Intel and AMD have built into their latest processors, minimising the overhead on processing power.
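
Whether a particular host carries these extensions can be verified from userspace. On Linux, Intel VT-x shows up as the vmx flag and AMD-V as the svm flag in /proc/cpuinfo; a brief sketch, assuming a Linux x86 host:

    # Sketch, again assuming a Linux x86 host: report whether the CPU
    # advertises hardware virtualisation extensions, and whether the KVM
    # device node (created once the kvm module claims them) is present.
    import os

    def cpu_flags(path="/proc/cpuinfo"):
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    if "vmx" in flags:
        print("Intel VT-x supported")
    elif "svm" in flags:
        print("AMD-V supported")
    else:
        print("No hardware virtualisation extensions advertised")

    print("KVM device available:", os.path.exists("/dev/kvm"))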

Nevertheless, there will always be some difference in performance between a dedicated box and a virtualised platform, however slight, and the few specialised applications for which millisecond latency is a major issue, such as trading systems, may always be better served by dedicated hardware.

For the remainder, though, there are signs that things are changing. The Computing survey found that even applications such as financial suites have been virtualised - at least partially - in about half of the organisations questioned.

Is it safe?

Security of virtualised platforms is also a hurdle for many. Again, the technical fears are largely historical.

In early systems, there was concern that with software-based (Type 2) virtualisation approaches, the host machine's kernel could be breached through a weak spot in one guest OS. With hardware-based virtualisation and free-standing Type 1 hypervisors, that route is closed off.

Moreover, modern hardware-enhanced virtual servers have additional security features designed for large-scale virtualised environments.

The bigger security issue is one of governance. It is easy to create or remove virtual servers, and slack access-control practices can lead to unacceptable risks. In many cases security products designed for physical systems don’t work in virtual systems, or might be a drag on performance.
A VM is not inherently less secure than a physical machine, but it does introduce additional processes that firms must consider.
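
One such process is a regular audit of what is actually defined and running. As an illustration only, the sketch below uses the libvirt Python bindings to compare a host's VMs against an approved inventory; the connection URI and the approved_vms set are assumptions standing in for a real configuration database:

    # Illustrative governance check, assuming a libvirt-managed host.
    # "approved_vms" is a stand-in for whatever inventory or CMDB the
    # firm actually keeps; the connection URI is also an assumption.
    import libvirt

    approved_vms = {"erp-prod-01", "finance-db-01"}  # hypothetical inventory

    conn = libvirt.open("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "defined"
            if dom.name() not in approved_vms:
                print(f"UNTRACKED VM: {dom.name()} ({state})")
    finally:
        conn.close()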

Licensing issues

Because of the performance limitations inherent in software-based virtualisation, some vendors initially took the position that they would not license or support their software on virtual platforms. With advances in virtualisation technology, and the emergence of Type 1 hypervisors, these vendors have modified their approach.

However, conflicts remain over how virtualisation and enterprise software vendors license their products. Some software vendors, such as Oracle, operate a per-core licensing policy that counts the individual processor cores of the underlying hardware. An Oracle database running on VMware (which itself licenses per CPU), for example, requires a licence for every core in the physical host, whether the VM uses them or not. Microsoft, meanwhile, regards each VM as a physical server with the same number of sockets as the underlying hardware. Since one physical server can host multiple VMs, users of Hyper-V who do not also use Windows Server (where concessions are granted for multiple VMs) may, in reality, need to look for an alternative hypervisor.
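
The arithmetic makes the point. The figures in the sketch below are purely illustrative (core factors and counts vary by processor and contract), but they show why counting physical cores rather than vCPUs stings:

    # Illustrative per-core licence count for one VM on a virtualised host.
    # Under a policy that counts physical cores, the VM's vCPU allocation
    # is irrelevant: every core in the host must be licensed.
    sockets = 2
    cores_per_socket = 8
    vcpus_in_vm = 4       # makes no difference to the bill
    core_factor = 0.5     # illustrative x86 core factor - check current tables

    physical_cores = sockets * cores_per_socket     # 16
    licences = physical_cores * core_factor         # 8.0
    print(f"{physical_cores} host cores -> {licences:g} processor licences")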

Institutional barriers

The main institutional constraint on our respondents migrating their enterprise applications to virtual environments is lack of resources (see figure 2). In the present business climate this is scarcely surprising: budget is going to be the major issue for any IT initiative, successful or otherwise.

[Figure 2: institutional barriers to migrating enterprise applications to virtual environments]

Second on the list, fear of disruption to end-users, is a sensible concern with any major IT programme, especially when the systems involved are mission-critical.

What is clear is that, for the survey sample at least, “departmental resistance”, which often figures highly in explanations of organisational resistance to enterprise application virtualisation, is far less of a factor, with only 25 per cent seeing it as a major issue. That is still a significant number, and one to be read alongside the balance-of-power results, but it ranks well below resources.

Tale of two hypervisors

Type 1
The original virtualisation platforms on early x86 hardware employed bare-metal, or Type 1, hypervisors, which form a layer between the hardware and the guest operating systems with privileged access to the CPU at the so-called "ring 0" level.

With chipsets now designed with virtualisation in mind, Type 1 hypervisors can take even more privileged access to the CPU (at the so-called ring -1 level), managing many low-level functions directly and allowing each guest operating system (OS) to control its own ring 0 operations in isolation from the other guest OSs and from the hypervisor.

Modern free-standing Type 1 hypervisors, such as VMware ESX, are tightly integrated with the supporting hardware, avoiding hardware contention and greatly reducing the performance problems that plagued earlier versions.

Type 2
Type 2 hypervisors, such as Oracle VM VirtualBox, use a software-based virtualisation approach, creating virtual environments for guest OSs on top of a native OS kernel. This has the advantage that it can run on almost any modern x86-based machine, given sufficient memory and processing power.

However, it requires certain low-level software work-arounds to emulate a native x86 environment.
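
Because a Type 2 hypervisor runs as an ordinary application on the host OS, its VMs can be driven like any other program. A small sketch, assuming VirtualBox and its VBoxManage command-line tool are installed:

    # Sketch assuming VirtualBox's VBoxManage CLI is installed and on the
    # PATH. A Type 2 hypervisor runs as an ordinary host-OS process, so
    # its VMs can be enumerated with a normal command-line call.
    import subprocess

    def vbox_list(what):
        result = subprocess.run(["VBoxManage", "list", what],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    print("Registered VMs:\n" + vbox_list("vms"))
    print("Running VMs:\n" + vbox_list("runningvms"))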

An alternative Type 2 approach is to “paravirtualise”: modifying the guest OS so that it co-operates directly with the hypervisor, in effect creating operating system hybrids for the virtual environment.

The drawback with both of these approaches is that they can entail performance degradation and a potential loss of stability and functionality for some applications.