The great mainframe utilisation debate

Are you getting the most out of yours?

Someone recently asked me why mainframes are not utilised more broadly in large organisations that have them as part of their IT infrastructure. The point of the question was not so much whether the capacity already in place is fully used, but rather that people routinely seem to put workloads that are very well suited to the mainframe onto architectures that are more costly and complex to install and run.

In terms of high-level context for this debate, many IT professionals we speak with tell us that x86 scale-out architectures (including what many call ‘private cloud’) are now very efficient compared to traditional discrete server deployments for dealing with mixed workloads. Those who are more mainframe-savvy, however, suggest that ‘big iron’ is still superior when it comes to the floor space occupied, the power consumed, and the admin resources required per unit of work done.

Of course, this kind of generalised view doesn’t take into account the fact that there are very well engineered and well run x86 environments, and (albeit less likely) very poorly implemented mainframes. Neither does it acknowledge that the nature of certain workloads and their associated constraints often precludes the mainframe as an option for hosting them.

Nevertheless, we continually come across examples of organisations migrating workloads from the mainframe onto, say, a Unix or x86-based Oracle system, only to find that software licence and running costs soar, and managing the peaks and troughs of fluctuating demand becomes a challenge. Similarly, there are examples of organisations procuring an entire new hardware and software landscape to implement a new application on a Microsoft or Oracle platform, when the same requirement could be fulfilled with lower cost, space, energy, cooling and operational implications in the existing mainframe environment.

So why is this? In our experience, there are three common reasons why IT professionals are reluctant to use the mainframe their organisation already has in place for new and changing requirements.

The first is an out-of-date perception of the mainframe as being totally proprietary, despite the advances made over the last decade. Today’s mainframe, for example, supports the majority of open standards that matter from a software architecture and development perspective, allowing a good degree of portability and interoperability with other systems. This includes concepts such as Web services, Service Oriented Architecture (SOA) and the use of modern programming languages.

The second reason is a perception that the mainframe is expensive. This is a more interesting one. If you were to start with a greenfield site, the chances are that, app for app, your outlay in the early days would be greater if you went down the mainframe route. There comes a point, however, when the curves cross, partly because the incremental cost of adding an extra unit of capacity drops off considerably once you are past a certain level compared to the x86 or Unix alternatives, and partly because the mainframe rules supreme when it comes to resource utilisation.
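To make the crossover argument concrete, here is a minimal sketch in Python. The cost figures and function names are entirely hypothetical assumptions chosen for illustration, not real pricing; the point is the shape of the curves: a platform with a high fixed cost but low incremental cost per workload eventually undercuts one with a low entry cost but a higher cost per additional workload.

```python
# Illustrative cost-crossover sketch with assumed, hypothetical figures.

def mainframe_cost(workloads: int) -> float:
    """High initial outlay, low incremental cost per extra workload
    thanks to shared, highly utilised capacity (assumed figures)."""
    fixed = 500_000          # assumed up-front investment
    incremental = 10_000     # assumed cost per additional workload
    return fixed + incremental * workloads

def scale_out_cost(workloads: int) -> float:
    """Low entry cost, but each workload brings its own servers,
    licences and admin overhead (assumed figures)."""
    fixed = 50_000
    incremental = 40_000
    return fixed + incremental * workloads

# Find the point at which the curves cross.
for n in range(1, 100):
    if mainframe_cost(n) <= scale_out_cost(n):
        print(f"Crossover at roughly {n} workloads")
        break
```

With these made-up numbers the crossover lands at around 15 workloads; in reality the figure depends heavily on licensing, staffing and utilisation levels, which is exactly why the comparison needs to be done case by case.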

On this last point, it is worth remembering that the architecture underpinning the modern mainframe originates from a time when computing power was extremely scarce and expensive. Right from the outset, it was highly optimised to natively balance the use of resources efficiently in a mixed and fluctuating workload environment. And over the years, it has only got better at this, not just in terms of core processing, but also via the introduction of ‘offload engines’. These are basically ancillary processors specialised to deal with certain types of processing (e.g. Java runtime execution), which are seamlessly invoked when required.

Meanwhile, Unix, Windows and Linux environments are still subject to the principle that if you optimise them for one type of workload, they won’t run other workloads as efficiently. This is why rapid provisioning to achieve dynamic flexibility in a private cloud setup typically still involves moving complete stack images around (from an appropriately configured operating system upwards) when an application needs to be allocated more resource. While some might refer to the x86 ‘virtual mainframe’, there are still some fundamental differences that have implications in terms of complexity and efficiency.

The third common reason for appropriate workloads not finding their way onto the mainframe is organisational prejudice and inertia. Unfortunately, as a function of history, many mainframe groups have become politically isolated over the years. This has often come about because they are perceived as being far too obsessed with matters of security, resilience and operational integrity. A common complaint from architects and developers in the distributed computing world is that they are made to jump through hoops to get even the simplest of things done when they try to engage with the mainframe guys.

The irony is that many of those responsible for distributed systems are nowadays striving to emulate the very traits they previously criticised. As x86 architectures are increasingly used for business-critical applications, for example, far more attention now has to be paid to security, resilience and so on. Nevertheless, when it comes to company politics, double standards are applied, and one man’s rigour is another man’s uptight paranoia.

When we pull all of the above together, the upshot is that many large organisations are sitting on an asset that could do much more for them and help them meet some of their operational, environmental and cost reduction objectives, but they are not taking full advantage of it. Expecting busy rank-and-file IT practitioners to deal with the barriers we have discussed, however, is unrealistic.

With this in mind, there are two groups that are important. CIOs can evaluate the role of the mainframe in the big-picture context and start to break down some of the barriers through a combination of policy, education, and the encouragement of collaboration between teams. Architects, meanwhile, can apply a much more objective approach to evaluating practical requirements and making sure workloads end up on the right technical architecture for the right reasons.

If you are sceptical about the relevance of some of the things we have been discussing, it is worth considering that one of the reasons the mainframe is often not front of mind when it comes to future planning and new requirements is that it largely just sits there doing what it is supposed to do, handling complex and critical computing requirements with minimal operational overhead and distraction. I’m sure a lot of people nurturing x86 infrastructures wish they could say the same about their environments.

By Dale Vile, MD at Freeform Dynamics.