The software-defined data centre explained
The software-defined data centre is an extension of current data centre trends, but vendor squabbles have confused the market
Ahead of our Data Centre & Infrastructure Summit later this month, Computing has been conducting a programme of research into the future of the data centre. One term that keeps coming up in relation to this topic is the software-defined data centre (SDDC), but during our research it became apparent that very few of the senior IT people that we spoke to had much idea about what the SDDC is, beyond vague notions of "more virtualisation". More to the point, we weren't entirely sure ourselves! So, before presenting the research findings - as we will over the coming weeks - we thought it might be an idea to get the concept nailed down.
What does it all mean?
It's easier to talk about the ideas behind SDDC than it is to define what it actually is. This is largely because ever since VMware brought the prefix "software-defined" to popular attention with its billion-dollar purchase of networking firm Nicira in 2012, networking and storage vendors have rushed to attach the label to existing product ranges, considerably muddying the waters.
The overarching vision of SDDC (and it is still very much a vision at this stage) is that control of the entire data centre is abstracted from the underlying hardware, with all physical and virtual resources made visible to software so that they can be provisioned automatically, or programmatically, according to the immediate needs of the applications and services running in the data centre. It thus goes beyond virtualisation to cover physical resources as well.
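To make the idea a little more concrete, here is a minimal sketch of it in Python. The names used (app_request, provision()) are invented for illustration and do not come from any real product: the point is simply that an application's requirements are declared as data, and a piece of software, rather than an administrator, decides how to satisfy them.

```python
# Hypothetical illustration of the SDDC idea: an application's needs are
# expressed in software and a controller works out how to meet them.
# All names here are invented for the sketch, not taken from a real product.

app_request = {
    "name": "order-service",
    "compute": {"vcpus": 4, "ram_gb": 16},
    "storage": {"capacity_gb": 500, "tier": "ssd", "replicas": 2},
    "network": {"isolated": True, "bandwidth_mbps": 1000},
}

def provision(request):
    """Stand-in controller: a real SDDC platform would call the compute,
    storage and network APIs; here we simply print the decisions."""
    for layer, spec in request.items():
        if layer == "name":
            continue
        print(f"Provisioning {layer} for {request['name']}: {spec}")

provision(app_request)
```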
This vision encompasses the three main hardware categories of the data centre - compute, network and storage - and is a logical extension of trends that have been in motion since servers were first virtualised, and indeed before that. These include:
• The increasing use of software to deliver features
• Automation
• Commoditisation and standardisation of hardware and components
• Arrival of open source and particularly open APIs
• Scale-out infrastructure and distributed computing
• Cloud computing and IT as a service (ITaaS)
• Multi-tenancy - the ability for multiple applications to share the same hardware platform.
We can add to this list the perennial goals of cutting costs, improving efficiency and availability, and responding more rapidly to changing business requirements and conditions.
Rather than being a new technology or solution, SDDC is a way of looking at data centre design to see just how much of the control functionality can be taken on by intelligent software, leaving the lumpen machinery below to do what it does best - move and process data.
The compute part of the picture - server virtualisation and perhaps the use of commodity hardware to form distributed clusters - is well understood and widely implemented so we will not discuss that element here. Instead we will move on to look at the storage and networking components of the SDDC.
Software-defined storage (SDS)
Why would anyone want to start messing around with storage? After all, storage infrastructure generally works pretty well. The technologies are mature and reliable. Moore's Law ensures capacities keep up with increases in demand, while data tiering, deduplication, compression and thin provisioning have all increased efficiency and performance.
Proponents of SDS would agree with most of this, but they would point out that for many organisations the growth in data storage needs is already pushing at the limits of Moore's Law, and that the burden recent developments such as the Internet of Things (IoT), hybrid cloud, mobile connectivity, big data, ITaaS, rich media and all the rest will place on storage means a rethink is required.
Plus, they add, the costs of managing ever more complex systems don't go down, and if something can be automated it probably should be for this reason alone.
Software-based controls of storage infrastructure have been growing in importance for years. Nevertheless, NAS, SANs, DAS, RAID arrays, DR systems, replication, backup and flash remain optimised for specific use cases and types of data. Moreover, they tend to feature proprietary, vendor-specific application-specific integrated circuits (ASICs), controllers, firmware and operating systems. This inevitably leads to the creation of silos, both in terms of data and operational specialisms, and thus a great deal of manual configuration to get all these discrete systems to talk nicely to each other.
This is not an ideal environment in which to introduce automated processes. It also means that most applications and services plugging into the various storage appliances need to be individually configured as to what to use and when, adding considerably to deployment times and maintenance needs.
Scalability is limited too. For various operational reasons you cannot just go bolting on more SANs as your needs increase. Four or five in and you will hit a wall.
However, things are changing; distinctions are blurring. Mirroring what happened with servers a few years back, storage infrastructure is becoming commoditised. So, instead of running on proprietary chipsets and circuitry, modern storage devices are starting to use standard components and operating systems, meaning that the differences between the various vendors and storage types are much less pronounced than they once were.
Compute and storage technology is moving closer together too, with flash-equipped servers increasingly called upon to perform storage duties; unlike SANs, servers really can be bolted together to form distributed clusters of almost limitless scale.
With the various elements of storage infrastructure built from the same core components and architecture, with open or industry-standard APIs becoming the norm, and with generalist operating systems and virtualisation platforms being used to run them, it is not hard to see how an intelligent software platform could be used to control and ultimately automate the provision of storage to each application or service dynamically, according to demand.
This makes multi-tenancy provisioning a viable proposition, with the resources available to individual tenants or applications subject to fine-grained controls, and spare capacity allocated where it is needed. Not surprisingly, therefore, cloud services providers have been early adopters of the software-defined approach.
This, then, is the vision described by SDS: controlling the provision of storage resources dynamically via software, according to the policies, restrictions and requirements of each application or service demanding it, thereby increasing the efficiency and agility of the data centre as a whole by breaking down the silos and automating as much as possible.
This vision does not necessarily demand virtualisation, although it will most likely feature. On the hardware side it may encompass storage appliances and arrays as well as commodity hardware, just so long as there are APIs to provide external programs with the necessary hooks.
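As a rough illustration of what such policy-driven provisioning might look like, the Python sketch below matches applications to storage pools according to tier and capacity requirements. The pools, policies and select_pool() function are invented for the example; a real SDS controller would discover its pools through the vendors' APIs and apply far richer policies.

```python
# Rough sketch of policy-driven storage provisioning. All pools, policies
# and logic are hypothetical and greatly simplified for illustration.

pools = [
    {"name": "flash-01", "tier": "ssd", "free_gb": 2000},
    {"name": "sata-01", "tier": "hdd", "free_gb": 20000},
]

policies = {
    "oltp-db": {"tier": "ssd", "capacity_gb": 800},
    "archive": {"tier": "hdd", "capacity_gb": 5000},
}

def select_pool(app):
    """Pick the first pool that matches the app's policy and has capacity."""
    policy = policies[app]
    for pool in pools:
        if pool["tier"] == policy["tier"] and pool["free_gb"] >= policy["capacity_gb"]:
            pool["free_gb"] -= policy["capacity_gb"]  # reserve the capacity
            return pool["name"]
    raise RuntimeError(f"No pool satisfies the policy for {app}")

for app in policies:
    print(app, "->", select_pool(app))
```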
Software-defined networking (SDN)
As with SDS, SDN is all about taking the control functionality that is traditionally embedded in infrastructure - in this case network switches - away from the hardware and implementing it with software, thus enabling the automation of network configurations. This, its proponents say, makes for a more intelligent network better able to cope with fast-changing demands placed upon it by cloud services and large enterprise applications.
"It's network administration and management, the velocity of new services in a multi-tenant environment, being able to virtualise the network and create tenancy, being able to more effectively monitor and analyse the traffic within the network, and achieve full network visibility - all in a fully programmatic way," Dave Ginsburg of vendor Pluribus told Computing recently.
Once again, for SDN to enable this kind of control in a multi-vendor environment, hardware vendors must provide APIs and open protocols to allow their devices to be placed under the control of external software systems.
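By way of illustration, the sketch below expresses a forwarding decision as data and shows the payload that might be handed to a controller's northbound API, which would in turn programme the switches via OpenFlow or a vendor protocol. The controller URL, rule fields and build_flow_rule() function are assumptions made for the example, not any real controller's API.

```python
import json

# Hedged sketch of the control/data-plane split: a forwarding decision is
# computed in software and handed to a controller, which programs the
# switches. The URL and rule format below are hypothetical.

CONTROLLER_URL = "http://sdn-controller.example.local/flows"  # assumption

def build_flow_rule(tenant, src_net, dst_net, allow=True):
    """Express a forwarding decision as data rather than box-by-box CLI."""
    return {
        "tenant": tenant,
        "match": {"src": src_net, "dst": dst_net},
        "action": "forward" if allow else "drop",
        "priority": 100,
    }

rule = build_flow_rule("tenant-a", "10.0.1.0/24", "10.0.2.0/24")
# In a real deployment this payload would be POSTed to CONTROLLER_URL;
# here we simply show what would be sent.
print(json.dumps(rule, indent=2))
```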
This has been a sticking point, with some vendors, notably Cisco, holding out against proposed open standards such as the widely adopted OpenFlow protocol, preferring instead their own proprietary approaches.
With each vendor going its own way, it is no surprise that CIOs are struggling to get their heads around what SDN really means. Once again, with a clear use case before them, it is the cloud service providers who are leading the charge and no doubt more will follow once things become clearer. For now, however, wait-and-see is the order of the day.
The SDDC makes a lot of sense in what it is trying to achieve by extending existing data centre trends to their logical conclusions, but the proof of the pudding is in the eating.
Just as server virtualisation brought with it a number of unforeseen and unwelcome consequences around licensing, backup and security, so SDS and SDN will also bring problems that need to be ironed out before mass adoption takes place, and more work is needed - not least the emergence of accepted standards and best practices.
Then there are the inevitable worries about the costs of moving away from current architectures, and how the new software-defined systems will sit alongside legacy infrastructure. And, last but not least, there is the question of what all this means for IT workers' jobs.
These are all issues that require answers from the industry. Until then, SDDC will remain an interesting idea, a beautiful vision to strive towards.
@_JohnLeonard
• Computing's Data Centre & Infrastructure Summit takes place in London on 24 September. For more information see www.computingsummit.com/datacentre