For enterprise AI small is beautiful, says IBM

Lightweight foundation models, transparency and open source are key to IBM’s AI strategy

Dr Juan Bernabé-Moreno, director of IBM Research UK and Ireland

Dr Juan Bernabé-Moreno, director of IBM Research UK and Ireland, talks about the barriers to AI in the enterprise, the importance of openness, the move to small, specialised models based on trusted data, and IBM's strategy.

There's a gulf between the potential that enterprises see for AI and how they are currently using it.

For many, "enterprise AI" still means accessing publicly available frontier foundation models like Claude, ChatGPT or Microsoft Copilot via the web or an API, and deploying them for business purposes such as assisting with coding, summarising documents, or coming up with vivid marketing copy.

But this definition of enterprise AI only scratches the surface. The use cases are limited, and expanding upon them is risky, says Dr Juan Bernabé-Moreno, director of IBM Research UK and Ireland.

First, the output from these models is unreliable. While it's possible to ground your prompts in your own documents using retrieval-augmented generation (RAG), the results will still be prone to inaccuracies because of vagaries in the way the model was trained and the data it was trained on.
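Conceptually, RAG just bolts a retrieval step onto the prompt. Here is a minimal sketch with a toy keyword-overlap scorer standing in for the embedding search a real system would use; the documents and function names are invented for illustration:

```python
import re

def tokens(text):
    """Lowercase word set; a crude stand-in for real embeddings."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query."""
    return sorted(docs, key=lambda d: len(tokens(query) & tokens(d)),
                  reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the model's prompt in retrieved enterprise text."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Invented example documents standing in for enterprise data.
docs = [
    "Q3 revenue grew 4%, driven by software sales.",
    "The cafeteria menu changes on Mondays.",
    "Software sales rose on strong subscription renewals.",
]
print(build_prompt("How did software sales affect revenue?", docs))
```

A production system would use a vector index instead of word overlap and send the assembled prompt to a model, but the grounding step itself is this simple, which is why the answer's accuracy still depends on the underlying model.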

To reduce these inaccuracies, which could be seriously problematic or risky in certain use cases, organisations need to combine their own data with the model and then fine-tune it for their use cases.

But adding your own organisational data to a frontier model carries a risk that it will combine with the data already in the model in unpredictable ways. Worse, sensitive information could find its way into the wrong hands. There's also an attribution risk: a third party could find evidence that their data has ended up in the model you are using in production and sue for copyright infringement.

On the other hand, if you’re not prepared to utilise company data in this way, your enterprise AI journey pretty much ends before it’s got started, said Bernabé-Moreno.

"If you don't use your own data with the foundational models the use of GenAI is very limited, right? It remains 'rewrite this email or compose a song'. So, for real enterprise use cases, what's important is not just to have a trusted model, but also to have a mechanism so that you can ground this model with your enterprise data."

All your data are belong to AI

By now pretty much every last bit of text data on the existing public web has found its way into the largest foundation models, which is why the big companies are all scrambling to get their hands on as much new material as they can.

By comparison, says Bernabé-Moreno, only 1% of enterprise data has been used to train AI models, largely because of the issues mentioned above. "There's a technical challenge, and there's a trust challenge. There are many, many reasons why this data, which represents the real value of the organisation, isn't there."

For the enterprise, bigger AI is not better AI

Bernabé-Moreno defines a foundation model as "an AI representation of your data, with a lot of features. One feature is that you can use this AI representation of your data and specialise it to solve particular tasks."

The model is thus both data and functionality; the two are inseparable, which, he says, "requires a new mindset" among IT leaders.

The data on which large multimodal models are trained is unknowable. They are also hugely expensive to train and operate. Other barriers to enterprise use include slow inference and a requirement for specialised infrastructure.

This is why many AI companies, including Meta, which produces one of the biggest LLMs, are also focused on the other end of the scale, making small, specialised, customisable models that can run on a laptop, or even a phone. Last week Meta released Llama 3.2 1B and 3B, two text-only models designed to run on edge devices.

IBM is on a similar trajectory. IBM's pitch for enterprise GenAI revolves around two pillars: small, customisable foundation models trained on controlled datasets, and open source licensing.

The company's Granite foundation models are all specialised for certain types of task, currently coding, language, time-series and geospatial. Most are in the 7-40 billion parameter range, although one, TTM, a pre-trained model for multivariate time-series forecasting, comes in at less than one million parameters. This has already been downloaded more than a million times from Hugging Face since it was released in May.

Among the latest models are the granite-geospatial-wxc-downscaling series, a set of fine-tuned geospatial foundation models based on Prithvi, a model pre-trained by NASA and IBM on 40 years of NASA's geospatial archives, including satellite images and weather and climate data. These, says IBM, allow researchers to predict meteorological events at fine granularity, and the same model can be customised for rainfall, flooding, vegetation, fire or other phenomena. Previously, an individual model would need to be pre-trained for each use case, a lengthy and expensive process requiring a lot of data, compute and energy.

"With these models that we've created with NASA, the pre-training has been done already," said Bernabé-Moreno. "For example, if you want to specialise the model to understand the extent of flooding in a particular area, you don't have to start from scratch. You just take the pre-trained model and you fine-tune it. Then you can specialise the same model for predicting wildfires, or flooding, or the above-ground biomass."

He continued: "The beauty of these foundational models is the ability to scale. You pre-train them once, then you can really use this model at scale. It has changed the way that enterprises can leverage AI."
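The economics he describes can be sketched in a few lines: one frozen, expensively pre-trained representation, reused across tasks by fitting only a tiny task-specific head. Everything below (the backbone, the training data, the tasks) is an invented stand-in, not the actual Prithvi models:

```python
# Toy illustration of "pre-train once, specialise many times".
# The backbone stands in for a frozen pre-trained foundation model;
# each fine_tune() call fits only a one-parameter task head on top.

def backbone(x):
    """Frozen pre-trained representation: the expensive part, trained
    once. Here just a fixed nonlinear feature of the input."""
    return x * x

def fine_tune(examples, lr=0.01, epochs=200):
    """Fit a small task head on frozen backbone features by SGD."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            f = backbone(x)          # the backbone is never updated
            w -= lr * (w * f - y) * f
    return lambda x: w * backbone(x)

# Two different "tasks" specialised from the same backbone.
flood_head = fine_tune([(1, 2), (2, 8), (3, 18)])    # learns y = 2x^2
fire_head = fine_tune([(1, 0.5), (2, 2), (3, 4.5)])  # learns y = 0.5x^2
```

The point mirrors the Prithvi example: the pre-training pass that produces backbone() happens once, while each specialisation adjusts only a handful of parameters, so it is cheap to repeat for flooding, wildfire or biomass.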

Open AI

IBM has also put its weight behind open AI (not to be confused with Sam Altman’s company), open sourcing weights, models and training data under the permissive Apache 2.0 licence.

As well as the usual open-source advantages of rapid innovation and bigger audiences, IBM hopes that openness and transparency will help enterprises overcome their reticence in combining their data with the foundation models. The company removes another potential hurdle by guaranteeing IP indemnity (contractual protection) for users of its foundation models.

"We are so certain about the data that went into the models that we indemnify if there's a copyright problem," said Bernabé-Moreno.

Continuing the open theme, the company is a prominent member, along with Oracle, Meta, Intel and Red Hat, plus NASA and educational and research establishments - more than 100 organisations in all - of the AI Alliance, a group that promotes the collaborative, open development of AI, and which, according to Bernabé-Moreno, is growing strongly.

"It's growing in part because many people think that AI innovation cannot be put in the hands of a few players," he said.

Asked whether OpenAI, Microsoft or Amazon were likely to swell the ranks of the AI Alliance further, he thought not. "They work in a different way for their own reasons, but we think, we really think that the future should be open."