Harnessing the 'irrational exuberance' around AI - CNCF's Priyanka Sharma
CNCF chief on meeting the sky-high expectations of genAI
In her opening keynote at KubeCon + CloudNativeCon Europe in Paris, Priyanka Sharma, executive director of the Cloud Native Computing Foundation (CNCF), spoke of AI being in an "irrational exuberance" phase, meaning that expectations are moving well ahead of delivery.
That is not to say that irrational exuberance, a term associated with stock market bubbles, will necessarily lead to a crash; in fact, most rapidly growing technologies go through a similar phase. But it is up to its proponents to ensure that reality catches up.
However, for many organisations, AI experiments have already translated into ballooning infrastructure costs and added stress for those implementing it, often platform engineers who are required to step in when things get out of hand. Put simply, prototypes are easy, scaling to production is hard.
We've been here before, Sharma said, when cloud computing was all the rage. "As you may recall ... the irrational exuberance of a new type of user experience leads to rapid prototyping but that didn't flatten the bills."
CNCF was formed 10 years ago to support and create standards around the then-new Kubernetes ecosystem, among them the Open Container Initiative, a period that has many parallels with the AI/ML world today. Just as standards were needed to allow the deployment of containerised applications across platforms, so the AI landscape requires commonalities for interoperability and consistency in how workloads are built, run and deployed.
In another lesson from history, just as developer and operations roles had to come together to form DevOps, so infrastructure engineers, data scientists and AI model developers need to understand each other's needs better, said Sharma, predicting increased cross-pollination between the cloud native and AI worlds.
Kubernetes as standard
Kubernetes is now the de facto way of deploying large-scale enterprise apps, so it's no surprise that it has also become a standard for developing and deploying AI models in the enterprise, helped by the fact it can run on anything from a single laptop to a huge GPU cluster. As such, the cloud native ecosystem can support development from prototyping to production.
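By way of illustration, here is a minimal sketch of that laptop-to-GPU-cluster portability, using the official Kubernetes Python client. The container image name is a hypothetical placeholder, and the GPU request assumes a cluster with the NVIDIA device plugin installed; this is an example of the general pattern, not a prescription from the keynote.

```python
# Minimal sketch: the same Deployment spec can target a laptop cluster
# or a GPU cluster, changing only the resource requests.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig, laptop or cluster

container = client.V1Container(
    name="model-server",
    image="example.com/model-server:latest",  # hypothetical model-serving image
    resources=client.V1ResourceRequirements(
        # On a laptop cluster, drop the GPU line; on a GPU cluster, the
        # NVIDIA device plugin (assumed here) exposes nvidia.com/gpu.
        limits={"cpu": "2", "memory": "4Gi", "nvidia.com/gpu": "1"},
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="model-server"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Submit the Deployment to whichever cluster the kubeconfig points at.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```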
"Kubernetes is having its Linux moment," said Sharma, referring to the fact that Kubernetes support is now assumed. "Kubernetes is AI's engine room. Cloud native provides the underpinning on which AI dreams will be built".
But the practicalities of this underpinning are not well understood, and indeed are still a work in progress.
CNCF's AI working group has just released a white paper outlining how dozens of CNCF projects can work together to support AI, and how to address the remaining complexities.
One of these complexities, from a flexibility point of view, is the presence of walled gardens. Currently, developers seeking to scale up their experiments are "stymied by the problems of proprietary cloud-based solutions," Sharma said.
"Proprietary or less open options can be faster to adopt sure, but there's a cost. You're getting into an opinionated solution that offers less configuration and interoperability."
Organisations can become, wittingly or unwittingly, locked into proprietary systems, which could harm them in the long run, she told Computing.
"If you're using a proprietary model, you don't know how it's built, what is the provenance. It may also be taking your data, are you comfortable with that?"
They can get locked into the value-added services, and also into the infrastructure, particularly in view of the GPU scarcity that has centralised power in a few hands.
The answer is to open up more open source paths through the AI space, from the infrastructure layer through to the models and the data layers, which are often proprietary. Specifically, she called for permissive licensing, such as Apache 2 or MIT, for the tooling, to allow organisations to create AI solutions that are right for them.
To this end, the CNCF AI working group has created a reference architecture featuring open source projects to help AI projects scale, in the same way that Kubernetes facilitates the deployment of large distributed applications and containers allow applications to be conveniently packaged.
All open?
But should the models themselves be open source? This is more nuanced, she told Computing. If you're trusting something with your data you should be able to understand how it's utilising it. But an AI model has many elements, including the training data, the weights and the model framework itself.
"Some folks say all of that needs to be made open so that you can really play with the model and make it your own. But then if you share the entire data set the entire details of a very complex, very large model, then the average person is not going to have the resources to recreate that model, and all you're doing is helping your competitors who are these big organisations that have that kind of GPU power."
There are many different facets to open, not all of them open source, but the big picture is about coming together in the spirit of transparency and cooperation, said Sharma.
"Cloud native isn't just connecting AI development with AI infrastructure, it's facilitating the connection between services teams and people. Our projects and ecosystem allow folks to focus on their areas of expertise and provide a trusted interface on which we can all act."
She expressed a hope that CNCF members would step up to fill other gaps in the AI picture, including areas such as ethical AI, AI safety and addressing models' massive carbon footprint.
"CNCF knows how to do the big things," she said.