It's agents, agents everywhere at the Google Cloud Summit

Autonomous AI agents are ‘the next phase of AI evolution’, says Google

Image: It's agents, agents everywhere at the Google Cloud Summit. Credit: Google.

Agentic AI was the big topic of conversation during the first day of the Google Cloud Summit in London on Wednesday.

Agentic AI implies moving beyond using LLMs to simply answer questions or generate content based on a prompt from a human being, to having agents utilise models, data sources and external tools to carry out tasks more-or-less autonomously.
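The pattern described above — a model deciding which tools to invoke, acting, and feeding results back until a task is done — can be sketched in a few lines. This is a deliberately minimal, hypothetical agent loop, not Google's implementation; the model call is stubbed out and the single tool is invented for the example.

```python
# Minimal, illustrative agent loop: the "model" is asked to pick a tool
# or finish, the tool result is appended to the context, and the cycle
# repeats. All names here are assumptions for the sketch.

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; routes on simple keywords."""
    if "42" in prompt:
        return "FINISH: the answer is 42"
    return "CALL lookup('meaning of life')"

TOOLS = {"lookup": lambda q: "42"}  # one invented external tool

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        decision = fake_llm(context)
        if decision.startswith("FINISH"):
            return decision.removeprefix("FINISH: ")
        # Parse "CALL tool('arg')" -- deliberately naive for the sketch
        name, arg = decision[5:].split("(", 1)
        result = TOOLS[name](arg.strip("')"))
        context += f"\n[tool {name} returned: {result}]"
    return "gave up"

print(run_agent("What is the meaning of life?"))
```

Real agent frameworks add structured tool schemas, error handling and guardrails around exactly this loop, but the model-decide/tool-act/feed-back cycle is the core of what distinguishes an agent from a plain prompt-and-response LLM.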

Where "traditional" AI/ML stops and agentic AI starts is not cut and dried (among people I asked, estimates of agentic AI's general availability varied from "several years away" to "it's here now"), but it would certainly seem to be the logical next step, given the way that tech generally evolves to join together previously standalone systems. It also implies a platform approach with secure connectivity and low latency, which fits well with Google's full-stack approach to AI, and which is no doubt why we heard so much about it.

Google Cloud VP of AI, Oliver Parker, described agentic AI as "the next phase of AI evolution."

Many organisations are already using different models for different discrete use cases, and the next phase is to use agents, or chains of agents, to link these use cases together so that multi-step tasks can be automated, Parker said.

"It's where you go from action to outcome," he added. "LLMs come to life through agents."

According to Parker there are six broad domains where Google is focusing its efforts in developing AI agents, based on its Gemini LLM: customer, employee, creative, data, code and security.

Customer service agents

Customer service is perhaps the most familiar area, as chatbots evolve to take on board new information during a conversation and gain the ability to take action on the user's behalf.

At what stage exactly a chatbot morphs into an agent is moot, but Sandy, loveholidays' current bot, can handle 500 different "intents" and answers 55% of customer enquiries without human intervention, at an annual saving of £3 million, said the travel site's CTO Mike Jones. By interfacing with Google Translate, Sandy is multilingual; it can also send information by text, a rudimentary use of tools that arguably makes it an agent. Whatever you want to call it, it's driving impressive efficiencies, he said.
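The "500 intents" model can be pictured as a routing table: classify the customer's message into one of a fixed set of intents, each with its own handler, and escalate to a human when nothing matches. The intents and keyword matching below are invented for illustration — loveholidays' actual implementation is not public, and a production bot would use a trained classifier rather than keywords.

```python
# Illustrative intent routing for a customer-service bot. Intents,
# keywords and replies are assumptions for the sketch, not Sandy's
# real configuration.

INTENT_HANDLERS = {
    "booking_status": lambda: "Your booking is confirmed.",
    "change_date":    lambda: "I can move your departure date - which date suits?",
    "refund":         lambda: "I've started a refund request for you.",
}

KEYWORDS = {  # crude stand-in for a trained intent classifier
    "status": "booking_status",
    "change": "change_date",
    "refund": "refund",
}

def route(message: str) -> str:
    """Match a customer message to an intent; escalate on no match."""
    for word, intent in KEYWORDS.items():
        if word in message.lower():
            return INTENT_HANDLERS[intent]()
    return "Let me pass you to a human agent."

print(route("Can I get a refund?"))
print(route("My flight is haunted"))
```

The step from chatbot to agent, in these terms, is letting a handler call external tools — a translation API, an SMS gateway, a booking system — rather than just returning canned text.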

"The feedback from customers is amazing," commented Jones during a press briefing. "Particularly from the older generation who can't believe they've just been helped by a robot."

Lloyds Bank, on the other hand, has paused its public rollout of customer-facing banking bots. As a regulated organisation charged with safeguarding customers' finances, the consequences of any unpredictable behaviour were considered too serious. "We want to make sure we're really confident that the guardrails around these new use cases are robust," said chief data and analytics officer Ranil Boteju (pictured, right).

Nevertheless, Lloyds is forging ahead with GenAI and basic agentic AI in other areas including application transformations, threat intelligence and in the back office.

The bank takes a three-bucket risk-based approach. In the first category, low-risk AI-based automation of the back office is subject to much less stringent oversight than "human in the loop" use cases such as code conversion, where checkpoints are built in. Finally, in the third category are those use cases that are subject to regulation, including customer-facing automated banking bots, which are now being developed in the background.

"We call them AI-as-a-channel where - and it's going to take us two or three horizons - it's almost a reimagining of what a banking experience could be," Boteju explained. "You've got a set of intelligent agents that can actually help you with all of your banking."

More agents

Creative Agent is one of 130 or so models that sit on Google's Vertex AI platform, promising to help marketing and design professionals create new images and storyboards. Meanwhile Pipet Code Agent is described as an "AI-powered code assistance tool", which doesn't sound particularly agentic, and there are security agents to "assist security operations by radically increasing the speed of investigations".

On the data side, Google announced three upcoming data agents for three types of user: business users (to be released as an API later this month), data analysts and data engineers. Up to 90% of a data scientist's work is data wrangling, said Yasmeen Ahmad, product executive for data, analytics and AI at Google Cloud, and this is one area where agents could make a big difference: obfuscating sensitive data, augmenting data sets, checking data pipelines, identifying bad data, and so on. These are repetitive tasks that could, and probably should, be automated.
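Two of the tasks Ahmad lists — identifying bad data and obfuscating sensitive fields — are exactly the kind of rule-driven checks an agent could run unattended. A minimal sketch, with column names and validation rules assumed purely for the demo:

```python
# Sketch of an automated data-wrangling pass: reject rows with missing
# or implausible values, and mask a sensitive field in the rest.
# The "age"/"email" schema and the 0-120 rule are assumptions.

import re

def check_and_mask(rows):
    """Split rows into clean (with emails masked) and rejected."""
    clean, rejected = [], []
    for row in rows:
        age = row.get("age")
        if age is None or not (0 <= age <= 120):
            rejected.append(row)  # bad data: flag for human review
            continue
        masked = dict(row)
        # Mask the local part of the address, keep the domain
        masked["email"] = re.sub(r"[^@]+", "***", row["email"], count=1)
        clean.append(masked)
    return clean, rejected

rows = [
    {"age": 34, "email": "alice@example.com"},
    {"age": None, "email": "bob@example.com"},   # missing age
    {"age": 200, "email": "carol@example.com"},  # implausible age
]
clean, rejected = check_and_mask(rows)
print(clean)
print(len(rejected))
```

An agentic version of this would go further — inferring the rules from the data, deciding which columns are sensitive, and repairing rather than merely rejecting — but the repetitive core it automates looks like this.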

"Data wrangling is hard and complicated," she said. "That's why data engineering is a place we are focusing on because we think that agents could have a huge impact."

The data piece really needs to be in place before agentic AI can become a reality, Ahmad added.

"What's critical is these agents will not be able to operate on today's messy data ecosystems. For an agent to be able to [work] end-to-end, it needs to be able to do that in real time."