At the sea’s edge: Bringing AI from the cloud to the ocean

Where AI starts giving back to the energy sector

Miles off the British coast, one company is merging data-hungry AI with lightweight edge computing.

Generative AI is all the rage today – who doesn’t want to offload busywork? – but traditional analytical AI is still enabling new ways of working in critical industries.

Offshore wind is “one of the clearest, if not the clearest, routes to reducing our carbon emissions from energy production,” says Joe Tidball, EVP of service innovation at Beam, which focuses on servicing wind farms.

But “clear” isn’t the same as “easy” or “cheap.” Like any sector operating in remote environments, offshore wind faces expensive challenges in terms of maintenance – which is where AI comes in.

The traditional way to operate offshore is to take “a very large boat with a lot of people on it, [and] a not-very-smart robot with essentially a camera: one person flying it, one person doing the speaking [to highlight maintenance issues].”

Sending out 50 people in a 75m-long boat, releasing tens of tons of CO2 every day, is neither easy nor cheap. Since 2016, Beam has been working to improve the process.

“We concentrate on autonomy, robotics, AI, 3D vision,” says VP of data and artificial intelligence Cate Seale.

“Basically, if we can parallelise out the process of operating and servicing and maintaining an offshore wind farm using autonomy, AI, and moving operations remotely, that's what significantly reduces the cost and that barrier to data acquisition – and, ultimately, the decisions that need to be made by whoever owns the infrastructure.”

One of the major draws of automation is allowing operators to supervise more robots at once, meaning fewer people and smaller vessels.

“If the robot’s autonomous, then you don't need someone to fly it,” Joe points out.

“We’re basically going through a list of all the problem spaces [and asking] ‘How do we use technology to move people, reduce the size of the boat, and make this far more efficient?’”

Handling hallucinations

In offshore work, getting people to the site isn’t even half the battle; there’s plenty still to do in operating, servicing and maintaining the turbines themselves.

Using remotely operated underwater vehicles (ROVs), Beam’s computer vision system SubSLAM X3 maps a site to produce a digital twin – accurate down to the “millimetric” level – to speed the process.

“You can actually see, ‘Oh, there was a starfish there last year, and it's not here this year.’ It's that level of detail,” says Joe.

That translates into the ability to track changes over time, like a crack developing. Is it just part of the concrete settling, or is it spreading and turning into something more serious? And can we, as humans, even see it?

That last question points to a foible of AI. Cate explains:

“Let's say you're using synthetic data to train a machine learning model. You might use generative AI to render different characteristics of the scene...

“What you don't want to do is [let] that generative AI change the meaning of those pixels in that image: from, let's say, damage or peeling paint – or a crack in a turbine. That's a catastrophic thing that you need to fix right away, or has potential to be.

“If your gen-AI was like, ‘Oh, that looks unusual, that's a rare thing to see. I'm just going to smooth that out, it's probably just a mistake,’ you need to be able to catch that.”
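One way to guard against that failure mode is to gate every synthetically augmented training image on a check that pixels inside annotated damage regions were left intact. This is a minimal sketch of that idea, not Beam’s actual pipeline: the `damage_preserved` helper, the toy “crack” image, and the tolerance value are all illustrative assumptions.

```python
import numpy as np

def damage_preserved(original, augmented, damage_mask, tol=10.0):
    """Check that augmentation left labeled damage pixels essentially unchanged.

    original, augmented: HxW (or HxWxC) uint8 arrays of the same shape.
    damage_mask: boolean HxW array marking annotated defect pixels.
    tol: maximum mean absolute difference allowed inside the mask.
    """
    diff = np.abs(original.astype(float) - augmented.astype(float))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)  # average over colour channels
    return float(diff[damage_mask].mean()) <= tol

# A generative augmenter that "smooths out" a rare crack fails this gate:
img = np.zeros((8, 8), dtype=np.uint8)
img[3, 2:6] = 200             # a bright toy "crack"
mask = img > 0                # the annotated damage region

aug_ok = img.copy()           # augmentation left the crack alone
aug_bad = np.zeros_like(img)  # augmentation erased the crack

print(damage_preserved(img, aug_ok, mask))   # True
print(damage_preserved(img, aug_bad, mask))  # False
```

Any augmented sample that fails the check would be discarded rather than fed to the model, so the training set can vary everything about a scene except the meaning of the damage pixels.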

Life at the edge

Imagery precise down to the millimetre generates terabytes of data. Processing and analysing that much information is a job ill-suited to edge computing at a remote site, but Beam found a workaround.

“To generate the 3D models generally does need all of that image data, to do the stitching and pull it all together,” says Joe, “but ultimately, once that model is generated, it represents far less data than the data that was used to create it.”

Beam “wipes away” data that isn’t needed after the inspection. Cate explains:

“The technology that's on our camera systems is converting [video data] into that 3D model. That 3D model is absolutely tiny in comparison to that big data bank and can be streamed back in real time, even in that low bandwidth environment underwater.”
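The data reduction Cate describes can be made concrete with back-of-envelope arithmetic. The figures below are illustrative assumptions, not Beam’s numbers: a plausible camera bitrate, inspection length, and finished-model size.

```python
# Back-of-envelope: raw inspection video vs. a streamed 3D model.
# All figures are illustrative assumptions, not Beam's actual numbers.
raw_bitrate_mbps = 50    # e.g. one high-resolution camera feed, in megabits/s
inspection_hours = 8

# Megabits/s -> megabytes/s -> total gigabytes for the inspection
raw_gb = raw_bitrate_mbps / 8 * 3600 * inspection_hours / 1000

mesh_mb = 500            # a dense but finished 3D model, in megabytes

print(f"raw video: ~{raw_gb:.0f} GB")
print(f"model vs raw: {mesh_mb / (raw_gb * 1000):.1%} of the data")
```

Under these assumptions a single day of footage runs to roughly 180 GB, while the finished model is a fraction of a percent of that; only the model has to cross the low-bandwidth underwater link.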

The company has even managed to get AI systems working at the network edge by using “a quite narrow model” that can do a specific task very efficiently and accurately.

“They’re very small, lightweight models that don't have billions of parameters like some of these large language models. That can be really valuable, because you can put that on a camera system underwater, and you can stream back the results rather than the data itself.”
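A toy sketch of that edge pattern, under stated assumptions: a trivial brightness threshold stands in for the narrow model (a real system would run a small CNN), and the labels, bounding-box format, and sizes are invented for illustration. The point is the shape of the loop: run inference per frame on the camera, then transmit only a compact, structured result instead of the pixels.

```python
import json
import numpy as np

def tiny_defect_detector(frame: np.ndarray) -> list[dict]:
    """Stand-in for a narrow, lightweight edge model: flags unusually
    bright regions as candidate defects and returns structured results."""
    ys, xs = np.where(frame > 200)
    if xs.size == 0:
        return []
    return [{"label": "candidate_defect",
             "bbox": [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())],
             "score": round(float(frame[ys, xs].mean() / 255), 2)}]

# Edge loop: per frame, send kilobytes of results instead of the raw image.
frame = np.zeros((480, 640), dtype=np.uint8)   # ~300 KB of pixels
frame[100:110, 200:260] = 230                  # synthetic bright "defect"
payload = json.dumps(tiny_defect_detector(frame))
print(payload)                                 # tens of bytes, not ~300 KB
```

Because only `payload` leaves the camera, the bandwidth cost per frame drops by several orders of magnitude, which is what makes real-time streaming workable over an underwater link.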

Focusing on results rather than raw data means insights reach Beam’s clients “much quicker,” speeding decision-making.

Analytical AI has taken a backseat to its generative cousin over the last two years, but Beam’s example proves there’s still life in the old sea dog.