Genuinely intelligent AI will be unpredictable, warns former OpenAI chief scientist

AI will need to adapt as reliable pre-training data runs dry and reasoning agent models take over, says Sutskever

Image: AI will become less predictable as it becomes capable of reasoning

Artificial intelligence capable of genuine reasoning will be far less predictable, warns OpenAI’s former chief scientist, who also argues that we have reached “peak data.”

Ilya Sutskever argued that pre-training AI models on huge amounts of data, largely scraped from the internet, is rapidly approaching a ceiling, mainly because of the physical limits of freely available data.

“Pre-training as we know it will unquestionably end. Why? Because while compute is growing through better hardware, better algorithms, and large clusters… the data is not growing because we have but one internet, and you could even say that [pre-training] data is the fossil fuel of AI: it was created, now we use it and now we’ve run out,” Sutskever told the Conference on Neural Information Processing Systems (NeurIPS) on Friday.

What comes next, he suggested, are agents supported by “synthetic data”, leading to what he labelled “superintelligence”, which will be qualitatively different from current AI models.

The current crop of AI models and chatbots is both brilliant and a bit stupid, he continued. “Sooner or later the following will be achieved: superintelligence will be ‘agentic’ in real ways; they will actually reason… [but with] a system that reasons, the more it reasons the more unpredictable it becomes. All the deep learning we’ve become used to is very predictable…”

Today’s predictable deep learning, he suggested, is more akin to human intuition, which itself is largely based on ingested data of all types.

He continued: “We will have to deal with AI systems that are incredibly unpredictable. They will understand things from limited data; they will not get confused – I’m not saying how or when, I’m saying that it will.”

Moreover, he said, AI will develop a form of self-awareness. “When all this comes together, we’ll have systems of radically different properties and qualities compared to today.”

But, like humanity, they will “have issues” – something he chose to leave to people’s imaginations.

Sutskever was part of the team that, ten years ago, developed an AI based on an auto-regressive neural network model. The auto-regressive hypothesis holds that if an AI is “able to predict the next token well enough, then it will grab, capture and grasp the correct distribution over whatever sequences come next”.

While it wasn’t the first such auto-regressive model, the researchers believed that it could form the basis for a workable AI – which it did with the development of ChatGPT.

Ultimately, that work rested on a simple bet: train a large neural network – hence the massive amounts of compute power – on a large enough dataset, and “then success is guaranteed”.
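
That next-token recipe is simple enough to sketch in a few lines. What follows is a hypothetical, toy illustration of auto-regression – a bigram counter standing in for the large neural network, with names like `sample_next` and `generate` invented for the example – not the team’s actual model:

```python
import random
from collections import Counter, defaultdict

# Toy illustration of the auto-regressive idea: learn the conditional
# distribution P(next token | current token) from data, then generate
# by repeatedly sampling a next token and feeding it back in as input.
# (Hypothetical example: real LLMs condition a large neural network on
# a long context window; this bigram counter is the simplest stand-in.)
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def sample_next(token):
    """Sample the next token from the learned conditional distribution."""
    counts = transitions[token]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start, length=10):
    """Auto-regressive generation: each sampled token becomes the new input."""
    out = [start]
    for _ in range(length):
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat sat on"
```

However crude, the sketch shows the core loop: predict a distribution over the next token, sample from it, append, repeat. Sutskever’s bet was that scaling up the predictor and the data would make that same loop capture the real distribution over sequences.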

Sutskever parted company with OpenAI earlier this year, founding his own company, Safe Superintelligence Inc., which has a stated mission of putting safety at the heart of the race towards AGI.

You can listen to the full talk here.