Ilya Sutskever raises $1 billion for new safety-focused AI firm

The new venture aims to address concerns about the potential risks of advanced AI systems

Ilya Sutskever, a key figure in the development of AI, has raised $1 billion from investors for his new AI firm, Safe Superintelligence (SSI).

Sutskever, who previously served as the chief scientist at OpenAI, left the company in May following disagreements over its approach to AI safety.

Joining Sutskever in this endeavour are Daniel Gross, a former Apple executive, and Daniel Levy, another former OpenAI employee.

The company has established offices in Palo Alto and Tel Aviv.

SSI plans to use the funds to acquire computing power and to hire a highly trusted team of top researchers and engineers, according to a post on X.

Investors in SSI include prominent venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel.

NFDG, an investment partnership led by Nat Friedman and SSI's CEO Daniel Gross, also participated in the funding round.

AI safety has become a pressing concern as fears of rogue AI systems posing a threat to humanity have grown. A recent bill proposed in California aims to regulate AI companies, but it has faced opposition from industry giants like OpenAI and Google.

SSI's business model is designed to prioritise safety and security over short-term commercial gains.

In an interview with Reuters, Sutskever outlined his plans for the startup. He explained that SSI aims to address a critical challenge in AI development: ensuring that superintelligent systems remain safe and aligned with human values.

Sutskever believes that the development of such systems will profoundly shape the future of AI, and that SSI is well positioned to make a meaningful contribution.

"We've identified a mountain that's a bit different from what I was working [on]... once you climb to the top of this mountain, the paradigm will change... Everything we know about AI will change once again. At that point, the most important superintelligence safety work will take place." Sutskever said.

When asked about the possibility of releasing an AI system that is as smart as humans before achieving superintelligence, Sutskever highlighted the importance of safety and the need for careful consideration.

"I think the question is: Is it safe? Is it a force for good in the world? I think the world is going to change so much when we get to this point that to offer you the definitive plan of what we'll do is quite difficult," he noted.

"I can tell you the world will be a very different place. The way everybody in the broader world is thinking about what's happening in AI will be very different in ways that are difficult to comprehend."

To determine what constitutes safe AI, SSI plans to conduct extensive research and explore various approaches. Sutskever noted that the definition of safety may evolve as AI systems become more powerful.

He also highlighted the importance of understanding the scaling hypothesis and its implications for AI safety.

"Everyone just says 'scaling hypothesis'. Everyone neglects to ask, what are we scaling?"

He said the scaling formula used in current AI models may change, leading to increased system capabilities and heightened safety concerns.
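
For context, the "scaling hypothesis" is commonly summarised in the research literature as an empirical power law: a model's loss falls predictably as parameters, data, or compute are scaled up. The short Python sketch below is purely illustrative of that general idea; the function name and constants are invented for demonstration and do not describe SSI's methods or any specific model.

```python
# Illustrative sketch of the "scaling hypothesis" as it is usually stated in
# the research literature: loss follows an empirical power law in the scaled
# quantity (parameters, data, or compute). The constants here are invented
# for demonstration and do not come from SSI or any published result.

def power_law_loss(x: float, x_c: float = 1e12, alpha: float = 0.1) -> float:
    """Hypothetical loss predicted for a scaled quantity x (e.g. training tokens)."""
    return (x_c / x) ** alpha

if __name__ == "__main__":
    # Each tenfold increase in scale buys a roughly constant multiplicative
    # improvement in predicted loss, which is why the question of *what* is
    # being scaled matters as much as how much of it there is.
    for tokens in (1e9, 1e10, 1e11, 1e12):
        print(f"{tokens:.0e} tokens -> predicted loss {power_law_loss(tokens):.3f}")
```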

While SSI currently keeps its primary work proprietary, Sutskever expressed hope for future opportunities to open-source relevant superintelligence safety research. He believes that sharing knowledge and collaborating with other researchers is essential for addressing the challenges of AI safety.