Chief scientist and superalignment lead Ilya Sutskever parts ways with OpenAI
Superalignment co-lead Jan Leike follows hours later
The loss of its leaders deals a serious blow to superalignment, and the lack of transparency is raising concerns and fueling speculation of an exodus from OpenAI
It's been a busy 48 hours for OpenAI. On Monday, the company showcased its newest LLM, GPT-4o, in a carefully choreographed video stream from its San Francisco headquarters. On Tuesday afternoon, Ilya Sutskever, OpenAI co-founder and chief scientist, resigned. He tweeted:
"I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build A.G.I. that is both safe and beneficial." A.G.I., or artificial general intelligence, is an as-yet-unbuilt technology that can do anything the brain can do."
In a blog post, Sam Altman said:
"Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less important.
"OpenAI would not be what it is without him. Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together. I am happy that for so long I got to be close to such genuinely remarkable genius, and someone so focused on getting to the best future for humanity."
It's the "best future for humanity" part of that statement that is troubling people. Sutskever's departure, which on the face of it looks very civilised, has raised concerns for two reasons.
Firstly, Sutskever, along with several other board members, was instrumental in forcing out Sam Altman as CEO last year. The coup was short-lived: after a few days Sutskever went public saying he regretted his actions, and he threatened to resign if Altman wasn't reinstated.
The reasons for Altman's removal and reinstatement have never been made public, and although Sutskever remained an employee of OpenAI, he didn't return to work.
The second reason was Sutskever's leading role in OpenAI's superalignment project. Superalignment is (or possibly was) an attempt to solve the problem troubling so many of us: how to ensure that artificial general intelligence (AGI), when it does arrive, follows human intent. How do we humans control technology that is a lot cleverer than we are?
The fact that humanity doesn't yet have a way of controlling AGI hasn't stopped the likes of OpenAI going all out to develop it, but the superalignment project was allocated one fifth of OpenAI's computing resources to try to close that gap.
OpenAI also launched a $10 million grant programme to support research on superalignment, and plans to host an academic conference on the project next year to share and promote the work funded by the programme.
In addition to Sutskever's departure, his co-lead on the superalignment project, Jan Leike, announced early this morning that he had also resigned.
Some expect further resignations from both the superalignment team and the wider OpenAI executive team. Jakub Pachocki will take the role of chief scientist.