‘We’re at the start of the hockey stick’: The difference between open AI and OpenAI
And why no AI company has achieved open source’s goals
AI drove every conversation at State of OpenCon 2025.
It's a rich time to be in open source. The movement, which aims to promote open, transparent software and practices, has enjoyed a surge in interest with the release of what some call the first open AI model (though the definition is debated), DeepSeek - and so, predictably, many of the conversations at this year's State of OpenCon conference revolved around artificial intelligence.
Just a few days before the conference opened, OpenAI’s Sam Altman said publicly that the company has “been on the wrong side of history” when it comes to open source, possibly signalling a change in approach even among the big players – ironic, given that OpenAI is infamous for its move towards a closed model. The mood at the event was optimistic.
But several warning notes were sounded throughout the day. The first came from Guy Podjarny, founder of Snyk and Tessl, who stressed that “AI is not software.”
“It is made up of software but it’s also data, weights, methodologies. When we talk about open source AI, we need to go back and ask what we want from it.”
To stay true to the open source model, open AI (not the company – a misunderstanding that cropped up many times throughout the day) must prioritise trust and transparency, a far cry from most of today’s proprietary models.
On top of that, Guy said, open source AI needs to be independent of its developers – with the ability to fork – and interested parties need to be able to contribute to its development. No model today, even DeepSeek, exhibits all these properties.
“Freeware models like Llama and DeepSeek are great, but they’re not open. You can download them, but have to reverse engineer them to understand how they work.”
Martin Woodward, VP of developer relations at GitHub, opined that we’re “at the beginning of the hockey stick of [AI] evolution, and we don’t know if it will spike or level out.”

There’s a tendency today to think of AI as an ecosystem like the cloud, but Martin warned against that view. Technologists and companies need to embrace flexibility with their model choice:
"As IT people we’re always looking for the best, but that’s not how [AI models] work. They work comparably well for a lot of the tasks but have different personalities based on their training data. Often, model choice is determined by trade-offs in the time you want to spend and the accuracy you need. You need to be very, very careful [about paying for a model].”
DeepSeek looms large
DeepSeek dominated State of OpenCon, with almost every speaker at least referencing the Chinese firecracker.
Guy said the model’s development, which famously used Nvidia’s stripped-down H800 GPUs, was “a bit of an ironic backfiring” of US restrictions on AI chip exports to China.
As well as the cost of its development, DeepSeek has attracted coverage because of accusations of data theft – although, as Guy noted, “You don’t need to be very sophisticated to run some basic .NET queries and extract information.”
The accusations have, however, spurred important questions about data ownership and licences, which are “not fit for purpose.” For example, who owns an application produced using code from an AI model? Is it the company that entered the prompt or the one that originally developed the model?
More collaboration is needed between the open source community and governments, as well as standards bodies, before open AI is really ready for the mainstream. Handily, a panel later in the day touched on just that topic – stay tuned to Computing for our coverage.