Open source alternatives to ChatGPT could make AI more dangerous
According to a British AI pioneer, open-source large language models (LLMs) could make artificial intelligence more dangerous. Geoffrey Hinton, dubbed the “godfather of AI” for his pioneering work on neural networks, believes the technology is more likely to be exploited when its code is freely available online.
In recent months, large language models have been developed by AI labs around the world (Photo by Tada Images/Shutterstock).
Large language models such as OpenAI’s GPT-4 or Google’s PaLM form the basis for generative AI systems such as ChatGPT, which have enjoyed rapid adoption by businesses and consumers in recent months. The ability of these tools to automatically generate detailed images and text in seconds could be a game-changer for many industries, but due to the closed nature of AI models — and the high development costs involved — they can be expensive to access.
Many argue that open-source LLMs can provide a more cost-effective alternative, especially for small businesses that want to harness the power of AI and leverage tools like ChatGPT.
The problem with open source LLMs
But Hinton, who says he quit his job at Google last month to freely voice his concerns about AI development, believes the growing open-source LLM movement could be problematic.
Speaking at the Cambridge University Centre for the Study of Existential Risk on Thursday night, Hinton said: “The danger of open source is that it allows more crazy people to do crazy things with [AI].”
He said he believes that keeping LLMs confined to the labs of companies like OpenAI could ultimately prove beneficial. “If these things get dangerous, it might be better for a few big companies — preferably in several different countries — to develop these things while also finding ways to keep them under control.”
“Once you open source everything, people will start doing all kinds of crazy things with it. It would be a very quick way to find out how [AI] can go wrong.”
Hinton used his presentation to reiterate his belief that the point at which the capabilities of a so-called super-intelligent AI will surpass human intelligence is not far off, saying that he believes GPT-4 is already showing signs of intelligence. “These things are going to get smarter than us, and that could happen soon,” he said. “I used to think it would be another 50 to 100 years, but now I think it’s closer to 5 to 20. And if it happens in five years, we can’t just leave it up to the philosophers to decide what we do. For that, we need people with practical experience.”
He added: “I wish I had a simple answer [for how to handle AI]. I suppose the companies developing it should be forced to do a lot of work to verify the safety of [AI models] as they develop them. We need to gain experience with these things, how they try to escape and how to control them.”
On Friday, DeepMind, the AI lab of Hinton’s former employer Google, announced that it had developed an early warning system to identify potential risks from AI.
How companies can benefit from open source LLMs
Open-source LLMs are relatively plentiful online, especially since the source code of Meta’s LLM, LLaMA, was leaked online in March.
Software vendors are also trying to capitalize on companies’ growing desire for installable, targeted, and customizable LLMs that can be trained on enterprise data. In April, Databricks released an LLM called Dolly 2.0, which it described as the first open-source, instruction-following LLM licensed for commercial use. According to Databricks, it has ChatGPT-like functionality and can run internally on a company’s own infrastructure.
Proponents of open-source models say they have the potential to democratize access to AI systems like ChatGPT. Speaking to Tech Monitor earlier this month, software developer Keerthana Gopalakrishnan, who works with open-source models, said, “I think it’s important to lower the barrier to entry for experimentation.” She added, “There are a lot of people who are interested in this technology and really want to innovate.”