There is no artificial intelligence
Nobody is more adept at selling the future than the tech industry. According to its proponents, we will all live in the “metaverse,” build our financial infrastructure on “web3” and power our lives with “artificial intelligence.” All three terms are mirages that have raked in billions of dollars despite repeatedly colliding with reality.
Artificial intelligence in particular conjures the notion of the thinking machine. But no machine can think, and no software is truly intelligent. The phrase may be one of the most successful marketing terms of all time.
Last week, OpenAI announced GPT-4, a major upgrade to the technology underlying ChatGPT. The system sounds even more human than its predecessor, which naturally reinforces the impression of intelligence. But GPT-4 and other large language models like it are simply mirroring databases of text – close to a trillion words for the previous model – whose scale is hard to fathom. Helped along by an army of humans correcting its mistakes, the models stitch words together based on probability. That is not intelligence.
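To make that point concrete, here is a deliberately crude sketch of word-stitching by probability. It is a toy illustration only – GPT-4 uses neural networks over long contexts, not a simple lookup table, and the corpus and function names here are invented for the example – but it shows how a program can produce plausible word sequences with no understanding at all:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the near-trillion words of training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which. This table is the model's entire "knowledge".
# The corpus is wrapped around so every word has at least one successor.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a plausible-sounding sequence: probability, not comprehension.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The output reads like grammatical English because the statistics of the training text are grammatical English – the program has no idea what a cat or a mat is.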
These systems are trained to generate text that sounds plausible, yet they are marketed as new oracles of knowledge that can be plugged into search engines. That is foolhardy while GPT-4 continues to make errors, and it was only a few weeks ago that Microsoft Corp. and Alphabet Inc.’s Google both suffered embarrassing demos in which their new search engines got facts wrong.
It doesn’t help that terms like “neural networks” and “deep learning” only bolster the idea that these programs are humanlike. Neural networks are in no way copies of the human brain; they are only loosely inspired by how it works. Long-running efforts to replicate the human brain, with its roughly 85 billion neurons, have all failed. The closest scientists have come is emulating the brain of a worm, with 302 neurons.
We need a different lexicon that doesn’t promote magical thinking about computer systems and doesn’t absolve the people who design those systems of their responsibilities. What’s a better alternative? Sensible technologists have been trying for years to replace “AI” with “machine learning systems,” but it doesn’t quite roll off the tongue.
Stefano Quintarelli, a former Italian politician and technologist, has coined another alternative, “Systemic Approaches to Learning Algorithms and Machine Inferences,” or SALAMI, to underscore the ridiculousness of the questions people ask about AI: Is SALAMI sentient? Will SALAMI ever surpass humans?
The most hopeless-sounding semantic alternative is probably also the most accurate: “software.”
“But,” I hear you ask, “what’s wrong with using a little metaphorical shorthand to describe technology that seems so magical?”
The answer is that ascribing intelligence to machines grants them undeserved independence from humans and absolves their creators of responsibility for their impact. If we see ChatGPT as “smart,” then we’re less inclined to hold San Francisco startup OpenAI LP, its creator, accountable for its inaccuracies and biases. It also breeds fatalism among people suffering from technology’s harmful effects; “AI” won’t take your job or plagiarize your artistic creations – other people will.
The problem is becoming more pressing now as companies from Meta Platforms Inc. to Snap Inc. to Morgan Stanley are rushing to integrate chatbots and text and image generators into their systems. Spurred on by the new arms race with Google, Microsoft is building OpenAI’s largely untested language model technology into its most popular business applications, including Word, Outlook and Excel. “Copilot will fundamentally change how humans work with AI and how AI works with humans,” Microsoft said of its new feature.
But for customers, the promise of working with intelligent machines borders on misleading. “[AI is] one of those labels that expresses a kind of utopian hope rather than present reality, somewhat as the rise of the term ‘smart weapons’ during the first Gulf War implied a bloodless vision of totally precise targeting that still isn’t possible,” says Steven Poole, author of Unspeak, a book about the dangerous power of words and labels.
Margaret Mitchell, a computer scientist who was fired by Google after publishing a paper criticizing the biases in large language models, has grudgingly described her own work as “AI” in recent years. “Before… people like me said we worked on ‘machine learning.’ It’s a great way to make people’s eyes glaze over,” she admitted on a conference panel on Friday.
Her former Google colleague and founder of the Distributed Artificial Intelligence Research Institute, Timnit Gebru, said she also didn’t start saying “AI” until around 2013: “It became the thing to say.”
“It’s awful, but I do it too,” Mitchell added. “I call everything I touch ‘AI’ because then people will listen to what I’m saying.”
Unfortunately, “AI” is so embedded in our vocabulary that it will be almost impossible to shake, and the obligatory scare quotes are hard to sustain. At the very least, we should remember how dependent such systems are on the human managers who should be held accountable for their side effects.
Author Poole says he prefers to refer to chatbots like ChatGPT and image generators like Midjourney as “giant plagiarism machines” because they mainly recombine prose and images originally created by humans. “I’m not confident that it will catch on,” he says.
We’re really stuck with “AI” in more ways than one.
This column does not necessarily represent the opinion of the editors or of Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for The Wall Street Journal and Forbes, she is the author of We Are Anonymous.