Sam Altman, CEO of OpenAI, warns of the dangers of AI and admits that ChatGPT will eliminate jobs
American entrepreneur and CEO of the company OpenAI, Sam Altman, has warned about the danger of artificial intelligence (AI), saying that it poses real threats that will transform society.
Altman, whose company developed the much-talked-about AI chatbot ChatGPT, stressed the need for regulators and society to actively engage with the technology to prevent potentially negative consequences for humanity.
He expressed concern that as AI technology advances, it could be used for large-scale disinformation.
In his words: “We have to be careful here. I think people should be happy that we’re a little afraid of it. I am particularly concerned that these models could be used for large scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyber attacks.”
However, he noted that despite the dangers it could pose, AI could be the greatest technology humankind has yet devised. Altman’s warning comes after OpenAI released GPT-4, the latest version of its AI language model, less than four months after ChatGPT launched and became the fastest-growing consumer application in history.
Speaking in an interview, he explained that while the new version isn’t perfect, it scored in the 90th percentile on the US bar exam and achieved a near-perfect score on the high school SAT math test. It can also write computer code in most programming languages. He added that the large multimodal model can solve difficult problems with greater accuracy thanks to its broader general knowledge and problem-solving skills.
On the danger of artificial intelligence, Elon Musk, CEO of Tesla and Twitter, has likewise issued repeated warnings. In 2018, during a speech at a technology conference, Musk stated that AI, and artificial general intelligence (AGI) in particular, was more dangerous than a nuclear weapon, and argued that there must be a regulator to oversee the development of superintelligence.
Musk worries that the development of AI will outstrip human ability to safely manage it. “There is no regulatory oversight of AI, which is a big problem. I’ve been calling for an AI safety regulation for over a decade!” Musk tweeted last December. He also expressed concern that Microsoft, which hosts ChatGPT on its Bing search engine, has disbanded its ethical oversight department.
“Compared to AI, progress with Neuralink will be slow and easy to assess as there is a large regulatory body approving medical devices. There is no regulatory oversight of AI, which is a *big* problem. I’ve been calling for an AI safety regulation for over a decade!” — Elon Musk (@elonmusk), December 1, 2022
As AI accelerates and becomes more widespread, the voices warning of the potential dangers of artificial intelligence are getting louder. The tech community has long debated the threats posed by artificial intelligence. Job automation, the spread of fake news, and a dangerous arms race in AI-powered weapons have been cited as some of the biggest threats posed by AI.
ChatGPT will eliminate jobs
Sam Altman has also admitted that ChatGPT could take many jobs off the market. He said so in an interview with ABC News on Thursday, where he also admitted he was “a little scared” of the AI-powered chatbot.
ChatGPT has become a darling of the corporate world since its launch late last year, with companies like Microsoft incorporating the AI language model into some of their services. This is due to the efficiency of AI in providing human-like responses to queries.
With its ability to solve complex tests and write code and essays, ChatGPT has found widespread adoption, reaching more than 100 million users within three months of its launch.
Earlier this week, OpenAI announced the release of GPT-4, which it says exhibits human-level performance on various professional and academic benchmarks. The company said the improved version can solve difficult problems with greater accuracy – a claim backed by many users.
“GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style,” the company said.
GPT-4’s improved performance has raised concerns that ChatGPT will eliminate many jobs. But Altman said that while the chatbot could replace many jobs, it could also lead to “much better ones.”
“The reason to develop AI in the first place, in terms of impact on our life and improvement of our life and benefit, it will be the greatest technology that mankind has developed so far,” he said.
GPT-4 outperforms ChatGPT, scoring in higher percentiles among test takers, according to OpenAI.
Altman said Tuesday that it passes the bar exam and is able to get “a 5 on multiple AP exams.”
The OpenAI executive isn’t the only one who has voiced fears about artificial intelligence’s capabilities. Elon Musk, CEO of Tesla and SpaceX, who also co-founded OpenAI, has warned that it is one of the biggest threats to civilization and urged the government to step in and regulate it.
Altman told ABC that he is in regular contact with government officials, adding that regulators and society should be involved in ChatGPT’s rollout. It is hoped that government involvement will help address concerns arising from its use.
In several tweets over the past month, the 37-year-old has called for regulation. He said society needs time to adjust to something this big and warned the world might not be “that far from potentially scary” artificial intelligence.