Here’s what the CEO of ChatGPT maker OpenAI has to say about the “dangers of AI”.

ChatGPT, developed by US company OpenAI, has seen several upgrades since its launch. The AI capabilities of the underlying technology have matured to the point that questions have arisen about whether AI could replace jobs or be used to spread misinformation. Sam Altman, CEO of OpenAI, has now openly said that he is “a little scared” of his company’s invention, but remains positive about the good it can do. Speaking to ABC News, Altman said he believes AI technology carries real dangers, but also “may be the greatest technology humanity has come up with” to drastically improve people’s lives.

“We have to be careful here. I think people should be glad we’re a little scared of that,” Altman was quoted as saying. He said that if he wasn’t scared, “you should either not trust me or be very unhappy that I’m in this job.”

Altman said AI will likely replace some jobs in the near future and that he is concerned about how quickly that could happen. However, he also pointed to the upside: the technology will improve our lives.

“I think humanity has proven over a couple of generations that it’s wonderful at adapting to major technological changes,” Altman said, adding, “But if that happens in a single-digit number of years, some of those changes... that’s the part I worry about the most.”

“It will eliminate a lot of current jobs, that’s true. We can create much better ones. The reason to develop AI at all, in terms of the impact on our lives, the improvement of our lives and the upside: this will be the greatest technology humanity has yet developed,” Altman noted.

He also encouraged people to use ChatGPT as a tool rather than a replacement, and discussed the positive impact AI could have on education.

“We can all have an incredible educator in our pocket, tailored just for us and helping us learn. Education needs to change,” he said.

AI and misinformation

For Altman, a persistent problem with AI language models like ChatGPT is misinformation. He said the program can give users factually inaccurate information.

“The thing I’m most trying to warn people about is what we call the ‘hallucination problem.’ The model will confidently present things as if they were facts that are entirely made up,” he said, adding that GPT-4, the latest language model, is more powerful than the version that originally powered ChatGPT.

“The proper way to think about the models we create is as a reasoning engine, not a fact database,” Altman said.

“They can also act as a fact bank, but that’s not really what makes them special – we want them to do something that’s more like the ability to reason than memorize,” he added.

The company’s top executive noted that the technology itself is incredibly powerful and potentially dangerous.
