Opinion | What is the worst-case AI scenario? Human extinction.

(Washington Post illustration/Images by Getty Images/iStockphoto)


Émile P. Torres is a philosopher and historian of global catastrophe risk.

Humans are bad at predicting the future. Where are our flying cars? Why aren’t there robot butlers? And why can’t I vacation on Mars?

But we weren’t just wrong about the things we hoped would come to pass. Humanity also has a long history of confidently insisting that now-inescapable realities would never arrive. The day before Leo Szilard conceived of the nuclear chain reaction in 1933, the great physicist Ernest Rutherford proclaimed that anyone promoting nuclear power was “talking moonshine.” Even computer industry pioneer Ken Olsen reportedly said in 1977 that he saw no reason private individuals would ever want a computer in their homes.

Obviously, we live in a nuclear world, and you probably have a computer or two within reach right now. In fact, it is these computers — and the exponential advances in computing generally — that are the subject of some of society’s most consequential predictions today. The conventional expectation is that ever-increasing computing power will be a boon to humanity. But what if we’re wrong again? Could artificial superintelligence instead do us great harm? Even drive us to extinction?

As history teaches, never say never.

It seems only a matter of time before computers become smarter than humans. This is a prediction we can be fairly confident about — because we’re already seeing it happen. Many systems have achieved superhuman ability at specific tasks, such as Scrabble, chess and poker, where humans now routinely lose to the best bots.

But advances in computing will yield systems with increasingly general levels of intelligence: algorithms capable of solving complex problems across multiple domains. Imagine a single algorithm that could beat a chess grandmaster, write a novel, compose a catchy tune and drive a car through city traffic.

According to a 2014 survey of experts, there is a 50 percent chance of achieving “human-level machine intelligence” by 2050 and a 90 percent chance by 2075. Another study, by the Global Catastrophic Risk Institute, found at least 72 projects around the world with the explicit goal of creating an artificial general intelligence — the stepping stone to artificial superintelligence (ASI), which would not merely perform as well as humans in every area of interest but far exceed our best abilities.

The success of any one of these projects would be the most significant event in human history. Suddenly, our species would share the planet with something smarter than we are. The benefits are easy to imagine: An ASI could help cure diseases such as cancer and Alzheimer’s, or clean up the environment.

But the arguments for why an ASI could destroy us are also strong.

Surely no research organization would design a malicious, Terminator-style ASI bent on destroying humanity, right? Unfortunately, that’s not the concern. If we are all wiped out by an ASI, it will almost certainly be by accident.

Because the cognitive architectures of ASIs could be fundamentally different from ours, they may be the most unpredictable things in our future. Consider the AIs that are already beating humans at games: In 2018, an algorithm playing the Atari game Q*bert won by exploiting a loophole that no human player “is believed to have ever uncovered.” Another program became an expert at digital hide-and-seek thanks to a strategy that “researchers never saw … coming.”

If we cannot predict what algorithms will do while playing children’s games, how can we be confident about the actions of a machine whose problem-solving abilities far exceed humanity’s? What if we program an ASI to bring about world peace, and it hacks government systems to launch every nuclear weapon on the planet, reasoning that there can be no more war if no humans exist? Yes, we could explicitly program it not to do that. But what about its plan B?

Really, there are an endless number of ways an ASI could “solve” global problems with catastrophic consequences. For any given set of constraints on an ASI’s behavior, no matter how exhaustive, clever theorists — using their merely “human” intelligence — can often find ways for things to go very wrong; you can bet an ASI would come up with more.

And as for shutting down a destructive ASI: A sufficiently intelligent system would quickly realize that one sure way to never achieve its assigned goals is to cease to exist. Logic dictates that it would try everything to keep us from pulling the plug.

It’s unclear whether humanity will ever be prepared for superintelligence, but we’re certainly not ready now. With all our global instability and still-nascent grasp of the technology, adding ASI would be like lighting a match next to a fireworks factory. Research on artificial intelligence must slow down, or even pause. And if researchers won’t make that decision, governments should make it for them.

Some of these researchers have explicitly dismissed concerns that advanced artificial intelligence could be dangerous. And they might be right. It may turn out that such caution is unwarranted and that ASI is completely harmless — or even completely impossible to build. After all, I can’t predict the future.

The problem is, they can’t either.

