How many people wish to destroy all human life? I’d guess the answer is at least six digits, maybe seven. In other words, hundreds of thousands, if not millions.
(If 0.1% of the world’s 8 billion people wish to kill humanity, that’s 8 million people. If it’s 0.01%, then 800,000 people.)
In the past few years, I can recall several stories of airline pilots committing suicide and taking an entire commercial airliner with them. Depressed people seem to occasionally have an urge to spread their own suffering onto the rest of humanity.
Are airline pilots typical human beings? No, they are screened in an attempt to filter out people with mental problems. If I sit on the subway and scan people’s faces, the average person seems less mentally stable than the average pilot I observe when exiting a plane.
Obviously this is all guesswork, and I don’t think it matters very much whether the number of potential world killers is 800,000 or 8 million. It only takes one.
The real question is whether ordinary people will ever gain the power to destroy the world.
I must confess that I don’t understand the alignment debate. I have no opinion on whether AIs will be capable of pursuing their own goals, which might be unaligned with the best interest of society. My fear is not unaligned AIs, it’s AIs that are aligned with depressed people.
Perhaps there’s no reason for me to have this concern. But if I’m wrong, it won’t be because no one wants to destroy the world; it will be because technology never gives individuals the power to destroy the world, and because governments have no interest in doing so.
In any case, I don’t understand why I keep seeing one article after another on what AIs might or might not decide to do, and very little on what they would be capable of doing if used by one of those hundreds of thousands of people who wish to destroy all human life.
If you have trouble imagining what I am talking about, consider a scientist who becomes convinced that humans are destroying the animal kingdom, and then becomes severely depressed. The Unabomber had very weak technology at his disposal. But in the future? Engineer a highly contagious and deadly virus with a long incubation period (like HIV’s). Perhaps it’s already happened in a virus research lab.
Lots of people hope that future AIs will be aligned with humans. I fear that future AIs will be aligned with humans. It’s not AI that I distrust, it’s humans.
PS. When you pass a certain age, you become aware that there are many questions that you’ll never see answered. At age 67, I won’t live long enough to see how the world addresses global warming. I probably won’t live long enough to see if they ever complete the high-speed rail project in California. I probably won’t live long enough to see which worldwide trend follows the current wave of nationalist authoritarianism. I won’t live to see humans go to Mars. I won’t live to see if fusion energy pans out. I won’t live long enough to see if we can solve the business cycle with NGDPLT.
If you think of long sports careers like LeBron James’s, the stars entering the NBA today will be the last generation I’ll follow to the end. I’ll never again see a Packer as talented as Rodgers or a Buck as talented as Giannis.
When I was younger, there was a sense of time being limitless. I felt that eventually I would see answers to these sorts of questions. Now I realize that I’ll never find out the endgame for AI. I don’t even have strong views as to what’s likely to happen, other than that the future will be far different and far stranger than we can imagine.