How Humans Can Keep Superintelligent Robots From Murdering Us All

Ultron, an artificially intelligent robot. Marvel


While Kevin Drum is focused on getting better, we’ve invited some of the remarkable writers and thinkers who have traded links and ideas with him from Blogosphere 1.0 to this day to contribute posts and keep the conversation going. Today, we’re honored to present a post from Bill Gardner, a health services researcher in Ottawa, Ontario, and a blogger at The Incidental Economist.

This weekend, you, I, and about 100 million other people will see Avengers: Age of Ultron. The story is that Tony Stark builds Ultron, an artificially intelligent robot, to protect Earth. But Ultron decides that the best way to fulfill his mission is to exterminate humanity. Violence ensues.

You will likely dismiss the premise of the story. But in a book I highly recommend, Oxford philosopher Nick Bostrom argues that sometime in the future a machine will achieve “general intelligence,” that is, the ability to solve problems in virtually all domains of interest. Because one such domain is research in artificial intelligence, the machine would be able to rapidly improve itself.

The abilities of such a machine would quickly transcend our abilities. The difference, Bostrom believes, would not be like that between Einstein and a cognitively disabled person. The difference would be like that between Einstein and a beetle. When this happens, machines can and likely would displace humans as the dominant life form. Humans may be trapped in a dystopia, if they survive at all.

Competent people—Elon Musk, Bill Gates—take this risk seriously. Stephen Hawking and physics Nobel laureate Frank Wilczek worry that we are not thinking hard enough about the future of artificial intelligence.

As they put it: "So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a text message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here—we'll leave the lights on'? Probably not—but this is more or less what is happening with AI…little serious research is devoted to these issues…All of us…should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks."

There are also competent people who dismiss these concerns. University of California-Berkeley philosopher John Searle argues that intelligence requires qualities that computers lack, including consciousness and motivation. This doesn't mean that we are safe from artificially intelligent machines. Perhaps in the future killer drones will hunt all humans, not just members of Al Qaeda. But Searle claims that if this happens, it won't be because the drones reflected on their goals and decided that they needed to kill us. It will be because human beings have programmed drones to kill us.

Searle has made this argument for years, but has never offered a reason why it will always be impossible to engineer machines with autonomy and general intelligence. If it’s not impossible, we need to look for possible paths of human evolution in which we safely benefit from the enormous potential of artificial intelligence.

What can we do? I’m a wild optimist. In my lifetime I have seen an extraordinary expansion of human capabilities for creation and community. Perhaps there is a future in which individual and collective human intelligence can grow rapidly enough that we keep our place as free beings. Perhaps humans can acquire cognitive superpowers.

But the greatest challenge of the future will not be the engineering of this commonwealth, but rather its governance. So we have to think big, think long-term, and live in hope. We need to cooperate as a species and steer our technological development so that we do not create machines that displace us. At the same time, we need to protect ourselves from the expanding surveillance of our current governments (such as China’s Great Firewall or the NSA). I doubt we can achieve this enhanced community unless we also find a way to make sure the superpowers of enhanced cognition are available to everyone. Maybe the only alternative to dystopia will be utopia.

