Tesla Motors’ Elon Musk has said that our civilization is dangerously close to encountering AI problems within a “five-year timeframe, 10 years at most.” He made the comment on the website Edge.org shortly before deleting it.
His point was that, sometime soon, we may actually create a form of artificial intelligence that decides to rise up and wipe out the human race, à la Skynet from the Terminator films. At the very least, he believes AI could cause us serious harm; Musk has stated plainly that it is potentially “more dangerous than nukes.”
That may seem far-fetched for now, but he believes it’s possible, so much so that he’s actively working to prevent it. If there’s anyone who can keep it from happening, it’s Musk. He’s the man behind PayPal, Tesla Motors, and SpaceX, all of which are remarkably innovative companies in their respective industries.
But Musk isn’t just running through the streets screaming that the sky is falling; he’s actively trying to stop it from becoming a reality. Meanwhile, other experts are calling attention to the less movie-worthy problems posed by AI and automation: massive unemployment, to name just one.
How Elon Musk Plans to Save Us from Killer AI
Elon Musk recently donated millions of dollars to the Future of Life Institute: $10 million, to be exact. It’s not the money that’s going to save us, though; it’s how the institute plans to use the funds.
The institute is funding research aimed at keeping AI “robust and beneficial,” as opposed to dangerous and harmful. Musk has long argued that future AI technologies must remain as beneficial as they are capable; if the power of AI ever exceeds its usefulness to the human race, that’s when we may run into significant problems. The Future of Life Institute appears to share that belief.
As for the funds, they will be doled out to 37 teams, each conducting different research on AI. Those 37 teams were narrowed down from an initial pool of about 300 applicants.
They will be studying how to teach AI to better understand humans, how to align robot and AI interests with humanity’s, and how to ensure that AI always remains under our control. The latter point is important, because as long as we maintain control, it’s highly unlikely that AI will be able to “rise up” against us.
If you’re interested in seeing the complete list of grant winners, the institute has announced them publicly on its official site.
What Are the Real Problems?
Despite its commitment to reining in AI, the institute argues that discussions furthering the concept of “killer AI” detract from more immediate problems. Max Tegmark, president of the Future of Life Institute, believes there are more pressing issues already bubbling to the surface, such as the economic impact of AI replacing human jobs.
And what might that economic impact look like? To begin with, all kinds of heavy machinery that currently requires a human operator may soon be replaced by autonomous machines. The upside, of course, is that the world might see a dramatic reversal in workplace injury trends: autonomous systems are far less prone than human operators to the kinds of errors that cost U.S. businesses about $1 billion per week in workers’ compensation payouts.
Then there’s the real societal cost of suddenly having millions fewer jobs in the U.S. The taxi and trucking industries are particularly vulnerable; AI-powered autonomous cars are looking less like a pipe dream and more like a very real possibility. With massive unemployment looming, many are turning to a revolutionary but economically viable solution: a universal basic income. The idea is gaining traction in some Nordic countries and even parts of Canada, but we have yet to see an American politician brave enough to bring the topic before Congress.
Regardless, these are the delicate trade-offs that will define future research into automation and artificial intelligence. Says Max Tegmark:
“The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI. We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.” That said, part of the plan is also to research preventive measures that keep AI from ever becoming such a “deadly” problem.
The good news is that this research is being done before the birth of a more powerful AI. It makes sense to prepare for the scenario beforehand, so we don’t run into problems down the road. Of course, there’s always the question of whether the work will be finished in time; a more advanced form of AI could come to fruition within the next few years. Only time will tell.
–
Image Credit: jlmaral (via Flickr)