

Stephen Hawking: You Should Support Wealth Redistribution


Mihai Andrei
November 20, 2015 @ 3:04 am


In July, Professor Stephen Hawking took the time to answer questions posed by Reddit users in an AMA (Ask Me Anything), addressing one of the less discussed aspects of increasing technology and robotization: the distribution of wealth. Here’s the question, which is really interesting, and Hawking’s answer:

Q: “Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?”

Answer:

A: If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

You hear people and the media talk a lot about a potential war with robots or emerging conflicts with technology, but in my view, this is a far more pressing point. If we reach a point where machines produce all, or most, of what we need, then we won’t really need so many people to work, and without work, under our current system, a huge income inequality gap will open up (or rather, the existing one will widen). With this in mind, we need to rethink the way we distribute wealth; the robots won’t be the enemy, we will.

A potential solution is a system called “basic income”, which revolves around the idea of giving everyone a sum of money every month that is sufficient to live on, whether they work or not. You can read more about it here. Several cities in the Netherlands are already starting to implement it, and Finland is considering it on a national level. There are other ideas as well, but one thing seems certain: the increasing use of robots and machines can either create a world where wealth is distributed harmoniously among the population, or a divided world of the very rich and the very poor.

As for Professor Hawking, his entire AMA (which you can read on Reddit) was very insightful. Here are a couple more Q&As related to Artificial Intelligence:


Q: Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Answer:

A: You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

Question:

Q: Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Answer:

A: The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.
