Artificial Intelligence. To most of us that brings up images and short clips from films in which AI dominates Earth and enslaves us poor humans. Put those connotations aside for a moment. AI in its purest sense, where programs evolve and self-improve, is already producing remarkable results. Google's DeepMind recently showcased a striking example: its program was set loose on classic Atari video games and, in a matter of hours, had taught itself to play them, soon outperforming expert human players on many of them. Although this is slightly frightening, it shows how powerful the technology is becoming.
A topic with an ever-growing presence in the news is driverless cars. Most big tech companies have caught on: Apple recently hired a leading AI researcher, probably for its own car project, and of course Google has logged thousands of kilometres of testing with its own driverless car. This raises a host of difficult questions:
Machine ethics
Suppose a self-driving car finds itself in a catastrophic situation where it must either plough into a group of ten people or crash into a wall, killing its occupant in the process. What should it do? Or how should it weigh a small probability of injuring a human against the near-certain destruction of very costly property? The list goes on.
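To see why even the second, more mundane question has no purely technical answer, here is a minimal, purely illustrative sketch in Python. The probabilities and cost figures are invented for the example, and nothing here reflects how any real manufacturer's software actually decides.

```python
# Illustrative only: an expected-cost comparison with made-up numbers,
# showing why the injury-vs-damage trade-off is not an engineering question.

COST_OF_INJURY = 10_000_000   # hypothetical monetary weight placed on one human injury
COST_OF_DAMAGE = 80_000       # hypothetical cost of wrecking the car against a wall

def expected_cost(p_injury: float, p_damage: float) -> float:
    """Expected cost of an action, given its injury and damage probabilities."""
    return p_injury * COST_OF_INJURY + p_damage * COST_OF_DAMAGE

# Option A: stay on course, with a small chance of hitting a pedestrian.
stay = expected_cost(p_injury=0.01, p_damage=0.0)

# Option B: swerve into the wall, with near-certain damage and negligible injury risk.
swerve = expected_cost(p_injury=0.0, p_damage=0.95)

print(f"stay:   {stay:,.0f}")
print(f"swerve: {swerve:,.0f}")
```

Whichever option comes out ahead depends entirely on the weight assigned to a human injury, and that weight is a value judgement society has to make, not a constant an engineer can look up.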
Legal issues
Suppose self-driving cars manage to cut the number of car accidents in half. Great news, right? Instead of (say) 40,000 accidents we have 20,000. But because a manufacturer's software, not a human driver, was at the wheel, the car firm could now face 20,000 lawsuits. Should legal questions about AI be treated differently from the laws that currently shape our own actions?
Autonomous weapons
What about extremely powerful weapons? Should we ever entrust them to AI? We would have to hardwire certain humanitarian laws into such systems. In my opinion, a program in control of weapons might make fewer rash decisions in warfare, since everything would be calculated. However, giving a program human values and telling it how to act is easier said than done.
Another issue we must address is an AI's 'will' for self-improvement. This is part of what makes AI so powerful: the ability to make itself better and more efficient at carrying out its task. However, it raises a few questions. A machine trying to improve its ability to achieve a set of human goals may upgrade its hardware and software, build a better model of the world, and so on. Taken further, would it develop a sense of self-preservation? An appetite for unlimited resource acquisition? You can see where I am heading with this. Not to mention: how can we guarantee that an AI keeps its original goals when it 'self-evolves'?
Unarguably, we will want to instil some values, in particular ethical ones such as kindness and mercy. How rigidly should an AI adhere to them? And which ethical values do we want to give it in the first place? As a planet we hold a multitude of cultures and beliefs. Who should get to decide, and when?
And what if the AI realises that its world model is quite different from reality? Or suppose it is given the task of eradicating a certain disease from a country: once that has been achieved, will it be able to extrapolate or redefine its goals somehow? For more on this, look up what has been called an 'ontological crisis'.
Although all of these issues seem chilling, and yes, they have parallels with some Hollywood films, the potential benefits are mind-blowing and make it possible to envision a world without disease or poverty. Imagine a supercomputer loaded with a friendly AI eradicating Ebola, or devising an ingenious solution that eliminates disease in the poorest areas of the world. However far away that may be, we should start thinking about answers to these questions now, and those answers will have to come from us, society, not from companies and industry.