Could robots go beyond taking our jobs to running our world?

Fear of superintelligent AI has prompted new guidelines that propose close regulation

Christian Bale stars as John Connor in ‘Terminator Salvation’. A recent draft EU report suggested all robots should have an integrated “kill switch”

On January 15th 2015, the Future of Life Institute announced it had received $10 million in funding from world-famous technology entrepreneur Elon Musk, the man behind Tesla and SpaceX. Actors Alan Alda and Morgan Freeman were listed on the advisory board of this little-known organisation, barely a year old, whose claimed purpose was "mitigation of existential risk". Existential risk? What were they so scared of?

It emerged that the Future of Life Institute was a research and outreach organisation hoping to save humanity from what it claims could be our undoing: runaway artificial intelligence and robotics that pose a greater threat to humankind than merely stealing all our jobs; conscious AI entities with cognitive ability that far surpasses the human brain and in possession, quite literally, of a mind of their own.

Musk himself has stated that “we need to be very careful with artificial intelligence”, adding that he is “increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish”.

Three laws of robotics

His donated millions have since given rise to a new set of guidelines, the real-life version of the “three laws of robotics” devised by science fiction author Isaac Asimov. Known as the Asilomar AI Principles, these 23 new guidelines are organised under three headings: research issues; ethics and values; and longer-term issues.

Some might say that Asimov’s three laws still have relevance. If robots with human-level AI come into being, it makes sense that they adhere to these laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings, except where such orders would conflict with the first law.

3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

However, these were written in 1942 when the promise (or threat) of intelligent robots was the stuff of science fiction. Even Asimov himself admitted that his robot code of conduct was created with entertainment in mind: “[They] are sufficiently ambiguous in order to write strange stories where robots don’t always behave as expected,” he said.
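To see what that strict ranking might look like when spelled out, here is a minimal sketch in Python – purely this article’s illustration, not Asimov’s own formulation nor anyone’s real robot controller, with every function name invented for the example – that treats the three laws as an order of precedence over a robot’s candidate actions. As Asimov hints, all the difficulty hides in the predicates that decide what counts as “harm”.

```python
# A toy sketch of the precedence the three laws imply: harming a human
# outweighs everything, disobedience comes next, self-preservation last.
# The predicates themselves are placeholders -- deciding what counts as
# "harm" is exactly the ambiguity Asimov exploited in his stories.

def choose_action(candidates, harms_human, disobeys_order, endangers_self):
    """Pick the candidate action that best satisfies the three laws.

    Each predicate maps an action to True/False; sorting on the tuple
    (harm, disobedience, self-danger) gives the laws their precedence,
    because Python compares tuples left to right and False ranks below True.
    """
    return min(
        candidates,
        key=lambda a: (harms_human(a), disobeys_order(a), endangers_self(a)),
    )


# Illustrative use: given two options, the robot prefers the one that
# disobeys an order over the one that would hurt someone.
if __name__ == "__main__":
    actions = ["push bystander aside roughly", "refuse the order and stop"]
    best = choose_action(
        actions,
        harms_human=lambda a: "push" in a,
        disobeys_order=lambda a: "refuse" in a,
        endangers_self=lambda a: False,
    )
    print(best)  # -> "refuse the order and stop"
```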

The Asilomar AI principles, which are endorsed by high-profile AI and robotics researchers including Ray Kurzweil and influential figures like Stephen Hawking, are not so ambiguous. They are, however, only pointers to future developments, ones that can be reached only with co-operation from government and all levels of society.

The principles recommend that funding for AI research ensure these technologies are developed for beneficial purposes, while addressing the thorny questions of hacking, malfunctions, and what legal and ethical status AI entities should have.

‘Black box’

There are, however, many well-elucidated principles such as “failure transparency”: if an AI system causes harm, it should be possible to ascertain why. This is what Prof Alan Winfield, professor of robot ethics at the University of the West of England, Bristol, refers to as a “black box” for robots. Much like the flight recorder of an aircraft, it should be possible to recover a robot’s black box to see what went wrong. A related recommendation is made in a recent draft EU report to the Commission on Civil Law Rules on Robotics: all robots should have an integrated “kill switch”.
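To make “failure transparency” concrete, here is a minimal sketch in Python – purely illustrative, not Winfield’s design nor anything specified in the EU draft report, with all class and method names invented for the example – showing how an append-only event log and an integrated kill switch might sit together inside a robot’s control software.

```python
# A sketch of a robot "black box" plus kill switch: every command is
# logged with a timestamp so the record can be replayed after a failure,
# and a single emergency call halts the machine.

import time


class RobotBlackBox:
    """Append-only event log, analogous to an aircraft flight recorder."""

    def __init__(self):
        self.events = []

    def record(self, event: str) -> None:
        self.events.append((time.time(), event))

    def replay(self) -> None:
        """Recover the sequence of events to see what went wrong."""
        for timestamp, event in self.events:
            print(f"{timestamp:.3f}  {event}")


class Robot:
    def __init__(self):
        self.black_box = RobotBlackBox()
        self.halted = False

    def act(self, command: str) -> None:
        if self.halted:
            self.black_box.record(f"ignored '{command}': robot halted")
            return
        self.black_box.record(f"executing '{command}'")
        # ... actuator code would go here ...

    def kill_switch(self) -> None:
        """Manual override: stop acting, but keep recording."""
        self.halted = True
        self.black_box.record("kill switch engaged")


if __name__ == "__main__":
    robot = Robot()
    robot.act("ice the cake")
    robot.kill_switch()
    robot.act("ice the cake")
    robot.black_box.replay()
```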

While the idea of being able to halt or manually override a robot’s actions makes sense, the EU report goes further. It recommends the creation of an official robot register with which all robotics manufacturers would be legally required to comply, and it also floats the idea of giving robots legal status as “electronic persons”, which would make them legally responsible for their behaviour.

The European Robotics Association, EUnited Robotics, is not impressed. It claims the report "proposes to find solutions for problems that in many cases do not yet exist – and might not ever exist". Beyond its educated prediction that robots will not, in the near or even medium term, reach a degree of autonomy that would let humans or organisations shirk accountability for their machines' actions, the association adds that "the technological possibilities of autonomy should not be confused with machines developing an own consciousness. Therefore, we do not see the need for creating a new legal status of an 'electronic person'."

In terms of conflating autonomous actions with consciousness, the organisation has a point. One problem at the core of developing a universal robot code of conduct is that we are talking about a spectrum of artificial intelligence: it ranges from the “dumb” robots that have existed for decades, working away in industrial jobs assembling cars or icing cakes, through high-level artificially intelligent robots like Boston Dynamics’ SpotMini dog, which can help with the housework, all the way up to theoretical AI that is not only capable of making independent decisions but is conscious and has reached human-level intelligence or beyond.

Control world

What Musk, Hawking, Kurzweil and others are really worried about is the emergence of a superintelligent agent that may have the power – and the desire – to take control of the world, starting with local computer networks and working its way up to entire governments, financial and economic systems. While this seems a little far-fetched, University of Oxford philosopher Nick Bostrom has written a book on the subject, Superintelligence, dedicating an entire chapter to an AI takeover scenario, minus the science fiction, drama and hysterics.

It goes a little something like this: a “seed” AI is created when scientists finally develop a system that can improve on its own intelligence. This agent becomes better at AI design than its human programmers and recursively self-improves to the point of becoming superintelligent. For reasons best known to the AI, it decides to take over the world, starting with a covert preparation phase (it was designed by humans, so why wouldn’t it be devious?) and ending with a strike that eliminates the human species through self-replicating biotech or nanotech weapons.

"Only those with Pollyanna-like views of the future would resist our call to at least plan for the possibility that this dark outcome may unfold," warned cognitive science researchers Selmer Bringsjord and Joshua Taylor in the 2012 MIT-published book Robot Ethics.

But how likely is the invention of superintelligent AI, or what is called the technological singularity? Predictions abound. Even Alan Turing, the father of AI, predicted that by the year 2000 a machine would be able to fool 30 per cent of human judges in his now-famous test. "Computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible, just around the corner or anything in between," say Stuart Armstrong and colleagues at the Future of Humanity Institute, University of Oxford.

Surprisingly, expert and non-expert predictions (the latter including this journalist's) are alike in two respects: they have consistently failed to get it right, and they tend, on average, to put superintelligent AI about 15-25 years away from whatever year the prediction is made. The human tendency is to guess that a dazzling or frightening future is just around the corner.

Perhaps we should return to Asimov for words of wisdom. After all, he accurately predicted: “Robots will neither be common nor very good in 2014, but they will be in existence.”