One response to the call by experts in robotics and artificial intelligence for a ban on “killer robots” (lethal autonomous weapons systems or Laws in the language of international treaties) is to say: shouldn’t you have thought about that sooner?
Figures such as Tesla's chief executive Elon Musk are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here: the "unmanned combat air vehicle" Taranis, developed by BAE and others, for instance, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border. Autonomous tanks are in the works, while human control of lethal drones is becoming just a matter of degree.
Yet killer robots have been with us in spirit for as long as robots themselves. Karel Capek's 1920 play RUR (Rossum's Universal Robots) gave us the word (from the Czech robota, meaning "forced labour"). His humanoid robots, made by the eponymous company for industrial work, rebel and slaughter the human race. They've been doing it ever since, from the Cybermen to the Terminator. Robot narratives rarely end well.
It's hard even to think about the issues raised by Musk and his co-signatories without a robot apocalypse looming in the background. Even if the end of humanity isn't at stake, we just know that one of these machines is going to malfunction, with the messy consequences of Omni Consumer Products' police droid in RoboCop.
Such allusions could seem to make light of a deadly serious subject. OK, so a robot Armageddon is hardly frivolous, but these stories, for all that they draw on deep-seated human fears, are ultimately entertainment. It’s all too easy, though, for a debate like this to settle into the polarisation of good and bad technologies that science-fiction movies can encourage, with the attendant implication that, so long as we avoid the really bad ones, all will be well.
The issues – as specialists on Laws doubtless recognise – are more complex. On the one hand, they concern the wider and increasingly pressing matter of robot ethics; on the other, they are about the very nature of modern war and its commodification.
How do we make autonomous technological systems safe and ethical? Avoiding robot-inflicted harm to humans was the problem explored in Isaac Asimov's I, Robot, a collection of short stories so seminal that Asimov's three laws of robotics are now sometimes discussed almost as if they carried the force of Isaac Newton's three laws of motion. The irony is that Asimov's stories were largely about how such well-motivated laws could be undermined by circumstances.
In any event, the ethical issues can’t easily be formulated as one-size-fits-all principles. Historian Yuval Noah Harari has pointed out that driverless vehicles will need some principles for deciding how to act when faced with an unavoidable and possibly lethal collision: who does the robot try to save? Perhaps, Harari says, we will be offered two models: the egoist (which prioritises the driver) and the altruist (which puts others first).
Human instincts
There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. “Distinguishing between a fearful civilian and a threatening enemy combatant requires a soldier to understand the intentions behind a human’s actions, something a robot could not do,” it says. Furthermore, “robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians”.
But the first claim is a statement of faith – mightn’t a robot make a better assessment using biometrics than a frightened soldier using instincts? As for the second, one feels: sure, sometimes. Other times, humans in war zones wantonly rape and massacre.
This is not to argue against the report’s horror at autonomous robot soldiers, which I, for one, share. Rather, it brings us back to the key question, which is not about technology but warfare.
Already our sensibilities about the ethics of war are arbitrary. “The use of fully autonomous weapons raises serious questions of accountability, which would erode another established tool for civilian protection,” says Human Rights Watch. It is a fair point, but one impossible to place in any consistent ethical framework while nuclear weapons remain internationally legal. Besides, there’s a continuum between drone warfare, soldier-enhancement technologies and Laws that can’t be broken down into “man versus machine”.
Nature of war
This question of automated military technologies is intimately linked to the changing nature of war itself, which, in an age of terrorism and insurgency, no longer has a start or an end, battlefields or armies. As the American strategic analyst Anthony Cordesman puts it: "One of the lessons of modern war is that war can no longer be called war." However we deal with that, it's not going to look like the D-day landings.
Warfare has always used the most advanced technologies available; “killer robots” are no different. Pandora’s box was opened with the invention of steel smelting, if not earlier (and it was almost never a woman who did the opening). And you can be sure someone made a profit from it.
By all means let’s try to curb our worst impulses to beat ploughshares into swords, but telling the international arms trade that it can’t make killer robots is like telling soft-drinks manufacturers that they can’t make orangeade.
- Guardian News Service