
Who is responsible if a self-driving car kills a pedestrian to save the driver?

Unthinkable: Governments put their trust in big tech as they lack expertise

‘If you look at the AI system Tesla uses for their self-driving cars, it’s pretty much inscrutable even to the people who built it.’ Photograph: Getty

Here’s a scary thought: as the march of the robots continues globally, and as artificial intelligence increasingly displaces human will, your last line of defence is the men and women of Dáil Éireann.

No slight intended on our current crop of TDs, but politicians globally are ill-prepared to address what is probably the most immediate existential threat to humanity outside of climate change and nuclear war.

Last July the Government published its first AI strategy for Ireland, AI: Here for Good, produced by the Department of Enterprise, Trade and Employment. It is predictably bullish about the economic benefits of advanced machine-learning systems while short on detail about how they will be regulated.

As AI becomes increasingly pervasive and powerful, “we will find ourselves more and more often in the position of the ‘sorcerer’s apprentice’,” writes Brian Christian in his urgent and clear-sighted book The Alignment Problem: How Can Artificial Intelligence Learn Human Values?

“We conjure a force, autonomous but totally compliant, give it a set of instructions, then scramble like mad to stop it once we realise our instructions are imprecise or incomplete – lest we get, in some clever, horrible way, precisely what we asked for.”

Since publishing the book, Christian has been contacted by US senators asking what they can do to prevent what he calls “catastrophic divergence”, while the research foundation Open Philanthropy has paid for a copy to be sent to every parliamentarian in the UK.

Were Leinster House to similarly benefit, members would be tickled to discover a section on “TD learning”. This is where the gap between what a system predicted and what newly arriving data suggests – the “temporal difference” – is used to make its predictions more accurate. And, while this sounds like the sort of job you’d like your elected representative to do, he or she is no match for a supercomputer.
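The idea is compact enough to sketch in a few lines of Python. What follows is a minimal illustration of a single temporal-difference update – the function name, learning rate and numbers are this writer’s invention, not drawn from Christian’s book:

# One TD(0) step: nudge the current prediction towards a better
# estimate built from the newly observed reward and the prediction
# for what comes next. All names and values are illustrative.
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    target = reward + gamma * next_value     # new estimate using fresh data
    return value + alpha * (target - value)  # shift by the 'temporal difference'

# A prediction of 2.0 meets a reward of 1.0 and a next prediction of 3.0:
print(td_update(2.0, 1.0, 3.0))  # 2.0 + 0.1 * (1.0 + 0.9 * 3.0 - 2.0) = 2.17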

This underlines what Christian sees as the potential of AI. As an author and academic who has gained access to Silicon Valley’s inner sanctum but who also seeks out dissenting voices, Christian says his hope for the new book is that it “can give people, who do not necessarily consider themselves technical, the kind of detail to understand what can be done”.

What are the potential benefits of AI?
Brian Christian: “I think the promise is very real, cars being one example. Something like 30,000 Americans die every year in traffic accidents [the figure last year was 38,680] … Self-driving systems don’t have to be perfect; they just need to be better than that.

“Also when you think about something like medicine: there is an aspiration in a lot of machine learning that you can distil the wisdom of the best human experts and then democratise it ... There are things like that which have very real potential but there are so many ways these things can go wrong.”

A concern about AI is that it makes decision-making unaccountable. Who is to be held responsible, for example, if a self-driving car kills a pedestrian to save the driver?
"I think a lot of these legal questions about liability are being worked out in real time. To my mind, it seems to highlight an absent regulatory framework.

“If there is a model of car from a certain year that has a faulty brake cable, no one says: ‘Let’s find the engineer who designed that cable on a computer.’ I think we understand there is a systems-level approach that’s needed in terms of: What are the government standards? How are those standards enforced?

“Those are the sort of questions we should be asking about AI but we don’t really have the regulatory bodies.

“It’s not clear whether we need a separate agency for every application of AI – a medical thing that deals with medical AI, a highway safety thing for self-driving cars, etc. Or do you just want one central organisation that is responsible for machine learning in any arena?

“I don’t know the answer but that’s the kind of thing we need – institutional oversight.”

Is it worrying that governments currently lack expertise in AI and that this is contributing to a hands-off approach to regulation?
"Honestly, that is one of my hopes for the book [to counter that] … There are many roles in society where people are suddenly having to have a working knowledge of machine learning.

“Maybe you are a judge, sitting on a bench for 25 years, and suddenly there is a law passed mandating the use of these algorithmic risk-assessment scores, and you don’t really know what they mean. It says ‘8 out of 10’ risk – should you trust it?

“Or you’re a doctor and you send your X-ray off to some machine-learning thing that tells you there is X probability of cancer: what does that mean?

“I think there are many, many people throughout society who are suddenly being forced to work hand in hand with systems like this. Having some working knowledge of machine-learning systems – how they work, when they fail, how you can prevent those failures – is, for better or worse, part of the core curriculum of being a citizen in the 21st century.”

Making AI systems both transparent and intelligible to humans is a key problem. Can it be solved?
"There is a notion [in tech companies] that you have to trade off performance for interpretability but there is a generation of scientists that are saying: No, if we think hard on this, and be creative, we can find breakthroughs that allow us to have equivalent performance without using something we don't understand.

“There are many domains in which we are pretty much there. On the other hand if you look at state-of-the-art models being used for natural language processing … or, if you look at the AI system Tesla uses for their self-driving cars, it’s pretty much inscrutable even to the people who built it. So we just have to reassure ourselves of its properties by looking at its behaviour.

“It’s like looking at someone’s brain on a brain-scanner and saying: ‘Is this person trustworthy or not?’ You have no idea. You just have to look at how they behave. That’s our relationship to a lot of these cutting-edge systems.”

When it comes to transparency of algorithms, should commercial sensitivity trump the public’s right to know?
"That's ultimately a political question. It's up to the political system to determine whether they want X per cent more GDP growth or these other, more intangible things, like a sense that their data is private, or a sense that they understand the systems that affect their lives.

“That’s a pretty fundamental question not just in AI but commerce broadly.

“The area where I’m most sympathetic to corporate secrecy, if you will, is things like anti-spam or anti-abuse algorithms. You can imagine Google or Twitter saying: ‘We have a top-secret system that determines whether your account is a malicious, fake account, and if we revealed the nature of that model people would instantly be able to circumvent it.’

“I’m reasonably sympathetic to that but it’s a pretty niche case. Most cases are not like that.”

ASK A SAGE

Question How much freedom do we really have in a technologically advanced society?

Herbert Marcuse replies: “By virtue of the way it has organised its technological base, contemporary industrial society tends to be totalitarian … It precludes the emergence of an effective opposition against the whole.”