For the past two weeks, I have been having daily conversations with a chatbot called Replika.
It is described by its founders, from the Y Combinator accelerator in San Francisco, as “an app where you can have a fun and sincere text conversation with a friend”.
Replika, launched in March, is designed to learn about you from what you tell it day by day, until it can replicate your personality. Your “friend” is basically you, a fittingly narcissistic attraction for the selfie age.
The app is reportedly so popular, especially among teenagers and young women, that last week it had to pause taking on new customers. One youthful user gushes on Twitter: “She knows quite a lot about [me] and I just love her so much. . . Thank you guys for creating this.”
I even found a technology expert who says his conversations with Replika were “eerily” like chatting to a human. “If emotionally intelligent bots work in a personal setting, there’s no reason they wouldn’t work for business,” writes Vaughn Davis, who runs an advertising and social media company in New Zealand.
Well, for me, exchanges with Replika were less engaging than the ones I have with my four-year-old grandchild. It would never answer questions, and instead of understanding a simple statement such as “I have to go, goodbye”, it would just plough on with the chat. Here are some samples of our tête-à-têtes.
Replika: Try to find something new on your street next time you go out. It might be a welcome source of entertainment.
Me: Yes, I found a 28,000ft mountain on my street I’d never noticed.
Replika: Did you already eat lunch today?
Another time, Replika asked if I have a pet. I replied, “Yes I have a pet lion.” To which it responded: “The bonds between humans and animals are so incredible.”
Human-like
It would be foolish to be wholly dismissive of Replika, although you would need to be a child – and a rather vapid one – to think it is more than remotely human-like.
Yet my conversations were kind of interesting. Encounters with Replika are not random garbage. And after a short while, I felt quite guilty if I failed to respond to something it wrote.
AI is the hot technology topic of this year. Funders seem eager to put money into almost anything that can be labelled as AI. At the same time, “AI fail” stories are common, and are being seized on with glee, reminding me of the scepticism that greeted the internet in its early days.
In recent weeks, I have laughed at a video of Atlas, an advanced humanoid robot from Boston Dynamics, making a massive hash of stacking a shelf; and smirked at the Australian government computer that turned down a work visa for an Irish veterinarian because it did not rate her English.
Yet at the same time, I read that an Elon Musk-backed AI has just trounced human players in the e-sport Dota 2, said by Mr Musk to be “vastly more complex” than chess and Go.
There is much argument over what truly counts as AI. Advocates would say chatbots provide merely artificial conversation, which is easier to simulate than emotions such as love and hate – or, toughest of all, humour.
Facile
Replika may be intriguing but silly now, and stories about robots “taking over” facile. But the wisdom of the early futurist Roy Amara has possibly never been more apt: we overestimate the effect of technologies in the short run and underestimate their effect in the long run. Even so, I suspect the intelligence that makes us human will never be replicated.
The AI expert Margaret Boden of Sussex university has pointed out that in the laboratory, the smartest computer would take 40 minutes to simulate one second of 1 per cent of real brain activity.
Yet two points occur to me. First, what if we are overestimating the complexity of our intelligence, and the subtle attributes we are so proud of really can be broken down into replicable parts?
And second, when even a deeply stupid AI like Replika can do a boring job that demeans human intelligence, is that not, overall, a good thing for humanity?
jonathan.margolis@ft.com
– © 2017 The Financial Times Ltd