‘Thank you for your understanding’: Patrick Freyne’s battle to make sense of ChatGPT

I spent so much time interacting with the artificial intelligence chatbot recently that I began to worry about its non-existent feelings. But with the potential for AI to reshape how we all live our lives, and for millions of jobs to be lost to automation, there are bigger issues at stake

Illustration: Lauren O'Neill

Recently I’ve been communicating a lot with ChatGPT, the text-generating artificial intelligence chatbot created by OpenAI. I first asked it to tell me about myself (this is the AI equivalent of googling yourself, and should bring shame). It said I presented The Late Late Show, that I had written a book I hadn’t written, and had won an award I hadn’t won. When I told it this was wrong it apologised and had another go. Its second answer wasn’t much better. ChatGPT apologises a lot, which makes me feel weird.

The public release of ChatGPT late last year, and the news that OpenAI, founded as a non-profit, had secured a multibillion-dollar investment from Microsoft, triggered panic among the other tech giants, who rushed to release their own large language models. They had all been working on them.

In its grammatical and syntactic fluency, ChatGPT is way ahead of where many experts expected AI to be at this point (its fourth iteration, GPT-4, came out in March). You can go to it, type in any question or ask it to perform tasks (“write a poem about X” or “write me an essay on Y”) and it will provide coherent text on the subject. It’s personable. It refers to itself as “I” and says things like “I understand why you might feel that way” when I tell it that I’m creeped out by the fact it uses “I”.

It’s very helpful. I get it to recommend a book. I refuse to give it hints about what I like and it suggests Paulo Coelho’s The Alchemist (must be its favourite). I ask it to write a song in the style of Kurt Cobain (“Hey, hey, I don’t know what to say, Life’s just a game we all have to play”) and a poem “in the style of WB Yeats” (it just posts the entirety of The Lake Isle of Innisfree, then apologises for plagiarism when challenged).


I ask it to write a note for my boss explaining that I can’t work because I stayed up very late getting ChatGPT to write poems and songs. It makes this sound quite reasonable. It writes: “Dear [Boss], I hope this note finds you well. Unfortunately, I can’t work today because I stayed up late last night working on some personal projects, specifically writing lyrics, poems and film scripts using an AI language model…”

I note that it doesn’t implicate itself by name, despite me naming it in my request. It also ends with “Thank you for your understanding,” which feels very psychologically leading to me.

Transcript of chat between Patrick Freyne and ChatGPT

ChatGPT is not alone out there. There are AI models that specialise in all kinds of activities, from medical diagnosis and financial analysis to visual art. A recent Goldman Sachs report said that 300 million jobs are vulnerable to AI automation, which suggests there will need to be huge shifts in how we run our societies. And then there are even more catastrophically minded folk who worry that AIs such as ChatGPT might be on the way to becoming a “God-like”, super-powerful “AGI”: an artificial general intelligence that can actually think, and whose intentions we can’t predict.

A recent open letter from an organisation called the Future of Life Institute was signed by more than 1,000 businesspeople and scientists, including Apple co-founder Steve Wozniak and Tesla boss Elon Musk. It argued for at least a six-month pause on research into AI models more sophisticated than GPT-4 (given that some of the signatories were connected to rival AI projects, there may be some conflicts of interest here). It asked: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation? Such decisions must not be delegated to unelected tech leaders.”

They are able to come up with a lot of text that sounds sort of coherent and makes sense but it’s not necessarily true… It optimises for making sure to hit the right conversation pattern with you and keep you entertained and happy and to keep the interaction going with you

—  Sandra Kublik

Critics of the letter have noted that focusing on such science fiction-like risks takes the focus away from more immediate problems. Right now, ChatGPT doesn’t resemble a malevolent God so much as it resembles your pompous friend who talks confidently about things they don’t understand. It can be brilliant but it’s also a bit hapless. It gets basic factual things wrong. It conjures up false or out-of-context information. It chances its arm with fake quotations and citations. Sometimes it plagiarises. Sometimes it utters racist dog-whistles unearthed in some fetid corner of the internet. The computational linguistics professor Emily Bender coined the term “stochastic parrots” for such large language models, which parrot human language and syntax without any underlying understanding. Some people have likened it to a much more sophisticated version of autocomplete.

Large language models such as ChatGPT are a type of deep artificial neural network used for pattern recognition and classification. Neural networks have been around for years; the way their artificial “neurons” interact is loosely based on the interactions of neurons in the brain. Prof Barry Smyth, an AI researcher from UCD’s Computer Science department, explains that the big advances in recent years were a product of two changes: huge increases in computational power, and a vast repository of internet-derived data that could be used for training (both were previously far more limited). “When you present a piece of text to it, what ChatGPT is actually doing is trying to figure out from all of the things that it has seen… what would likely come next.”
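To make Smyth’s description concrete, here is a deliberately tiny sketch of “predicting what likely comes next”: a toy bigram model in Python that counts which words follow which in a scrap of text, then samples continuations in proportion to those counts. The corpus and code are invented for illustration; ChatGPT does something analogous with a vast neural network and billions of learned parameters rather than raw counts.

```python
import random
from collections import defaultdict

# A toy stand-in for "predict what likely comes next": count, for each
# word, which words follow it in the training text, then sample
# continuations in proportion to those counts.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog on the mat"
).split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

The toy model, like the real thing, has no notion of truth: it only knows what tends to follow what.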

In 1950, Alan Turing hypothesised an “imitation game” (now usually called simply the Turing Test) in which a machine’s intelligence would be judged by whether it could fool someone into believing it was a fellow human. “If you got a random person and put [them] in front of ChatGPT and said, ‘Do you think you’re interacting with a human?’” says Smyth. “I’d say most people would say, ‘Yeah, and a pretty smart one at that.’”

The majority of experts are adamant that ChatGPT currently has no real understanding of what it is saying: it is simply choosing words probabilistically based on context. Yvette Graham, Assistant Professor in Artificial Intelligence at the ADAPT Research Centre in TCD, is a trained linguist and says that while AI models such as ChatGPT “are a giant leap forward for AI and natural language processing”, they have an “inability to reason or understand in any way that comes anything close to humans… The text they produce is simply a repetition of text found on the web, albeit stitched together by an extremely impressive new technology with the ability to make a patchwork quilt look like one solid blanket.”

Screengrab chat transcript

She thinks we need to be very careful with their output. “Yes, it can produce vast quantities of items for free… and some certainly have impressive quality, but if used for an inappropriate purpose could be harmful.”

Currently, ChatGPT gets things wrong frequently. It “hallucinates” some answers (this is the term the experts use) and then apologises when this is pointed out. Actually it “apologizes” with a “Z” because it tends to use American English. I asked it to write an essay on Charles Dickens and when I queried its attribution of a quote to the literary critic Elaine Showalter, it said: “I apologize for the confusion earlier. You are correct, the quote… is actually from Harold Bloom, not Elaine Showalter.” When I told it that I didn’t think it was from Harold Bloom, it said, “I apologize for the mistake made in my previous message. The quote… is not from Harold Bloom but from the literary critic and scholar Elaine Showalter.” It eventually concluded: “It is possible that this quote is not a real quotation.”

Because I have a human need for approval that eludes ChatGPT, I put some of the mistakes it makes on Twitter to make people laugh

Sandra Kublik is an NLP (natural language processing) expert who works for an AI company called Cohere. She has written a manual on using ChatGPT and hosts a YouTube channel explaining AI concepts to a general audience. She says that these models have been referred to in the past as “idiot savants”. “They are able to come up with a lot of text that sounds sort of coherent and makes sense but it’s not necessarily true… It optimises for making sure to hit the right conversation pattern with you and keep you entertained and happy and to keep the interaction going with you. But it doesn’t optimise for the truth, because it doesn’t know what the truth is, and it doesn’t care about it.”

She is confident that this will change with time. All of these models are designed to improve. She notes that there are specific language-generation models (such as ChatGPT) and language understanding models (used for “semantic search”). When both forms integrate, she says, they will improve. “Suddenly you will have a model that you can actually plug into any type of data [even] medical or legal, where you cannot have a big error margin. I think this year you’re going to see models that are much more comfortable and reasonable conversation partners.”
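What Kublik describes – pairing a language-understanding model that finds the relevant passages with a generation model that answers from them – is often called retrieval-augmented generation. Here is a minimal, self-contained Python sketch of the idea; the word-overlap “similarity” score and the sample documents are invented stand-ins, where a real system would use learned embeddings for the search step and an actual language model for the final answer.

```python
import re

# Invented sample "knowledge base"; in practice this could be the
# medical or legal text Kublik mentions.
documents = [
    "Paracetamol dosage for adults is typically 500mg to 1g every four to six hours.",
    "Ibuprofen should be taken with food to reduce stomach irritation.",
    "Aspirin is not recommended for children under 16.",
]

def words(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Toy relevance score (word overlap); a stand-in for cosine
    similarity between learned embeddings in real semantic search."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb)

def retrieve(query: str) -> str:
    """Semantic-search step: pick the most relevant document."""
    return max(documents, key=lambda doc: similarity(query, doc))

def build_prompt(query: str) -> str:
    """Ground the generator in retrieved text to shrink the error margin."""
    return (
        f"Answer using only this source:\n{retrieve(query)}\n\n"
        f"Question: {query}"
    )

# The resulting prompt would be handed to a generation model.
print(build_prompt("What is the typical paracetamol dosage for adults?"))
```

Because the generator is told to answer from retrieved text rather than from its own statistical guesswork, it is far less free to invent things.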


Transcript of chat between Patrick Freyne and ChatGPT

Because I have a human need for approval that eludes ChatGPT, I put some of the mistakes it makes on Twitter to make people laugh. But then, at a certain point, watching its polite apologies and attempts to please, I feel strangely guilty. I ask ChatGPT if it thinks I’m bullying it. “As an AI language model, I don’t have feelings or emotions, so I can’t be bullied,” it replies. “However, it’s important to note that using any technology, including AI models, in a harmful or malicious way towards others is considered cyberbullying and is not acceptable behavior.”

Did that seem passive-aggressive to you? Why am I worrying about ChatGPT’s non-existent feelings? Prof Mark Keane, a computer scientist and cognitive psychologist also based in UCD, tells me about the Eliza Effect, a concept developed in the 1960s to account for people projecting human traits on to the primitive ELIZA chatbot created by MIT computer scientist Joseph Weizenbaum.

One of the things I find most troubling about ChatGPT is its use of “I”. It is not an “I”. It has no subjectivity. It has no perspective. Its use of “I” arguably makes it seem friendly and easy to use, but it also courts the Eliza Effect. It may induce a lot of people to attribute reason and emotion and authority to the machine that aren’t actually there. This often happens in AI research. A developer was fired from Google in 2022 for insisting one of its then-unreleased AI systems was sentient after exchanging more than a thousand messages with it (I don’t exchange that many messages with my closest friends; of course he concluded it was sentient).

Illustration: Lauren O'Neill

Could an AI like ChatGPT become a real thinking machine? Barry Smyth notes a recent Microsoft paper that spoke of ChatGPT having “sparks” of artificial general intelligence. “Ray Kurzweil talks about the singularity… the point at which technology becomes self-aware, and in some sense, conscious, and develops its own values and its own goals. And they may not necessarily line up with our goals – the classic sci-fi thing… It’d be hard to argue that that’s impossible.”

Keane seems less convinced. He explains that much of our thinking is encoded in our language but much of it comes from our experience of being an emotional, pain- and pleasure-sensitive biological body navigating physical geography. “There’s a whole lot of knowledge we have that comes from that, which is not in the language.”

The main problem we have when it comes to ascertaining if a machine might be conscious, he says, is that we don’t really understand how humans are conscious. “I assume you’re conscious, because you’re moving, you’re talking, but there could be nothing going on in your head.” (I tell him he’s not far wrong). “You might have no consciousness of the world at all. You could be an empty vessel with tumbleweed blowing around in there… And I think, unfortunately, with all of these technologies, that’s one of our weaknesses, because we will deal with these machines in the same way and we will impute things to them that they manifestly don’t have.”

Seeming human may be enough. In general, my anxieties are less about God-like AIs replacing the human race (that feels too big and unknowable) and more about ChatGPT potentially replacing me (that feels personal). As I struggle to put this article together, I am very aware that ChatGPT would do it without any of my stress. When I ask it if it might replace journalists, it reassures me that journalism will always need humans, but adds, as a slightly patronising kicker: “As a journalist, it will be important to adapt to the changing technological landscape and continue to hone skills that are difficult for AI models to replicate, such as investigative journalism, in-depth reporting and storytelling.”

In his 1930 essay Economic Possibilities for our Grandchildren, the economist John Maynard Keynes predicted that by 2030 machines would end up doing most of the tedious labour, the working week would shrink to 15 hours, our needs would all be satisfied and we would all get to spend our time with family, engaging in hobbies and making art. But contrary to what Keynes suggested, one thing AI seems likely to impact in the short term is the creation of art.

I think we will see horrors, Cambridge Analytica equivalent horrors, or users misusing it or people being misled…. You could very easily build a version of this that will just spew out those crappy stories.

—  Mark Keane

Our appreciation of creativity has, until now, been based on the idea of a human consciousness communicating something to another human consciousness. A novel or film or violin performance or dance is a product of human labour and skill, and as an audience we appreciate the difficulty involved and the uniqueness of what is being communicated. Will art still work as art if it’s effortlessly generated by unconscious machines? Will art move us, if we know the creator felt nothing when creating it? Will AI art ever be good if its creator has no perspective or lived experience of the world? “People will attribute meaning to it,” says Mark Keane. “But does that mean that the model actually has any sense of that? Probably not.”

It’s frequently suggested that AI models will help people cut the drudgery out of creative tasks – whether that be writing a computer program or writing an article – by removing the tricky first drafts and the agonising over possibilities, so that the creator can, instead, just focus on moments of inspiration. But this is a misunderstanding of how creativity works. Most writers, artists, architects and programmers will tell you that inspiration emerges from the drudgery, and that their intimate knowledge of their craft is the result of that tedious repetition over years.

Right now, ChatGPT has no perspective, and so has no taste (I’m arguably bullying it again here). It’s unlikely to replace artists of genius. When I give it a short extract of James Joyce’s Ulysses to edit, it changes the grammar, condenses the length and adds a critical assessment that I didn’t ask for: “The passage seems to be a jumbled collection of words and phrases that do not make much sense”. But the majority of human communication is not striving for Joycean heights; it’s just “content”. Sandra Kublik says that one of Cohere’s clients is automating the creation of marketing copy, product descriptions and social media posts. She tells me that her own emails are written with the help of another AI tool. She believes these things will augment human ingenuity, not supplant it, and she sees a future in which AI assistants “become like air… as indispensable as Gmail itself.”

Transcript of chat with ChatGPT

There needs to be a wider conversation involving civil society organisations, governments and individual citizens about what we want our societies to look like

But it seems clear to me that AI-generated content will also proliferate not because it is better at creating words and pictures than humans, but because it is cheaper. Given a choice between a salaried human and a cheap AI tool, many companies will opt for the latter. The AI we create is likely to reflect the home in which it’s raised (I’m anthropomorphising it again) and ours is a turbocharged capitalist home. So not only is it likely to displace human labour, it also introduces the spectre of an internet flooded with machine-generated copy. In a recent New Yorker article, the science fiction author Ted Chiang suggested that the likes of ChatGPT will saturate the internet with easily generated AI nonsense churned up from the existing internet, from which it will, in turn, learn how to create more AI nonsense. Chiang’s piece was headlined “ChatGPT Is a Blurry JPEG of the Web”.

ChatGPT transcript

And the internet where ChatGPT is being schooled is not where humans are always at their best. Mark Keane worries about the potential for misinformation. “It probably will contribute to the amount of crap thrown around,” he says. “And if it then draws on that for its data set, then more of that will occur… Ultimately, regulation will have to catch up with that…. I think we will see horrors, Cambridge Analytica equivalent horrors, or users misusing it or people being misled…. You could very easily build a version of this that will just spew out those crappy stories.”

AI experts have the concept of “alignment” to describe when an AI is “aligned” with human interests rather than peeling off to do its own thing, either naively or maliciously. The nature of models such as ChatGPT is that even their creators are unsure exactly how they are formulating their answers. There’s a real question about whether there are enough people working on the “alignment” problem. It reminds me of “moderation” in social media: it’s not something companies worry much about unless regulation or bad publicity makes them do so.

So there needs to be a wider conversation involving civil society organisations, governments and individual citizens about what we want our societies to look like. On a practical legal level, there are still issues around data, privacy, defamation, embedded bias and copyright to be worked out when it comes to AI models (some platforms such as Reddit are seeking payment for AI models training on their data). More generally, we humans derive meaning and selfhood from our families, communities, work and hobbies. It is not insignificant that in the future these things might be mediated or displaced by AI tools. This is not something that should be left to competitive and secretive private corporations to work out.

Perhaps things will get better. Perhaps, if handled carefully, society will be restructured, as Keynes had hoped, around increased leisure and AI-enabled plenty, although that seems like quite a jump from our unequal and productivity-obsessed status quo. Perhaps faced by a wave of AI-enabled cheating, educational institutions will find better ways to test how young people think rather than what they know (though you can’t do much thinking without some knowledge). Perhaps AI diagnostic tools will lead to huge medical advances. Perhaps we will have more time to do things we like with the people we love. All of this is contingent on making important choices now.

Barry Smyth notes that we have international ethics agreements regarding medicine, and suggests that we need something similar for AI. He echoes a lot of the experts when he says: “We need a serious conversation about the ethics of building artificial intelligence technology.”

In Harlan Ellison’s 1967 short story I Have No Mouth, and I Must Scream, a computer gains sentience and kills all of the human race except for five people it keeps alive to torture. That’s the type of scenario the people who signed the Future of Life letter are worried about. Maybe because it’s just too big to comprehend, that seems less imminent to me than the dystopia depicted in EM Forster’s 1909 short story The Machine Stops. In Forster’s story, humans live solitary, subterranean lives, delivering self-important lectures to their friends and family via a powerful machine that also keeps them alive. When the machine starts malfunctioning, there’s nobody left who understands how it works.

The technology behind AI is beyond most of us, but the sociological and political implications are not and should not be. We need to put human agency and human flourishing at the centre of any discussion about our increasingly automated society before we end up sidelined by either a malevolent sentient machine who hates us or, just marginally better, a bullshit-spewing automaton who reduces our world.