Forty years ago, the tenth World Computer Congress was held at Trinity College Dublin, and one of the topics for discussion was artificial intelligence (AI). This was at a time when, according to Patrick Winston, then head of the AI laboratory at the Massachusetts Institute of Technology (MIT), AI was in the business of “making computers smart”, an answer to the question of the day: “Is it possible to make a computer intelligent?”
Dick Ahlstrom, this newspaper’s “computer world” journalist at the time, elaborated on what, for many readers, was a puzzling novelty. AI was about “expert systems” – used, for example, to configure computer systems or plan a computer room layout – and he noted that the information these systems provided was really only of use to experts: “AI in general does not attempt to replace the human element. Far from it in fact because the systems thus far developed ... can only help to speed up the boring and time-consuming areas of an expert’s job.”
In the language of that era, Winston preferred to characterise the AI systems as “idiots savants” (what would today be labelled Savant Syndrome), implying they demonstrated a great deal of knowledge about a subject, but in a very tightly defined way.
Yet there were issues raised at that TCD conference in 1986 about the potential complications of AI. Some of the suggestions seem almost quaint now. Ingela Josefson, of the Swedish Centre for Working Life, argued that the development of AI “posed particular problems for women, who in the western world have traditionally preferred to deal with people rather than things and symbols”. A characteristic feature of women, she said, was the ability to be “communication oriented”, though she acknowledged such a characterisation might not be valid for all women.
The earlier pioneers of AI were also based in Massachusetts, among them the computer scientists Marvin Minsky and Joseph Weizenbaum in the 1960s. The potential contribution of AI to the guidance of administrative and military systems made the work a magnet for funding. But Weizenbaum, who created the first chatbot, ELIZA, to simulate human conversation, became disturbed at people surrendering their agency to automated systems. His 1976 book, Computer Power and Human Reason, warned of the immorality of assuming that computers could do anything if equipped with enough processing power and programming.
Another AI pioneer was the computer scientist John McCarthy of Stanford University in California. An indication of the Irish connection to the AI backstory came in 1995, when McCarthy spoke at a conference in University College Cork honouring the legacy of the Cork mathematician George Boole, regarded by many as a progenitor of modern computing because Boolean algebra underpins the logic of digital circuits.
Is there a happy conclusion to this story? Not according to the journalist Karen Hao in her book published last year, Empire of AI: Inside the Reckless Race for Total Domination. It excavates OpenAI, the organisation established in California in 2015 by Sam Altman, Elon Musk and others, which went on to develop ChatGPT to generate “human-like” text and conversation.
ChatGPT is the scourge of educationalists and others who care about the links between research, writing, intellectual autonomy, truth and accuracy. The empire Hao documents is built on an obscene concentration of fundraising and wealth, the exploitation of labour, climate damage from the energy and water demands that underpin it, the erosion of democracy and the theft of original, creative work. It is not a question of technology facilitating greater democratisation, but of technology as a tool of authoritarians.
As Hao sees it, OpenAI “became everything it said it would not be. It turned into a non-profit in name only, aggressively commercialising products like ChatGPT and seeking unheard-of valuations. It grew ever more secretive, not only cutting off access to its own research but shifting norms across the industry to bar a significant share of AI development from public scrutiny. It triggered the very race to the bottom that it had warned about, massively accelerating the technology’s commercialisation and deployment without shoring up its harmful flaws or the dangerous ways that it could amplify and exploit the fault lines in our society”.
Concerns about the relentless march of AI were expressed in a number of different contexts this week. The Cabinet was said to be considering the need to “review” legislation regarding how AI can be used for intimidation and harassment. The Department of Finance suggested the Irish labour market is particularly vulnerable to AI, given the “high concentration of employment in knowledge-intensive sectors such as ICT, financial services and professional activities”.
TCD academic Abeba Birhane, director of the university’s AI Accountability Lab, told an Oireachtas committee that generative AI, including ChatGPT, is a “major threat to truth, democratic processes, information ecosystems, knowledge production and the entire social fabric itself”.
Weizenbaum warned 50 years ago that “no computer can be made to confront genuine problems in human terms”. Unfortunately, that race to the bottom has only deepened.