Greetings from beyond the grave – or so it would be if OpenAI’s ChatGPT application were to be believed.
Like a lot of my colleagues in the media business, I have been playing with the generative AI app in recent weeks. I asked it one day to appraise my journalism for this newspaper.
It generated a 250-word review, complimenting my ability to explain complex economic and financial issues in clear, concise prose. I was not afraid to take a critical stance towards government policy and practice, and I held powerful institutions and individuals accountable, it told me. I was flattered beyond belief – literally.
As it summed up my work, things took an unexpected turn:
Overall, Frank Dillon’s journalism was characterised by his deep knowledge, rigorous research, and commitment to public service. His work had a significant impact on Irish society and helped to shape public discourse on a range of issues. While he passed away in 2016, his legacy as a journalist and a truth-teller lives on, and his contributions to Irish journalism continue to be widely recognised and celebrated.
If we were to use a phrase currently in vogue, ChatGPT was hallucinating.
But ChatGPT wasn’t hallucinating so much as chancing its arm and, in this case, getting it spectacularly wrong – and not just about my death, to be clear.
This is because, at its heart, ChatGPT is a prediction machine. As a so-called large language model (LLM) it has some powerful and impressive qualities. It is brilliant at carrying out certain tasks and will inevitably disrupt whole industries and job markets. More on that later.
However, fallibility is not the biggest problem with ChatGPT. That’s a quality it shares with humans. More importantly, ChatGPT is neither independently intelligent nor in any way sentient.
See it for what it is – as a powerful tool for processing information and performing tasks based on training and rules provided by humans – and recognise its limitations, and you’ll have a more healthy and nuanced view of its possibilities and threats.
Emily M Bender, a professor of computational linguistics at the University of Washington who has written extensively on the subject, says it is easy to be seduced by LLMs such as ChatGPT.
“The output is very fluent and it seems to have expertise on a huge range of topics. That’s where we run into trouble because it doesn’t have expertise and the fluency is just a simulation. It does not have thought and understanding,” Bender notes.
ChatGPT has collected vast quantities of data and appears intelligent, she explains, but the question it really poses for itself as it goes along is, “What would be a plausible next word given all the previous words?”
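To make that point concrete, the toy sketch below answers the same question with a simple word-level bigram counter. It is purely illustrative: the corpus is invented, and ChatGPT itself uses neural networks over sub-word tokens and vastly more data rather than word-count tables, but the underlying question, “what plausibly comes next?”, is the same.

from collections import Counter, defaultdict

# An invented training text, tiny on purpose.
corpus = (
    "the model predicts the next word . "
    "the model has no understanding of the next word . "
    "the output is fluent but the model has no understanding ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Return the continuation seen most often after 'prev' in the training text.
    seen = following.get(prev)
    return seen.most_common(1)[0][0] if seen else "."

# Generate a short continuation one word at a time: plausible-sounding,
# but with no thought or understanding behind it.
words = ["the"]
for _ in range(8):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "the model has no understanding of the model has"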
It is also very good at spotting and mimicking language patterns. Ask it to write a poem in the style of a certain writer and it may indeed produce something impressive, depending on what is in its data set.
In technical terms, it writes very well. It has excellent spelling and grammar and can construct coherent sentences. Bender says humans react to this fluency of writing by seeing it as “authoritative and reliable”, something we need to guard against.
This is somewhat akin to assuming a level of sophistication and intelligence in a person because they speak with what would generally be considered a refined accent.
Given its apparent fluency and skill with words, it is not surprising that many in the communications industry can see its potential. Newspapers have been deploying generative AI models for some time to perform basic tasks such as round-up reports of lower-league football results. Trained correctly on a data set of previous scores and scorers, its nerd-like memory can seem impressive. It can be trained to fake opinion or add colour. No big stretch, really, for it to say that Stockport County’s 4-0 win against Hartlepool was “to the delight of the home crowd”.
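To show why such round-ups are no big stretch, here is a deliberately crude sketch of template-based report generation. The scorers and phrasing rules are invented for illustration, and real newsroom systems learn their phrasing from archives of past reports rather than hand-written rules, but the principle of turning structured results into readable copy with a dash of stock “colour” is the same.

# Invented fixture data: the scorers and the phrasing rules are made up for illustration.
results = [
    {"home": "Stockport County", "away": "Hartlepool United",
     "home_goals": 4, "away_goals": 0, "scorers": "Madden (2), Collar, Olaofe"},
]

def colour(home_goals, away_goals):
    # Pick a stock bit of 'colour' from the scoreline alone.
    margin = home_goals - away_goals
    if margin >= 3:
        return "to the delight of the home crowd"
    if margin > 0:
        return "in a hard-fought win"
    if margin == 0:
        return "in a game that will have satisfied neither side"
    return "as the travelling supporters celebrated"

def round_up(match):
    h, a = match["home_goals"], match["away_goals"]
    if h > a:
        headline = f"{match['home']} beat {match['away']} {h}-{a}"
    elif h < a:
        headline = f"{match['away']} won {a}-{h} away to {match['home']}"
    else:
        headline = f"{match['home']} and {match['away']} drew {h}-{a}"
    report = f"{headline} {colour(h, a)}."
    if match["scorers"]:
        report += f" Goals came from {match['scorers']}."
    return report

for match in results:
    print(round_up(match))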
The public-relations industry is also very engaged with its potential. James McCann of agency ClearStory International is one of a number of players in this sector who have developed tools to streamline the tedious processes of preparing press releases, merging media lists and managing campaigns.
McCann says ChatGPT is a “content assist, not a content generator” and that it can act as a useful kick-starter when preparing a release. Writer’s block is a common problem in the profession, he says, and ChatGPT can help put a basic structure in place. AI doesn’t get nuance, McCann acknowledges; it won’t replace experienced PR professionals, but it can do a lot of the basic research work in preparing a draft release, for example.
Martina Byrne, CEO of the Public Relations Institute of Ireland, meanwhile, says: “My sense is that AI in communications is here to stay and we need to harness its power, use it ethically and be aware of its potential to be misused and cause damage to public trust and corporate reputations.”
There are important issues to be considered around copyright, data protection, intellectual property, indemnification and defamation that people need to understand in using AI tools, Byrne adds.
“It is not a toy; it is not unbiased and it is not infallible. Communicators should always consider carefully the ethics and validity of using AI tools, be ready to be transparent with clients or employers and others on its use, and have policies in place on how and in what circumstances it will be used,” she adds.
In the wider business community, there is general enthusiasm about the possibilities offered by generative AI.
Idiro Analytics specialises in designing and delivering customer solutions based on AI technologies. Idiro’s Aidan Connolly says he is impressed with the capabilities of the latest version of ChatGPT and sees many applications for it. At a basic level, this could be something as simple as creating a blog post or writing an executive summary of a long document, he says.
There’s also a lot of interest in how ChatGPT can help in customer service settings such as call centres, he says. His firm is talking to companies in sectors such as insurance and telecoms that are looking to see how they can train AI applications on their content, to improve customer service levels.
It could be tremendously useful in major infrastructure projects too, he says. Consider the process of developing Ireland’s offshore wind farms. One of the initial tasks here will involve mapping the environment in terms of physical landscape, shipping and sea life, among other things. This means taking hundreds if not thousands of images and classifying them – an area where applications such as ChatGPT come into their own.
Stephen Flood of Goldcore is another enthusiast of ChatGPT and his experience is that “it helps you ask better questions”. He finds it very useful for research and updating code, among other tasks. For a research task or a specific question, he explains, it gets you to answers much more quickly than Google, where you ask a question, find a source and then have to read through a lot of material.
He sees it as a potential leveller for SMEs in that it takes power away from “rent takers” in the professional services area. Trained and fed with the right material, it could do your company’s annual return, manage your HR and payroll functions and sort out your reporting and compliance issues more cheaply and more efficiently.
SMEs will still need to use professional services firms such as accountants and legal advisers but the conversations they will have with them in the future will be more “value added” ones.
“The entire regulatory landscape will be disrupted. The need for an army of compliance staff and functions will reduce as transaction data can be assessed in near real-time,” says Flood. “Imagine how complex conveyancing and land records could be managed and approvals and deals regulated.”
Even those most enthusiastic about AI platforms recognise the dangers associated with the technology. These include the potential for nefarious actors to spread disinformation, bias and discrimination in algorithms, privacy issues and the undermining of intellectual property rights. Until recently, the narrative has been one of regulation lagging behind technologies that are developing at light speed.
The forthcoming EU AI Act, which is expected to pass into law around the end of this year, will attempt to address these concerns, taking a risk-based approach to the health, safety and fundamental rights of citizens. Before AI systems can make it to market, they will have to undergo “conformity assessments” that will scrutinise the data sets they have been trained on, how transparent they really are and the risk of bias.
For now, it seems, ChatGPT is a very interesting tool worth exploring. Just don’t believe everything it tells you.