The other day I met a British academic who said something about artificial intelligence that made my jaw drop.
The number of students using AI tools like ChatGPT to write their papers was a much bigger problem than the public was being told, this person said.
AI cheating at their institution was now so rife that large numbers of students had been expelled for academic misconduct — to the point that some courses had lost most of a year’s intake. “I’ve heard similar figures from a few universities,” the academic told me.
Spotting suspicious essays could be easy, because when students were asked why they had included certain terms or data sources not mentioned on the course, they were baffled. “They have clearly never even heard of some of the terms that turn up in their essays.”
But detection is only half the battle. Getting administrators to address the problem can be fraught, especially when the cheaters are international students who pay higher fees than locals. Because universities rely heavily on those fees, some administrators take a dim view of efforts to expose the problem. Or as this person put it, “whistleblowing is career-threatening”.
There is more at stake here than the injustice of cheats getting an advantage over honest students. Consider the prospect of allegedly expert graduates heading out into the world and being recruited into organisations, be it a health service or a military, where they are put into positions for which they are underqualified.
So how widespread is the cheating problem?
Panic about ChatGPT transforming educational landscapes took off as soon as the tool was launched in November 2022 and since then, the technology has only advanced. As I type these words, colleagues at the Financial Times have reported that OpenAI, which created ChatGPT, and Meta are set to release souped-up AI models capable of reasoning and planning.
But AI’s exact impact on classrooms is unclear.
In the US, Stanford University researchers said last year that cheating rates did not appear to have been affected by AI. Up to 70 per cent of high school students have long confessed to some form of cheating and nearly a year after ChatGPT’s arrival that proportion had not changed.
At universities, research shows half of students are regular generative AI users — not necessarily to cheat — but only about 12 per cent use it daily.
When it comes to the number of student essays written with the help of AI, rates appear relatively steady, says Turnitin, a plagiarism-detection software group that has a tool for checking generative AI use.
It says that in the past 12 months students have submitted more than 22 million papers showing signs of AI help, 11 per cent of the total it reviewed. More than six million papers, or 3 per cent of the total, were at least 80 per cent AI-written.
That is a lot of papers. But the percentage of AI writing is virtually the same as what Turnitin found last year when it conducted a similar assessment.
“AI usage rates have been stable,” says Turnitin chief executive Chris Caren. And as he told me last week, just because you are using ChatGPT does not necessarily mean you are cheating.
“Some teachers and faculty allow some level of AI assistance in writing an essay, but they also want that properly cited,” he says. “AI can be incredibly useful for doing research and brainstorming ideas.”
I’m sure this is correct. It is also true that university faculty are increasingly using AI to help write lesson plans and I know of some who have tested it to mark essays — unsuccessfully.
But I still find it worrying to think a sizeable number of students are using tools like ChatGPT in a way that is potentially risky for employers and wider society.
Some universities are already increasing face-to-face assessments to detect and discourage AI cheating. I am sure that will continue, but it would also be useful if academics were encouraged to expose the problem and not deterred from trying to fix it. As the scholar I spoke to put it, the purpose of going to university is to learn how to learn. These institutions are supposed to teach you to think for yourself and evaluate evidence, not just recite facts and figures.
Anyone who outsources their thinking to a machine is ultimately going to hurt themselves the most. — Copyright The Financial Times Limited 2024