While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.” It explored the morality of burka bans with clean paragraphs, fitting examples and rigorous arguments.
A red flag instantly went up.
Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences – and, in this case, had written the paper.
Alarmed, Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.
“What’s happening in class is no longer going to be, ‘Here are some questions – let’s talk about it between us human beings,’” he said, but instead “it’s like, ‘What also does this alien robot think?’”
Across the US, university professors such as Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.
The moves are part of a real-time grappling with a new technological wave known as generative artificial intelligence. ChatGPT, which was released in November by the artificial intelligence lab OpenAI, is at the forefront of the shift. The chatbot generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction – and their schoolwork.
That has upended some middle and high schools, with teachers and administrators trying to discern whether students are using the chatbot to do their schoolwork. In the US, some state school systems, including in New York City and Seattle, have since banned the tool on school wifi networks and devices to prevent cheating, although students can easily find workarounds.
In higher education, colleges and universities have been reluctant to ban the AI tool because administrators doubt the move would be effective and they don’t want to infringe on academic freedom. That means the way people teach is changing instead. “We try to institute general policies that certainly back up the faculty member’s authority to run a class,” instead of targeting specific methods of cheating, said Joe Glover, provost of the University of Florida.
“This isn’t going to be the last innovation we have to deal with,” Glover said. That’s especially true as generative AI is in its early days. OpenAI is expected to soon release another tool, GPT-4, which is better at generating text than previous versions. Google has built LaMDA, a rival chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley start-ups are also working on generative AI tools. An OpenAI spokesperson said the lab recognised its programmes could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.
At many universities, ChatGPT has now vaulted to the top of the agenda. Administrators are establishing taskforces and hosting university-wide discussions to respond to the tool, with much of the guidance being to adapt to the technology.
At schools including George Washington University in Washington, DC, Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, professors are phasing out take-home, open-book assignments – which became a dominant method of assessment during the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.
Gone are prompts such as “Write five pages about this or that.” Some professors are instead crafting questions they hope will be too clever for chatbots and asking students to write about their own lives and current events.
Universities are also aiming to educate students about the new AI tools. The University at Buffalo in New York and Furman University in Greenville, South Carolina, said they planned to embed a discussion of AI tools into required courses that teach entering or freshman students about concepts such as academic integrity. “We have to add a scenario about this, so students can see a concrete example,” said Kelly Ahuna, who directs the academic integrity office at the University at Buffalo. “We want to prevent things happening instead of catching them when they happen.”
The misuse of AI tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. The plagiarism detection service Turnitin said it would incorporate more features for identifying AI, including ChatGPT, this year.
More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect AI-generated text, said Edward Tian, its creator and a senior at Princeton University.
Brainstorm and debug
Some students see value in embracing AI tools to learn. Lizzie Shackney (27), a student at the University of Pennsylvania’s law school and design school, has started using ChatGPT to brainstorm for papers and debug coding problem sets. “There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is understanding what the code means.”
But she has qualms. ChatGPT, Shackney said, sometimes incorrectly explains ideas and misquotes sources. The University of Pennsylvania also hasn’t instituted any regulations about the tool, so she doesn’t want to rely on it in case the school bans it or considers it cheating, she said.
Other students have no such scruples, sharing on forums such as Reddit that they have submitted assignments written and solved by ChatGPT – and sometimes done so for fellow students. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems. One video shows a student copying a multiple-choice exam and pasting it into the tool with the caption: “I don’t know about y’all but ima just have Chat GPT take my finals. Have fun studying.”
This article originally appeared in The New York Times
‘I was in awe’: Irish academics react to AI threat on campus
The emergence of an artificial intelligence (AI) tool which can produce essays within seconds has sparked alarm on college campuses and prompted many Irish higher education institutions to revamp their policies on academic integrity and how they assess students.
ChatGPT, released in November by the artificial intelligence lab OpenAI, generates accurate and nuanced text in response to short prompts. Videos are circulating online with millions of views which show students using it to write assignments.
Quality and Qualifications Ireland (QQI), the watchdog for standards in Irish higher education, said many higher education institutions have already initiated full reviews of policies in relation to assessment and academic integrity.
“It is a matter for institutions to take time to explore the impact of these tools on the system and understand how they may harness new technological tools such as ChatGPT, while balancing out any potential risks to academic integrity,” a QQI spokeswoman said.
The National Academic Integrity Network, a group of Irish academics established by QQI, met last month to discuss ways to adapt assessments in order to minimise the threat of cheating and guidance for students about the risks and ethics of these tools.
Billy Kelly, the network’s chair and former dean of teaching and learning at DCU, said he was stunned by the power of ChatGPT to produce well-written essays within seconds.
“I was in awe,” he said. “You’re getting pretty fluent answers back ... This has moved the dial on assessment.”
He said he entered an essay title on ChatGPT for a history module of the type routinely given to first-year students.
“Within five seconds it produced what was a credible first stab at the question. On the face of it, pretty good journalism with references, although it wasn’t quite an academic article,” he said.
Articles were less successful, he said, when there were very specific essay titles with less published information to draw on, while there was also a risk of facts or quotes being misrepresented.
ChatGPT – the GPT stands for “generative pre-trained transformer” – is part of a new wave of artificial intelligence. It is the first time such a powerful tool has been made available to the general public through a free and easy-to-use web interface.
An OpenAI spokesperson has said the lab recognised the tool could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.
Academics are hopeful that detectors will soon be available to root out cheating.
Turnitin, a plagiarism detection service widely used by colleges, said recently it would incorporate more features for identifying AI, including ChatGPT, later this year.
A plenary session of the National Academic Integrity Network last month included discussions of AI chatbots, and members from Atlantic Technological University and Griffith College shared information on work they are undertaking in this area.
Mr Kelly said the mood among higher education staff at the meeting was “somewhere between alarm and concern”.
“Some see it as an existential threat to assessment, but it’s only a threat if we don’t adapt,” he said.
The new technology has already prompted some colleges to plan changes in how they assess students in the next academic year, with a greater emphasis on oral presentations, along with more specific and personalised essay titles.
While some education institutions abroad have banned the technology, many colleges are opting to prepare students for the growing use of this technology.
A QQI spokeswoman said Irish higher education institutions are free to draw up their own policies on how AI is used on campus.
“Artificial intelligence can be used as an educational tool and students will need to understand how to use AI technology legitimately. It is important that institutions clearly communicate to their students under what circumstances the use of artificial intelligence and other tools will be considered a threat to academic integrity,” she said.
The National Academic Integrity Network is holding a meeting next month titled “What to do about AI text generators? Next steps for educators”. It will feature contributions from a US-based academic and a discussion on practical measures educators should consider.
The QQI spokeswoman said students outsourcing their work to an AI system is just as problematic as students outsourcing their work to a person providing a contract cheating service or a relative.
“At this stage, we understand that AI systems still have some limitations – for example, they can be weak on referencing. Understanding these weaknesses may provide institutions with a way of designing their assessments to combat the risk of cheating using artificial intelligence. Artificial intelligence may also lead to the prioritisation of higher-order learning that cannot be automated,” she said.
While flaws with the current technology have been flagged, OpenAI is expected to soon release another tool, GPT-4, which it says will be better at generating text than previous versions. Google has built its own chatbot, LaMDA, and several other start-up companies are working on similar forms of generative AI tools. – Carl O’Brien