AI is making it more and more difficult to detect plagiarism and to reward originality, and what better way to demonstrate that than with a paper written by an AI itself? We’ve all heard about ChatGPT and GPT-4 and all the madness that generative AIs are bringing, but surely researchers would know the difference, right? Right?
Wrong. Debby Cotton, director of academic practice at Plymouth Marjon University, proved it with an academic paper that neither she nor her colleagues wrote. Nope, ChatGPT wrote it.
Aptly entitled “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT,” the paper was submitted to the journal Innovations in Education and Teaching International, where it was peer-reviewed by four experts. Initially, the researchers didn’t tell anyone that the paper was written by the AI — and no one figured it out.
“We wanted to show that ChatGPT is writing at a very high level,” Cotton told The Guardian.
“This is an arms race,” she added. “The technology is improving very fast and it’s going to be difficult for universities to outrun it.”
In that arms race, the text-generating AIs are currently coming out on top. Neither the automated flagging nor the human reviewers could tell that the paper was AI-generated, and several studies have already highlighted this lag. Eventually, the authors informed the journal’s editors and the paper was flagged accordingly. Still, it serves as a stark warning: not even the academic environment is safe from AIs.
“The use of artificial intelligence in academia is a hot topic in the education field,” the AI-written paper reads. “The use of chatAPIs and GPT-3 in higher education has the potential to offer a range of benefits, including increased student engagement, collaboration, and accessibility. However, these tools also raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism. This paper examines the opportunities and challenges of using chatAPIs and GPT-3 in higher education, with a focus on the potential risks and rewards of these tools and the ways in which universities can address the challenges they pose.”
Granted, there are a few caveats. The technology is still very new, and while it’s bound to be disruptive, detectors could catch up to it. Another caveat is that peer reviewers aren’t really used to looking for AI plagiarism; as a side note, they usually work unpaid.
This comes at a time when universities are scrambling to make sure students don’t cheat using ChatGPT. But if even academics can slip an AI-written paper past peer review, how can anyone enforce the rules on students?
“My colleagues are already finding cases [of AI-assisted cheating] and dealing with them,” Irene Glendinning, head of academic integrity at Coventry University in England, told the Guardian.
“We don’t know how many we are missing,” she added, “but we are picking up cases.”
It’s not just university-level education, either. Because ChatGPT is so easy to access, students in secondary or even primary school have started using it. In fact, also according to The Guardian, some professors only catch cheating students because the quality of the writing is suspiciously good. But even with a strong suspicion of cheating, proving it beyond doubt is nigh impossible. So until detectors can flag AI usage effectively, universities and schools will continue to scramble.
There’s no guarantee they will ever be able to do so because, as Cotton mentioned, it’s an arms race, and we can expect AIs to become even more subtle as time goes on. For now, one thing’s for sure: plagiarism just got way trickier to detect.