College professors don't know how to catch students cheating with AI


Leo Goldsmith, an assistant professor of screen studies at the New School, can tell when you use AI to cheat on an assignment. There's just no good way for him to prove it.
"I know a lot of examples where educators, and I've had this experience too, where they receive an assignment from a student, they're like, 'This is gotta be AI,' and then they don't have" any simple way of proving that, Goldsmith told me. "This is true with all kinds of cheating: The process itself is quite a lot of work, and if the goal of that process is to get an undergraduate, for example, kicked out of school, very few people want to do this."
This is the underlying hum AI has created in academia: My students are using AI to cheat, and there's not much I can do about it. When I asked one professor, who asked to remain anonymous, how he catches students using AI to cheat, he said, "I don't. I'm not a cop." Another replied that it's the students' choice whether they want to learn in class or not.
AI is a relatively new problem in academia — and not one that educators are particularly armed to combat. Despite the rapid rise of AI tools like ChatGPT, most professors and academic institutions are still resoundingly unequipped, technically and culturally, to detect AI-assisted cheating, while students are increasingly incentivized to use it.
Patty Machelor, a journalism and writing professor at the University of Arizona, didn't expect her students to use AI to cheat on assignments. She teaches advanced reporting and writing classes in the honors college — courses intended for students who are interested in developing their writing skills. So when a student turned in a piece clearly written by AI, she didn't realize it right away; she just knew it wasn't the student's work.
"I looked at it and I thought, oh my gosh, is this plagiarism?" she told Mashable.
The work clearly wasn't written by the student, whose work she had gotten to know well. And it didn't follow the journalistic guidelines of the course, either; instead, it sounded more like a research paper. Then, she read it out loud to her husband.
"And my husband immediately said, 'That's artificial intelligence,'" she said. "I was like, 'Of course.'"
So, she told the student to try again. She gave them an extension. And then the second draft came in, still littered with AI. The student even left in some of the prompts.
"[AI] was not on my radar," Machelor said, especially for the types of advanced writing courses she teaches. Though this was a first in her experience, it rocked her. "The students who use that tool are using it for a few reasons," she guessed. "One is, I think they're just overwhelmed. Two is it's become familiar. And three is they haven't gotten on fire about their lives and their own minds and their own creativity. If you want to be a journalist, this is the heart and soul of it."
Machelor is hardly the only writing professor dealing with assignments written by AI. Irene McKisson, an adjunct professor at the University of Arizona, teaches one online class about social media and another in-person class about editing. Because of the nature of the in-person course, she hasn't had a significant issue with AI use there — but AI use is rampant in her online course.
"It felt like a disease," McKisson told Mashable. "Where you see a couple cases and then all of a sudden there's an outbreak. That's what it felt like."
So, what would McKisson tell students using AI to cheat?
"First of all, you signed up for the class," McKisson said. "Second of all, you're paying for the class. And third of all, this is stuff that you're actually going to need to know to be able to do a job. If you're just outsourcing the work, what is the value to you?"
Why is it so hard for professors to catch AI cheating?
While AI detectors exist, they are unreliable, leaving professors with few tools to definitively identify AI-generated writing.
The technology is new, which means the detectors are new too, and there isn't much research on their efficacy yet. What does exist isn't reassuring: one paper in the International Journal for Educational Integrity found that "the tools exhibited inconsistencies, producing false positives and uncertain classifications." And, as with most tech, the results vary depending on the writer. A study in Computation and Language, highlighted by the University of Kansas' Center for Teaching Excellence, found that AI detectors are more likely to flag the work of non-native English speakers than that of native speakers. The authors argued "against the use of GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers."
As Goldsmith said, you can usually tell if something is written by AI — it's just really tough to prove it.
Of course, tech could be both the problem and the solution — tech fighting tech. After the AI cheating startup Cluely went viral, other startups like Truely and Proctaroo raced to build tools that could reliably catch it.
Paul Vann, the cofounder of Truely, told Mashable that "resoundingly, people are worried" about AI and cheating. "People don't know how to deal with this type of thing because it's so new, it's built to be hidden, and frankly, it does do a good job at hiding itself." Truely, he claims, catches it.
Both Truely and Proctaroo can tell if an AI system is running in the background on a student's computer, but even their creators admit these tools aren't silver bullets. What if the AI-written assignment is an essay turned in as a hard copy? That's a bit tougher.
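For the curious, here's roughly what that kind of background check can look like. The sketch below is a minimal, hypothetical illustration of the general approach — scanning the local process list for known AI-assistant apps — and not Truely's or Proctaroo's actual method; the watchlist names are invented for the example.

```python
# Hypothetical sketch: flag running processes whose names match a
# watchlist of AI assistants. Illustrative only; real proctoring tools
# are far more sophisticated than a simple name match.
import psutil

# Invented example names; not an actual product watchlist.
WATCHLIST = {"cluely", "chatgpt", "copilot"}

def flagged_processes():
    """Return (pid, name) pairs for running processes matching the watchlist."""
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if any(target in name for target in WATCHLIST):
            hits.append((proc.info["pid"], proc.info["name"]))
    return hits

if __name__ == "__main__":
    for pid, name in flagged_processes():
        print(f"Possible AI assistant running: {name} (pid {pid})")
```

Even a check this simple is trivial to evade: rename the process, use a browser tab, or run the assistant on a phone sitting next to the laptop.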
As AI gets better, detection may always be a step behind — the real answer might lie in rethinking how assessments are designed, not just in how closely students are surveilled.
Blurred boundaries: When is using AI considered cheating?

There are definitely students who want to use AI specifically to cheat. But because the use of generative AI in school is so new, it's also hard to know what counts as "cheating." Is it cheating to use spellcheck? Is it cheating to use AI to brainstorm? Where is the line?
"Professors have started to include statements about AI use in their syllabi, I have noticed in the past year," Sarina Alavi, a psychology PhD student and content creator at @psychandeducation, told Mashable. "Some are completely against it while others kind of say, 'Well, it’s fine to use, but just know the output is usually poor quality and remember plagiarism policies.'"
But institutions are behind the curve. There are often no standardized policies or training for professors.
For instance, Harvard's guidelines on the intersection of generative AI and academic integrity say only that specific schools should develop and update their policies "as we better understand the implications of using generative AI tools."
"In the meantime, faculty should be clear with students they’re teaching and advising about their policies on permitted uses, if any, of generative AI in classes and on academic work. Students are also encouraged to ask their instructors for clarification about these policies as needed," the guideline reads.
Yale's AI guidelines and the University of Arizona's guidelines, for example, say basically the same thing, leaving teachers with the tough job of deciding what to do with AI in their own classrooms.
"It's an academic freedom thing," McKisson said. "Your professor is free to teach their class however it needs to be taught. That's baked into the culture of academia, which I think is great."
It's helpful to have guidance, she said, and the schools offer some. What they don't provide is practical advice on how to effectively catch and combat AI cheating. McKisson, Machelor, and Goldsmith have all added lines to their respective syllabi telling students they can't use AI to complete assignments for them, but each had to find that language on their own. McKisson, for her part, found hers in a "Reddit thread of professors from all over the country who were talking about this issue."
"There was a whole discussion about rubrics, and I was like, 'Oh my gosh! That's it. That's the way to curb some of this, is to use the rubric to give people [who use AI] zeros,'" she said. "[Students are] going to keep doing it unless there's a negative consequence."
All this ambiguity has led some educators to panic over a student cheating epidemic with no clear cure. Tech is advancing faster than policy, and it's hard for schools to keep up with the AI tools students are using. It's confusing for students and professors alike. As the U of A's guidelines read, "Students may not be aware that AI policies can and will vary between courses, sections, instructors, and departments, so take time to support them in understanding and abiding by different policies."
Alavi says that she uses AI for some class readings by uploading the PDF and asking AI for summaries, key takeaways, and talking points, which saves her "a lot of time because I can quickly read articles and not have to re-read them before class to have solid points to bring to class discussions." For writing, she might use AI for inspiration or if she's stuck on a transition sentence. "Of course, if I use anything generated, I’d put it in my own words because I find the output to sound robotic and generic," she said.
For some professors, it's even more clear-cut.
"If you're using it to write a paper for you, then of course I would consider that cheating," Goldsmith said. "But cheating is a mild word. It's just pointless. It's a waste of a huge amount of money that the students are paying or incurring as debt, in some cases lifelong debt. But it also just doesn't get you anywhere. And it's been very easy to spot as an educator."
Why do students use AI?

It's finals week, and you're staring down the barrel of despair. Over the next two days, you'll have to write three essays, take one test online, and take one multiple-choice final in person. You have a project to do. You have makeup assignments to turn in. You have to maintain your GPA or you'll go on academic probation. There aren't enough hours in the week to both succeed and sleep, but generative AI could write your three essays, take that online test, and make flashcards for your multiple-choice final faster than you could make dinner. And you know your professors can't catch you because there's no simple way to prove ChatGPT wrote your essay.
For students facing academic and financial pressure, AI can seem more like a productivity tool than cheating. And, of course, everyone else is using it.
Would you be able to avoid the pull?
Alavi can, for the most part. She likes the subjects she's studying, wants to actually learn, and knows AI can't replicate that. Still, she says she understands the impulse for "students who are introduced to AI in high school or college" to use or rely on it. Thankfully, she says, she got a decade of academic training without it.
"I also really respect the time and intention my professors are putting into creating assignments with the purpose of promoting student learning, and I think relying on AI would not honor their hard work and also take away from my learning," Alavi said.
As Goldsmith says, "the whole purpose" of going to school "is to learn." If using AI is getting in the way of your ability to learn, there are other questions to ask.
"The hard work of writing and the hard work of reading and discussing is what the whole purpose of education is," Goldsmith said. "It's not to learn facts."
Goldsmith, who teaches screen studies, admits that his students, for the most part, hate the use of AI in art. But that doesn't stop them from using it for assignments. Why? "Because writing is hard."
"Writing is a pain in the ass," he said. "Nobody likes to write. You are a writer. I am a writer. We hate writing."
What could actually stop AI cheating?
For some college professors, a greater focus on pedagogy is the way to move forward. More in-class writing, more oral work, more iterative drafts, more pencil-and-paper tests, and maybe even promoting the use of AI for specific aspects of assignments.
Ironically, the most effective way McKisson has found to curb the use of AI is to, well, use AI.
"I actually fed every single one of my sets of discussion questions for the whole semester into ChatGPT and I asked it to help me AI proof it as much as I could," McKisson said. And it worked. Now her students have to send screenshots of social media posts and submit works cited and other work that ChatGPT can't necessarily do particularly well. After she implemented those changes, fewer students blatantly used AI, and she was left less frustrated.
Or perhaps we should rethink what the true value of education is. Goldsmith points out that if a degree is truly just "valued now as a piece of paper that you spend a lot of money on," perhaps we should all do a bit of reflection.
"It's inevitable that AI will be used and used productively in lots of fields," Goldsmith said. "But the push is something that may need to be resisted. Who's benefiting from it? And why?"
And, as McKisson said, the answer can't simply be to punish students who use AI while pretending it isn't here for the long haul. She approaches teaching as a "partnership" between educator and student, and AI is forcing educators to "rethink how we teach and what the partnership agreements are like."
"My bigger question is how do you redesign higher education?" McKisson said. "We're not gonna solve it today… But the way we have designed a large chunk of higher education, especially the online-only stuff, is not going to work because it's so easy and cheap and rewarding to use AI tools."
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.