10 Ways AI Is Ruining Your Students’ Writing
And how to help them see that AI cannot craft good essays.
Originally published in the Chronicle of Higher Education by Wendy Laura Belcher on September 16, 2025
Many professors in the humanities are giving up on assigning papers. Working against the tsunami of AI writing is exhausting and disheartening. Those with heavy course loads can’t do it anymore.
But producing a generation that can’t write — which means, in a profound way, a generation that can’t think — is something my heart can’t take. What follows is the lecture I give my literature students on the 10 reasons why ChatGPT and other tools powered by large language models (LLMs) cannot help them write a good paper.
This lecture is not a prophylactic. I still get bad AI-assisted papers. But it provides comfort to students who wonder if they are Luddite losers for loving to write on their own. And for the rest, what else is teaching but telling students the truth, even when they don’t want to hear it?
No. 1: AI’s Polonius problem (stating the banal)
I never used to receive essays with banal arguments. I would get papers with no argument, or vague arguments, or totally off-base arguments, but never banal ones. Now I regularly get papers about the hero’s journey. The conflict between tradition and modernity. The individual against the community. How fixed boundaries between X and Y are destabilized by Z.
Make it stop!
LLMs are about predicting the next most likely word — which, by definition, is the most obvious. And literary analysis is precisely about what is not obvious.
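For the technically curious, here is a toy Python sketch of what "predicting the next most likely word" looks like in practice. The candidate words and their probabilities are invented for illustration; no real model is involved.

```python
# Toy illustration (invented numbers, no real model): greedy decoding
# always selects the single most probable continuation, which is by
# definition the most familiar, most obvious one.

# Hypothetical next-word distribution after the prompt "The hero's ..."
next_word_probs = {
    "journey": 0.62,       # the cliché wins
    "ambivalence": 0.07,
    "complicity": 0.05,
    "silence": 0.04,
}

def most_likely_next_word(probs: dict) -> str:
    """Return the highest-probability word, i.e. the most predictable one."""
    return max(probs, key=probs.get)

print(most_likely_next_word(next_word_probs))  # -> "journey"
```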
Students continue to believe that brainstorming with AI helps them get to good arguments, but I have not seen any proof of that. AI substitutes plot summary for analysis, invents plausible quotes, and equates argument with making broad generalizations about “culture” and so on. Hence the Polonius problem, named for the character in Hamlet who pompously states the obvious.
No. 2: AI’s windbag problem (bloated emptiness)
Here’s an AI-assisted example from a student’s paper: “Africa is home to some of the world’s most diverse literary works.”
That is a sentence. It is grammatically correct. It has no typos. It is snappily short. It is on topic for the course. And … it means nothing. AI tossed together extremely common phrases — “home to” and “some of” and “world’s most” — to sound good. Then it appended an empty phrase. What are “diverse literary works”? And what makes an individual work “the most diverse”? It deploys many genres? Includes words in many languages? Depicts many ethnicities interacting? In fact, very few texts do any of those things. Which doesn’t matter, because Africa could have two such texts and still have “some of” the world’s “most diverse” texts.
But let’s stop being coy. We all know what the AI — most likely developed in America — had in mind when it used the word “diverse.” It meant that African literary works have lots of Black characters. But Black people are not “diverse” in the African context; they are the norm. Further, if having Black people is the measure of diversity, then Africa is not home to “some of” but to “almost every single” such work.
Like a doll, this sentence is pretty but empty. And the biggest sign of its emptiness is that nothing about diversity (or race) is ever mentioned again in the paper. It is a dead-end sentence, common in AI-generated texts. Which brings me to my next point.
No. 3: AI’s variation problem (fragmenting unity)
Regular patterns aid readability. All the research shows that. Yet AI bots are designed to avoid repetition at all costs (it’s called the “repetition penalty”).
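For those curious about the mechanics, here is a simplified Python sketch of one common form of repetition penalty, in which the raw score of any token that has already appeared is divided by a factor greater than one before probabilities are computed. The tokens and scores are invented for illustration.

```python
import math

# Simplified sketch (invented scores): tokens already used in the text
# have their raw scores reduced, so a synonym becomes the likelier pick.

def penalized_probs(scores, history, penalty=1.5):
    """Down-weight already-used tokens, then convert scores to probabilities."""
    adjusted = {tok: (s / penalty if tok in history else s)
                for tok, s in scores.items()}
    total = sum(math.exp(s) for s in adjusted.values())
    return {tok: round(math.exp(s) / total, 2) for tok, s in adjusted.items()}

scores = {"Sunjata": 3.0, "the protagonist": 2.2, "the central figure": 2.0}
print(penalized_probs(scores, history={"Sunjata"}))
# Once "Sunjata" has been used, "the protagonist" becomes the top choice.
```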
AI-assisted papers often refer to something once by its proper name and then substitute it throughout the rest of the paper with referents. For instance, it will give the name of the epic’s hero, Sunjata, and then refer to him as the “main character,” and as the “protagonist,” the “central figure,” the “key player,” and so on. Each variation adds cognitive load for the reader: Are we still talking about the same person? This anti-repetition bias hinders the reader’s understanding.
Now I can’t really blame AI. This terrible advice about varying words is regularly given in composition classes, especially at the middle- and high-school levels. And sure, I am all for varying verbs and adjectives, adverbs and conjunctions. No one wants to read a string of sentences repeating the adjective “compelling.”
But AI tools always vary nouns as well — including that of the paper’s main subject. Indeed, they avoid repetition so much that they can’t keep a throughline. It’s why so many AI-assisted papers drift farther and farther from the announced subject, ending somewhere else entirely.
No. 4: AI’s Roman genitive problem (stringing together abstractions)
Many sentences in AI-generated papers consist of tossing a random literary term into a stream of Roman genitives (“x of y” phrases). Noun phrases are modular, so AI finds it easy to slot them into sentences and does so excessively. Like this line generated for me by Stanford University’s Storm app: “The novel The Palm-Wine Drinkard has been recognized for its contributions to both Nigerian literature and the global literary landscape, symbolizing the complexity of African narratives in the face of colonial legacies.”
Once again, we have a fancy yet nonsensical sentence. What is being symbolized? The novel? Its contributions? How does a novel symbolize narrative? How do agentless complexities face anything? The sentence has no meaning. Further, what’s the dreadful implication? Were African narratives simplistic before these violent legacies and now, post-European encounters, are complex and superior?
Another example of the misuse of literary terms is this line ChatGPT gave me: “The Virgin Mary giving water to a dog in her shoe is a metaphor for mercy.” But it’s not a metaphor for mercy; it is mercy itself. AI distorts meaning by sprinkling in literary terms.
No. 5: AI’s causation problem (connecting the unconnected)
AI frequently turns what emerges from a certain context into what creates that context. An example from an AI-assisted paper: “This surface reading of the story reinforces a common religious dichotomy.” But the interpreter’s reading doesn’t “reinforce” any principle. Rather, the reading depends on it. The bad reading “results from” a binary understanding of religion.
In other words, AI frequently misstates how one idea relates to another by suturing sentences together with common academic verbs like “highlights,” “underscores,” or “emphasizes.” The sentences are grammatical and can sound smart, but they distort what is affecting what.
No. 6: AI’s anti-human problem (removing the interpreter)
Consider this example I read in an AI-assisted student paper: “The folktale subverts its apparent message of Christian triumph by exposing moral contradictions at the heart of Christian practice.” The folktale did not deliberately subvert itself or bravely expose anything. Rather, the text’s interpreter noticed how the text failed to achieve its aim. This sentence should read something like, “This folktale’s overt Christian triumphalism is undercut by its exposure of Christianity’s failures.”
AI often generates sentences in which the text is doing something when, really, it is the human interpreter who is doing it through analysis. AI will take your ideas, erase your agency as the author, and present your texts (and abstractions) as agents. This is partly due to its phobia of the “I” in writing. AI is subtly teaching students to think of themselves as irrelevant in every possible way.
No. 7: AI’s inflation problem (evaluative adjectives)
Student papers never used to have a lot of adjectives. But AI-generated prose is hyper-adjectival — almost no noun passes without getting a positive or negative modifier. Here’s an example from a student’s AI-assisted paper: “From the vibrant Yoruba marketplace to the silent void of Elesin’s prison cell, the play unfolds as an exploration of liminality — those fragile thresholds where life and death, duty and hesitation, individual and communal all collide.”
Doesn’t that sentence sound elegant? When you started reading it, didn’t you think, “Hey, this is pretty good!”?
But it’s all wrong. For one, Elesin’s cell in Wole Soyinka’s Death and the King’s Horseman is anything but “silent” (wrong adjective). For another, duty and hesitation are not a semantically sensible pair and certainly don’t match the other pairs. For a third, why is the threshold where abstractions collide “fragile” (wrong adjective)? Do weak borders collapse under the pressure of colliding binaries such that thresholds are erased and individuals become communities and vice versa? In short, the insertion of adjectives into a stream of abstractions ends in nonsense.
No. 8: AI’s racism problem (blaming the victim)
When I recently asked ChatGPT for an argument about Amos Tutuola’s The Palm-Wine Drinkard, I was told that the novel “reframes the heroic journey as a pursuit of pleasure … rather than … moral duty.”
I never used to receive student papers that moralized. Now I constantly receive AI-assisted papers that read like sermons. They frequently judge Africans for failing to meet, according to one student paper, “Western ethical standards.” Many student papers now state that Africans, rather than colonialism, are responsible for their own terrible fates; that polygamy oppresses women; that a story’s impoverished widow, faced with a horrific choice, is “internalizing her helplessness” (whatever that means).
Last year, when I asked Google’s LLM Gemini to “give me a bad argument about African literature,” it generated this example: “The only authentic African literature is written in African languages.” On the face of it, that reply might not seem racist. But, in fact, AI deemed “bad” the most famous argument ever made about African literature, by the late Ngugi wa Thiong’o, who believed that Africans should write only in their own languages and told an interviewer that to write in the colonizer’s language was a form of “enslavement.” AI took this famous argument and flipped it in the racist direction. Undergraduates are not catching this kind of insidious racism and are duplicating it.
No. 9: AI’s plagiarism problem (stolen argumentation)
I’ve never before received so many essays with derivative arguments. In my most dramatic example, in a paper on Ferdinand Oyono’s brilliant novel, Une Vie de Boy, one student wrote that the novel had “something I like to call ‘quiet resistance.’”
Unfortunately for the student, AI was plagiarizing my argument (published in 2007 in LIT Magazine) about that very text and what I called its “indirect resistance.” To add insult to injury, I had lectured about this theory in class. So the student wasn’t listening, generated this paper, and did not check it.
AI makes students confident in all the wrong ways.
No. 10: AI’s flat-out wrong problem (basic facts twisted)
Google’s AI Overview will still tell you, despite my feedback, that “Famous African novels translated into English include classics like Chinua Achebe’s Things Fall Apart.” Of course, that book was not translated into English — Achebe wrote it in English. Which means that AI cannot get basic facts right about even the most famous African novel ever written.
I keep playing with AI, but the error rate is extraordinary. I have yet to ask one of these tools a single question without finding an error somewhere in the answer.
What next? I could go on. Like how AI wrongly inserts scientific terms into humanities writing. Or how it has a “not … but” tic (e.g., “Many of the stories center the Virgin Mary not as a distant intercessor but as a direct actor in domestic injustice”). I would love to know what else readers have noticed about AI writing.
Now if you’re thinking that this article will only help students to cheat better, don’t worry. I have repeatedly chastised AI for these flaws, and it still does all of them. Indeed, it frequently commits them in the very sentences it uses to agree with me about these flaws. That’s because the fundamental principle of AI is taking what is common (and clichéd) and turbocharging it. It cannot actually think; it can only string together predictable words and phrases.
So if you get an essay that is banal, bloated, and meandering — much less one with an absent student interpreter and ill-connected ideas that are judgmental, racist, plagiarized, and/or factually wrong — simply stop grading, send the student the link to this very page, and ask them to rewrite. Tell them that you don’t know whether they used AI to do the assignment, and that’s not the point. What you do know is that their essay has typical AI flaws and the student must rewrite it without those flaws to earn a passing grade. If all writing is in the rewriting, then students will learn something through this process.
Writing is thinking. That’s why we assign papers to our students. Let’s not give up on that.
Wendy Laura Belcher is a professor of African literature at Princeton University and the award-winning author of Honey from the Lion, Abyssinia’s Samuel Johnson: Ethiopian Thought in the Making of an English Author, and the best-selling Writing Your Journal Article in Twelve Weeks: A Guide to Academic Publishing Success.