Fri. Dec 5th, 2025

Why AI Writing Sounds Right, but Feels Wrong

In 2020, The Guardian published an unsettling article by an author no one expected. The piece, boldly headlined “A robot wrote this entire article. Are you scared yet, human?”1, was produced not by a journalist but by OpenAI’s early large language model (LLM), GPT-3. The result was surprisingly coherent, confident, and even a little charming. “I am not a human,” it declared. “I am a robot. A thinking robot.”

A few days later, another Guardian article was published – this time, written by a human. In direct response to the first, it was headlined “A human wrote this article. You shouldn’t be scared of GPT-3”2. Lawyer and technologist Albert Fox Cahn assured readers that the software behind the first article was certainly impressive, but ultimately nothing to lose sleep over – not yet, anyway.

Now, in 2025, LLMs have evolved dramatically. They are no longer just experiments or curiosities. They can be found almost everywhere – embedded within search engines, social media platforms, schools, workplaces, and even creative tools. Today, AI can generate text that sounds right – grammatically correct, perfectly structured, sometimes even better than what some humans can write. Yet, there’s something about it that just feels wrong. It might read like writing, but it misses something so fundamental that the whole prose feels hollow.

If AI can generate text that is indistinguishable from human writing, what does that reveal about the nature and purpose of writing itself? The answer isn’t whether AI writes “well” or “badly.” It’s about what makes writing, writing. Writing is a fundamentally human act: a process of thinking, feeling, and taking responsibility for what we say – what it has always been, all along.

“AI challenges us, not because it ‘writes better’ but because it forces us to remember why we write at all.”

When AI writing tools first came about, many dismissed the anxiety surrounding them. After all, the development of LLMs wasn’t the first time technology changed the way we write. Earlier tools such as the typewriter and spell-check steadily made the writing process more efficient. As Cahn argues in his 2020 Guardian article, AI is a collaborator, not a competitor – just the next entry in a long list of tools that make writing easier.

More recently, some educators and writers have embraced this perspective. In 2023, Laura Hartenberger, a writing teacher at the University of California, explored what AI teaches us about good writing3. She discusses the technical factors that distinguish mediocre writing from excellent writing. Like Cahn, she suggests that AI is ultimately a tool for efficiency – one that teaches us to value strong writing practices. We’re forced to recognise that clarity, structure, and conciseness are key to “good writing.” But human consciousness and lived experience are ultimately what make one piece of prose stand out from all the others. AI highlights the fundamentals of good writing but always seems to miss what makes writing, writing at all.

We write to share information, tell stories, and convey ideas through language. In this sense, there is no doubt that AI can “write”. It can assemble and organise words in a way that makes sense and is believable. Yet, the uncomfortable truth is that AI-generated text is merely an output – a product of a model trained on trillions of words of text to predict which word comes next.

Take GPT-3’s article, for instance. It made perfect sense and was grammatically correct, but something essential was missing. The AI promised it wouldn’t harm humans, yet the promise meant nothing because there was no entity capable of keeping it. It claimed to have no desire to destroy us, but as an AI, it has no desires at all. The prose felt hollow because there was no one behind it. The words felt empty because they were strung together by prediction.

Journalist Terry Nguyen further explores this fundamental limitation. LLMs can be instructed to generate a variety of texts, from fiction to code to advertisements. As she summarises, “anything that contains words is fair game.”4 The issue is that these LLMs operate purely on statistical patterns, rather than comprehension or lived experience. She further suggests that the modern writer is increasingly expected to function like a text-generator, “programmed to be predictable” and “horribly monotone.”5 Will we allow mediocrity to define the future of writing?

When George Orwell wrote his 1946 essay, “Why I Write,”6 he identified four motives for writing: sheer egoism (the desire to be remembered), aesthetic enthusiasm (finding beauty in language), historical impulse (recording truth), and political purpose (changing the world). Every single motive requires desire, purpose, and most importantly, consciousness. AI has none of these. It cannot reproduce what makes writing matter – only the shape of it, never the soul. It’s simply mimicry without meaning.

Writing as a Cognitive Process

Illustration by Robert Neubecker

Most people misunderstand what writing does. The common assumption is that writing communicates thoughts you already have – that it’s about taking clear ideas from your mind and putting them on a page. But it’s never been that simple. Writing is how humans think through language. Orwell wrote because he discovered a lie he wanted to expose, a fact he wanted to draw attention to, or, most simply, because he wanted to be heard.

I write because I believe that writing is how you discover your thoughts. It involves introspection. When you sit down to write something difficult – whether it’s fiction or an academic essay – your thoughts are never clear-cut. You begin with confusion, contradictions, and half-formed opinions. The act of writing forces you to untangle that mess. You figure out your story or argument by trying to articulate it. You discover what you believe when the words on your page stare back at you.

“Writing starts in the brain – it’s a cognitive and emotional process, not just a way to communicate.”

The process of writing involves more than just motives, desires, and purpose – it’s a dynamic flow of thoughts, reflections, and inner conflict that ultimately makes writing human. AI cannot replicate this process. ChatGPT doesn’t struggle with an idea, then have a breakthrough while drafting. It doesn’t write a sentence and realise the entire approach was wrong. It calculates probabilities and predicts words. That’s not thinking – it’s statistical pattern-recognition.

This is why AI-generated text can sound proficient but still feel lifeless. The sentences are fine, but they lack substance precisely because there is no thought process behind them. There’s no discovery. No cognitive labour. Just prediction.
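The prediction described above can be sketched in miniature. The toy model below – a hypothetical bigram frequency table, vastly cruder than any real LLM – makes the point concrete: the next word is chosen by counting what usually follows, not by understanding anything.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent successor. Real LLMs use
# neural networks trained on vast corpora, but the underlying move is
# the same: pick words by statistical likelihood, not by understanding.
corpus = "the robot wrote the article and the robot felt nothing".split()

# Build a bigram frequency table: word -> counts of the words after it.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "robot" follows "the" twice, "article" once
```

However large the table grows, the model never knows what a robot is – it only knows what tends to come after the word.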

Writing Requires Morality and Ethics

Consider this hypothetical: if AI writes something false or harmful, who is held accountable? Not the AI – it doesn’t know what it is saying. It has no understanding of truth, no awareness of consequences. It was just predicting the next likely set of words. Can we blame the creator of the AI? Technically, they didn’t write the harmful speech. Who, then, is responsible?

When you put your name on something you’ve written, you’re making a claim. “I’ve thought about this. This is my opinion. This is what I believe. These are words I stand by.” It means you take full responsibility for what you said.

Scholar John W. Snapper argues that plagiarism (failure to give credit) matters more than piracy (infringement of copyright)7 – precisely because writing involves an ethical dimension. When you pass off someone else’s words as your own, you’re not only stealing content – you’re claiming authorship of, and responsibility for, something that was never yours to begin with. More recently, researcher Radhika Kapur explores “writing etiquette”8 – the idea that good writing requires moral awareness, social responsibility, and ethical consideration. These aspects are not just nice additions to writing. They’re part of what writing is.

AI has none of these things. AI cannot take authorship over what it writes. It cannot be held accountable. It writes with complete neutrality, but with no morality or ethics – its prose merely exists in a moral vacuum.

The Beauty in Human Error

Strangely, the writing that really matters is almost always imperfect or messy – sprinkled with mistakes that make it unmistakably human. As an educator, Hartenberger reveals that the students most suspected of cheating are often those who produce work with “a total absence of typos and grammatical flubs.” Yet perfection is almost never the outcome. Some of us can certainly get close – many write to practise, to improve, to “perfect” their voices, techniques, or workflows.

We can strive for perfection, but it’s ultimately our imperfections that make us human.

Human imperfection shines through writing. A vulnerable personal essay. A memoir of painful experiences. A novel that tackles a moral dilemma. Writing these things requires lived experience. It requires a willingness to be imperfect, to say something that might be wrong, to expose oneself.

AI can be confident, but it cannot doubt. It can mimic emotion, but it cannot feel it. It can generate emotional stories, but with no experience to back them up. Real writing has power because it’s raw – built on vulnerability and lived experience.

The Reader-Writer Relationship

Texts are produced to be read – that’s the whole point. Whether it’s a flip through a childhood diary or a published piece of journalism, words are written to be read. Writing is more than just the text itself; it encompasses a broader relationship: the invisible connection between the writer and the reader.

“When we read … we look for signs of a thinking, feeling person … on the other side.”

Nguyen discusses the books K Allado-McDowell co-wrote with GPT-3, which clearly distinguish the human’s writing (in bold text) from the AI’s (in normal text). Amor Cringe (2022) drops this distinction, yet as Nguyen read, she found herself trying to separate the co-writers. Even I found myself trying to distinguish the human writing in Amor Cringe. I believe many of us would – we have an innate desire for human traces in text.

When we read – whether we are conscious of it or not – we look for signs of a thinking, feeling person behind the text. We imagine a human on the other side. Someone who struggled to piece together the best string of words to convey their ideas. We wonder why the author made these choices, their intentions, why they care about the topic, the purpose of their prose. When you read, you engage in this invisible conversation with the writer.

Reception matters just as much as production. This is why AI writing feels so uncanny. You read with empathy, seeking human connection, but there’s nobody there – just a series of machine-generated words. Even if AI writes in perfect prose, readers still approach it seeking that human touch. AI can mimic human writing until it’s nearly indistinguishable, but it cannot participate in that valuable reader-writer connection.

Why AI Writing Thrives

Despite these fundamental limitations, why does AI writing continue to thrive? Media theorist Michael H. Goldhaber identifies a defining feature of our digital ecosystem: we live in an “attention economy.”9 In the attention economy, speed and efficiency are key to capturing our (concerningly) shortened attention spans.

Attention is the most valuable resource in this digitally saturated world. Maximising engagement and visibility matters more than meaning. Clicks are prioritised over connection. In this environment, AI writing succeeds because it’s quick and efficient. It can churn out blog posts, social media captions, product descriptions, and summaries faster than any human could. In a world overflowing with content, AI-generated writing captures attention, despite lacking human intention or feeling.

What This Means for Us

So, if AI can “write,” what does that reveal about human writing?

Hopefully, the answer is clear: AI proves that assembling grammatically correct, structurally sound text is the “easy” part of writing. It surely makes a great tool for the technical side of writing. But what’s often overlooked is the true essence of the act itself – bringing consciousness, intention, vulnerability, imperfection, and accountability to the page.

“Writing was never just about putting words on a page – it’s a fundamentally human act of thinking, taking responsibility, embracing imperfection, and building a connection with your reader.”

The existence of AI and LLMs shouldn’t make us write less. If anything, it should amplify our human desire to write. AI writing is not a replacement, but a reflection of what makes writing real. Rather than fearing or avoiding it, we should let it push us to write more meaningfully, deliberately, and truthfully – deepening our connection to authentic expression through language.

Instead of feeding it prompts, maybe AI should prompt us instead – to remember why real, human writing truly matters.

References

  1. GPT-3. (2020, September 8). A robot wrote this entire article. Are you scared yet, human? The Guardian. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3 ↩︎
  2. Cahn, A. F. (2020, September 12). A human wrote this article. You shouldn’t be scared of GPT-3. The Guardian. https://www.theguardian.com/commentisfree/2020/sep/12/human-wrote-this-article-gpt-3   ↩︎
  3. Hartenberger, L. (2023, July 25). What AI teaches us about good writing. Noema. https://www.noemamag.com/what-ai-teaches-us-about-good-writing/?src=longreads ↩︎
  4. Nguyen, T. (2023, April 28). The AI writer: The human remains the hallmark to emulate. DIRT. https://dirt.fyi/article/2023/04/the-ai-writer   ↩︎
  5. Nguyen, T. (2023, May 5). The AI reader: Is there meaning in the machine-generated? DIRT. https://dirt.fyi/article/2023/05/the-ai-reader ↩︎
  6. Orwell, G. (1946). Why I write. Orwell Foundation. https://www.orwellfoundation.com/the-orwell-foundation/orwell/essays-and-other-works/why-i-write/ ↩︎
  7. Snapper, J. (1999). On the Web, plagiarism matters more than copyright piracy. Ethics and Information Technology, 1(2), 127–135. https://www.proquest.com/docview/222235198?_oafollow=false&accountid=10382&pq-origsite=primo&sourcetype=Scholarly%20Journals ↩︎
  8. Kapur, R. (2024). Understanding Offline and Online Writing Etiquettes. ResearchGate. https://www.researchgate.net/publication/377158856_Understanding_Offline_and_Online_Writing_Etiquettes ↩︎
  9. Goldhaber, M. H. (1997). The Attention Economy and the Net. First Monday, 2(4). https://firstmonday.org/ojs/index.php/fm/article/download/519/440?inline=1 ↩︎
