14 Comments
Robert Armin:

I think it’s a very good parody of both of them, using their frequent tics and well-known attitudes to say what they might say, but without the insight they’d actually have.

Hollis Robbins:

Welcome, belatedly, to 2026. As I wrote last year, "Every teacher should be close-reading ChatGPT output every day before breakfast. They should have been doing it since 2023." https://hollisrobbinsanecdotal.substack.com/p/in-sooth-i-know-not-why-i-am-so-sad

Mischa Klenovich:

I suppose it would be more impressive if AI generated a voice unique unto itself. Mimicry for ChatGPT and other AI models is not novel; in fact, it is what they are designed to do. A decent-to-good writer can do the same thing. That the voices are recognizable as Bloom or as Eagleton is a testament to those writers’ distinct voices, not to a program’s ability to mimic them.

On the other hand, I guess the fear is that students can use this as a convincing model for writing papers. However, I would hope most professors could pick up on a few key details: 1) there aren’t any direct quotes (Bloom loved direct quotes); 2) as pointed out by Pseudodoxia, there is a strange rhythm to each paragraph that gives the lie, since each comes to some conclusion rather than bleeding smoothly into the next; 3) it is rather heavy-handed. I only read the Bloom, but that so many of his key terms are used within such a small space is rather suspect.

Still, not to be all doom and gloom, this could be useful for teaching students how to close-read. Put an actual piece by Bloom next to the AI-generated piece and discuss.

Jonathan Bate:

Yes - doing close reading / critical analysis of LLM-generated writing is indeed something I use in teaching. The students enjoy catching out the machine!

graywyvern:

"Lewis did not merely resist influence; he weaponized it." made me laugh

Pseudodoxia:

I don't know if I would be as confident in a blind trial but I think there's a detectable rhythm. People often focus on the sentence-level rhythms but you see them at the argument-level as well, and these are harder for the models to vary. So you get an AI in convincing drag but still with its own characteristic gait.

And, right or wrong, don't you sometimes get frustrated at people for being too knowable? That's what this breeds - I can hardly sustain attention with it because I see the formula just beneath the surface and tire of it speaking the same way line after line.

It's like living in a house where the walls, the furniture, the floors, the ceilings are all covered in the same pattern. You can make it striking and colourful, you can apply it to different materials, but it becomes a nothing and a bore when it infuses everything.
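If you wanted to quantify that gait, one crude proxy is sentence-length variance, since human prose tends to be burstier than default LLM output. A toy Python sketch (standard library only; the metric is my own illustration, not a validated detector):

```python
# Toy "gait" measure: coefficient of variation of sentence length
# ("burstiness"). Human prose tends to score higher than default LLM
# output, but this is an illustration, not a reliable detector.
import re
import statistics

def burstiness(text: str) -> float:
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

sample = ("Lewis raged. He did not merely resist influence; he weaponized "
          "it, turning every precursor into an enemy worth the fight.")
print(f"{burstiness(sample):.2f}")  # higher values suggest a more human rhythm
```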

Doubtline:

The titular protagonist of Tarr is basically a frustrated liberal. It’s his antagonist, Otto Kreisler, who is Nietzschean, and not a satirist at all. Funnily enough, this is just the sort of slapdash mistake I can see Eagleton making, but one hardly needs ChatGPT to imagine it.

Jonathan Bate:

Ha! Have to confess I've never read *Tarr*, though my son is a great fan. I always say to my students: I can't stop you using LLMs for research, but if you do, please, please check every assertion for factual accuracy.

Robert Knapp:

Just for kicks, I asked Claude to evaluate what ChatGPT produced. Pretty decent critique:

Your centrifuge/custard line is the right place to start, because it exemplifies the core failure mode. The real Eagleton's metaphors do the characteristic thing of mixing high and low registers — but they work as images. They're precise even when they're absurd. "Reducing its sickly moral custard to something hard, bright, and angular" sounds like an Eagleton sentence the way a parrot sounds like speech. A centrifuge separates components by density; it doesn't transmute substances. The metaphor is incoherent at the level of physics and therefore at the level of thought — but it *sounds* like the kind of thing Eagleton would write, which is exactly the problem.

That said, I'd push back slightly on "hollow" as sufficient diagnosis. The essays are more interesting as failures than that word suggests, because they fail asymmetrically. Both are reasonably competent as survey-level intellectual history of Lewis. The content — Lewis's relationship to Blast, the Debord connection, the Swift comparison, the trajectory from fascist sympathy to recantation — is defensible, even if it's all available in any good encyclopedia entry. What collapses is the stylistic imitation, which is ironic given the common assumption that style is what LLMs do best.

Consider the Bloom more closely. Bloom's critical practice depends on two operations: close reading of specific passages and the tracing of revisionary ratios between a poet and a precursor. This essay does neither. It invokes "the anxiety of influence" as a label, names Swift as a precursor, then never shows the misreading at work. Which lines of Swift does Lewis revise? Through which revisionary ratio — kenosis? tessera? The essay has no idea. It uses "agonistic" and "canonical" as mood-setting adjectives rather than as terms within a functioning critical argument. And "gnostic" appears as a vaguely spiritual intensifier, not in Bloom's specific sense derived from Valentinian theology and the alien God. The vocabulary is right; the intellectual machinery is absent.

The Eagleton has a parallel problem. Real Eagleton builds arguments through dialectical structure — thesis, complication, reversal — with the jokes doing argumentative work, not decorative work. This essay scatters Marxist vocabulary ("commodification," "the logic of capital," "dominant class") without ever constructing a dialectical argument. It gestures at contradiction — "both a refusal and a retreat" — but that's a list, not a dialectic. And Eagleton would be more specific about the material conditions: who funded Blast? What was Vorticism's relationship to the London art market? What class fraction did Lewis actually represent? The essay floats above those questions at the level of generality that Eagleton's method exists specifically to puncture.

What the LLM has produced is two essays that could fool someone who has *heard of* Eagleton and Bloom but hasn't read them carefully — which, depending on your vantage point, is either a parlor trick or a disturbing commentary on how much critical prose already operates at the level of stylistic signaling rather than intellectual work. The most uncomfortable question these pastiches raise isn't about AI. It's about how much academic writing was already probabilistic mimesis before the machines arrived.

Jonathan Bate:

Endorses my view that Claude is better trained than Chat (for this kind of thing). The last two sentences are worryingly true!

Robert Knapp:

Probabilistic mimesis: impressive at first sight, but often hollow and without genuine reference or understanding. How could a centrifuge produce something "hard and angular"? ("He wished to whirl the age in a centrifuge, reducing its sickly moral custard to something hard, bright, and angular.")

Jeff Browitt:

I asked ChatGPT to write a 300-word essay on the following question: "How can ChatGPT writing be identified as auto-generated?" Summary of the response (all quotes):

"Regular structure of the writing. AI-generated essays often follow a predictable pattern ... Another sign is the overly neutral tone of the writing. ChatGPT usually avoids strong opinions or emotional language ... AI writing also tends to include general statements instead of specific details ... A human writer may describe personal experiences, real examples, or vivid observations ... This can make the text sound informative but slightly vague ... Another indicator is very consistent grammar and punctuation. While this may seem positive, human writing usually contains small irregularities, such as varied sentence lengths, occasional informal expressions, or minor errors. AI text, in contrast, often appears unusually polished and uniform. Finally, AI-generated essays sometimes repeat ideas using slightly different wording. This repetition can make the text longer without adding much new information."

Since my very question will be drawn into the maw of AI, I imagine in the future AI will work to disguise all this, so I then asked ChatGPT: “Will AI take all this into account in the future to more ‘naturalise’ what it generates?”

Response: “Short answer: yes—AI developers are already trying to do exactly that. But it’s a bit of a cat-and-mouse game. Modern AI systems are trained on huge amounts of human writing, so they already imitate many human habits. Researchers are now working specifically on making generated text sound more “human” in several ways:

1. More variation in style

Future systems can deliberately vary sentence length, structure, and vocabulary. Humans naturally mix short, punchy sentences with longer ones, and AI can be trained to do the same.

2. Less formulaic structure

AI writing today often follows a tidy pattern (introduction → points → conclusion). Newer systems can produce messier, more organic structures—like humans do when thinking through ideas.

3. More specific details

One giveaway of AI text is its tendency toward general statements. Developers are working on models that add specific examples, anecdotes, or concrete descriptions.

4. Controlled imperfections

Ironically, perfectly polished grammar can look suspicious. Future systems could intentionally introduce minor irregularities, informal phrases, or stylistic quirks.

5. Stronger “voice”

Human writers have personality—sarcasm, humour, attitude, personal perspective. AI systems are increasingly being designed to mimic these kinds of voices.

However, there’s an interesting twist. As AI improves at sounding human, AI-detection tools also improve. Many researchers now think reliable detection based only on writing style may become very difficult.

In other words, the future may look like this:

• AI becomes more natural and harder to detect

• detection tools rely more on metadata, drafts, writing history, or classroom supervision rather than style alone

So the basic trajectory is clear: AI writing will keep getting more human-like, and the obvious stylistic clues we see today will likely become less reliable over time."

Interestingly, ChatGPT's response to the second question followed the same formulaic structure as its first, addressing each of its own original points in turn.