Ask people whether they want to read AI-generated content, and the answer is fairly clear. Only 12% of readers say they are comfortable with AI-generated news, and 90% of Americans believe news organizations should be required to disclose when AI was used to produce stories, according to 2026 media research. The preference for human-written content appears decisive.

Then you run a blind test, and the picture gets more complicated.

What Happens When Nobody Knows

A 2025 study from Columbia University and the University of Michigan pitted AI and human writing directly against each other without telling readers which was which. In blind evaluations, lay readers (everyday people rather than literary experts) showed a measurable preference for AI-generated text over writing by MFA-trained human authors across multiple quality dimensions.

Expert readers told a different story. Literary professionals strongly disfavored AI writing on both stylistic fidelity and overall quality, and the gap between expert and lay judgment was statistically stark. The experts could tell. Most readers couldn't, and when they couldn't, they often chose the machine.

A separate study published in a communications journal found a similar split: AI-generated narratives were rated more enjoyable in blind conditions, while human narratives scored higher on appreciation, a distinction researchers describe as the difference between liking something and valuing it. Readers found AI content more immediately pleasurable. They found human content more meaningful.

Identical Text, Lower Score, Different Byline

Here is where it gets psychologically interesting. Multiple studies have now found that telling someone a piece was written by AI — even when the text itself is identical — measurably lowers their evaluation of it. In one controlled experiment, participants rated the same literary passages significantly lower when told they came from an AI, and higher when told they came from a human author. The words hadn’t changed. The attribution had.

Researchers call this attribution bias, and it runs deep. A study examining both human and AI evaluators found that humans showed a consistent pro-human bias of nearly 14 percentage points when rating content — even when the AI-generated writing was objectively comparable. What people think they’re reading shapes how much they enjoy reading it, independent of the actual quality of the words.

28% Trust Mass Media. 90% Want AI Disclosed.

The broader context makes this tension more urgent. Only 28% of Americans currently report having a great deal or fair amount of trust in mass media, according to recent polling. At the same time, AI-generated content is proliferating at a rate that outstrips both disclosure and detection. Analysis from Nieman Lab suggests that by 2026, AI-written content will outpace human production not just in low-quality corners of the web but across mainstream channels.

The BBC found in a 2025 study that AI chatbots produced significant errors in news summaries nearly half the time — and that 84% of readers said a factual error would substantially damage their trust in an AI-generated summary. The appetite for disclosure is high precisely because the stakes of being misled feel high.

Cleaner, Faster, and Missing the Point

What the research collectively points to is a gap nobody fully anticipated: AI content can be engineered to be immediately engaging in ways that human writing sometimes isn’t. It can be cleaner, faster, more efficiently structured. In blind conditions, a meaningful portion of readers respond to those qualities positively.

What it tends to lack is the thing readers say they value most — evidence of a specific human perspective, lived experience, and genuine stakes in the ideas being expressed. The distinction between enjoying something and finding it meaningful may be the defining reader experience of the next several years, as the volume of machine-generated text continues to climb and the signals that distinguish human authorship become both more sought-after and harder to verify.

Readers know what they want. Whether they can reliably identify it when it’s in front of them is a separate question — and one the research suggests most of us would get wrong.
