AI is gobbling up journalism for many of the same reasons humans do: to develop a concrete understanding of the world; to think critically; to differentiate between what's true and what's not; to become a better writer; and to distill history and context into something accessible. But what happens to AI when our journalistic institutions crumble? Upon what foundation of truth will it answer everyone's questions? Write their emails? Do their jobs? Because while the alarm bells have been ringing for journalism for decades, the so-called end of search feels like the potential death knell. What does that mean for AI, and for us as we try to make sense of an increasingly confusing world?
In our rush to integrate generative AI into every corner of our lives, we’ve ignored a fundamental truth: AI cannot function without a baseline of verified facts. And, at the moment, that baseline is built and maintained by so-called “traditional” journalism (the kind with fact checkers and editors). As AI threatens to upend search, media monetization, and news consumption behaviors, it’s also undercutting the very industry that feeds it the facts it depends on. A society cannot function without objective journalism, and neither can AI.
Loss of accuracy
Recent Apple research shows that it doesn't take much to push generative AI into "complete accuracy collapse." The paper demonstrates that generative AI models lack robust logical reasoning and stop functioning once a task exceeds a certain complexity threshold. I immediately thought of a recent piece from The New Yorker, in which Andrew Marantz weaves together various examples of autocracy, set against thousands of years of history, to (attempt to) make sense of what is happening in America right now. I imagined AI trying to do the same, essentially short-circuiting before it could form the salient points that make the piece so impactful. When asked to think too hard, the AI breaks.
An even more damning study from the BBC found that AI can't accurately summarize the news. The BBC asked ChatGPT, Copilot, Gemini, and Perplexity to summarize 100 news stories, then had expert journalists rate each answer. "As well as containing factual inaccuracies, the chatbots 'struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context,'" says the report. Almost one-fifth of the summaries (19%) included false facts and distorted quotes.
There's more, of course. A study from MIT Sloan shows that AI tools have a history of fabricating citations and reinforcing gender and racial bias, while a Fast Company article argues that AI-driven journalism's "good enough" standards are accepted because of the revenue these tools create.
And that, of course, is the less human reason AI is gobbling up journalism: the money. None of that money is going back into funding the journalistic institutions that power this whole experiment. What happens to our society when the core pillar of a true and free press collapses under the weight of the thing that has sloppily consumed it? Our AI lords must place real value on fact-checked reporting—right now—to ensure its continued existence.
Josh Rosenberg is CEO and cofounder of Day One Agency.