Technology and journalism have been yoked together in recent years as media organisations strive to adapt to the demands of a digital marketplace, where the business of attracting eyeballs and maintaining relevance is as much an engineering problem as a storytelling one.
At its best, this marriage of convenience can yield exciting new ways of working and communicating. At its worst . . . well, you’ve seen Frankenstein.
Earlier this month, the Washington Post rolled out a new feature called Your Personal Podcast in its mobile app. It was pitched as a bold blend of technology and audience choice: an AI-generated briefing that let a subscriber choose topics, the length of the episode and the voice of the host, and, eventually, to ask questions via the newspaper’s previously launched Ask The Post AI function, which draws on its own archive to offer answers to user queries.
Your Personal Podcast promised a bespoke audio experience. The tool was part of the paper’s effort to, in the preferred current jargon, “meet readers where they are”. But within 48 hours of the launch, journalists inside the paper were flagging mistakes that failed the most basic editorial standard: accuracy.
As Semafor’s Max Tani reported, errors in the podcasts ranged from minor mispronunciations (forgivable enough in an AI-generated voice bot) to far more serious misattributions and invented quotes. In some cases, the AI not only misrepresented what sources had said but also inserted its own commentary, at times making it appear that the Washington Post was taking positions it had never published.
Within the Post’s internal communications channels, editors railed against what they saw as a disregard for the paper’s own standards. “It is truly astonishing that this was allowed to go forward at all,” one editor wrote. Another argued that if the Post were serious about its credibility, it would pull the tool immediately.
That did not happen. Further Semafor reporting revealed an unsettling fact: pre-launch testing by the product team had shown that between 68 per cent and 84 per cent of the AI-generated scripts did not meet the Post’s own editorial standards. But the feature was launched anyway, accompanied by a reassurance that it was “experimental” and labelled as “beta”, with the promise of future improvements.
The episode is a stark illustration of how easily product development, with its focus on getting something to market and then making improvements over time, can collide with the most basic principles of serious journalism. It appears the Post’s management, eager to compete in an era where user attention is fragmented and fleeting, saw in AI a way of supplying news in a format that might appeal to commuters, multitaskers and audio-first consumers. But in the rush to deliver something novel, the company’s core commitments to truth and accuracy were quietly relegated to the fine print.
There is another tension that goes beyond one product misfire. It’s rooted in the broader industrial logic of digital media today, where personalisation is treated as a holy grail. In the world of social platforms and streaming services, algorithms curate ever more tailored experiences in the belief that this drives engagement. The logic is obvious: give people exactly what they want and they will stay longer, click more, maybe even pay.
But this logic collides with a vital but underappreciated truth about journalism: it doesn’t exist purely to throw a bucketload of undifferentiated news stories at you and then let your personal algorithm sort it out. Some stories are more important than others. Some subjects are difficult but repay attention. Editorial choices and informed curation still form an essential part of media as a public good.
To have that painstaking process turned into a hallucination-prone bot telling you what it thinks you want to know is a pretty gross betrayal.
The rush to embrace new products invites the critique of innovation for innovation’s sake. If the technology does not demonstrably serve the audience’s needs or uphold the institution’s values, what purpose does it serve? The answer lies in competitive pressure. News organisations are fighting on multiple fronts: declining traditional ad revenues, competition for subscriptions, the gravitational pull of social media and the cultural imperative not to be left behind in the AI arms race.
And what of personalisation itself? Is the ability to tailor news audio really an unalloyed good? There is no inherent ethical problem with giving listeners choice. But personalisation has to be framed as a delivery mechanism rather than a replacement for editorial standards.
There is also the risk that personalisation, driven by AI, chips away further at the idea of a shared information landscape. Journalism, at its best, offers a communal reference point. News stories inform a community rather than just indulging an individual’s preferences. In an increasingly atomised world, personalised feeds and podcasts risk fragmenting any shared understanding, leaving users in bubbles of their own design. That might be commercially attractive to some, but it erodes one of the fundamental social purposes of news: to create a common ground of verified facts.
The misadventure of Your Personal Podcast is a cautionary tale of what can happen when product imperatives overshadow journalism. Technology should serve the mission, not the other way around. Personalisation needs to be a useful tool, not a Trojan horse that undermines trust.
The Washington Post is currently in disarray, with revenues falling and subscribers deserting the paper in droves since its proprietor, Jeff Bezos, tilted it in a more Trump-friendly direction. It will be fascinating to see whether it now decides to retreat and realign its innovation strategy with its standards, or if it intends to keep moving fast and breaking things.