    Inside the quiet takeover of local journalism by AI

    The most obvious use case for generative AI in editorial operations is to write copy. When ChatGPT lit the fuse on the current AI boom, it was its ability to crank out hundreds of comprehensible words almost instantly, on virtually any topic, that captured our imaginations. Hundreds of “ChatGPT wrote this article” think pieces resulted, and college essays haven’t been the same since.

Neither has the media. In October, a report from AI analytics firm Graphite revealed that AI is now producing more articles than humans are. And it’s not all content farms cranking out AI slop: A recent study from the University of Maryland examined over 1,500 newspapers in the U.S. and found that AI-generated copy constitutes about 9% of their output, on average. Even major publications like The New York Times and The Wall Street Journal appear to be publishing a small amount of copy that originated from a machine.

    I’ll come back to that, but the big takeaway from the study is that local newspapers—often thought to be the crucial foundation of free press, and still the most trusted arm of the media—are the largest producers of AI writing. Boone Newsmedia, which operates newspapers and other publications in 91 communities in the southeast, is a heavy user of synthetic content, with 20.9% of its articles detected as being partially or entirely written with AI.


    Why local papers rely on AI

Putting aside any default revulsion at AI content, this actually makes a lot of sense. Local news has been stripped to the bone in recent years as reader attention has fragmented and advertising dollars have shrunk. A great many local papers have folded (more than 3,500 since 2005, according to the Medill School of Journalism at Northwestern University), and those that remain have adopted other means to survive. In smaller markets, like my New Jersey town, it’s not uncommon for the community paper to republish press releases from local businesses.

    The fact is, writers cost money, and writing takes time. AI, of course, radically alters that reality: for a $20 a month ChatGPT subscription, you now have a lightning-fast robot writer, ready to tackle any subject. Many unscrupulous people treat this ability as their own room full of monkeys with typewriters, cranking out articles just to attract eyeballs—the definition of AI slop.

But there’s a difference between slop and AI-generated copy written to inform, with the proper context, and edited by a journalist with the proper expertise. In a local news context, the use case for AI writing that’s most often cited is the lengthy school board meeting that would take a reporter several hours of reviewing recordings or transcripts, synthesizing, and contextualizing just to cover what happened. With AI, those hours compress to minutes, freeing up the reporter to write more unique and valuable stories.

    More likely, of course, is that the reporter no longer exists, and an editor or even a sole proprietor simply publishes as many pieces as they can that serve the community. And while it’s not the ideal, I don’t see what’s wrong with that from a utilitarian perspective. If the copy informs, a human has done a quality check, and the audience is engaging with it, what does it matter whether or not it came from a machine?

    AI mistakes hit different

    That said, when mistakes happen with AI content, they can undermine a publication’s integrity like nothing else. This past summer, when the Chicago Sun-Times published a list of hallucinated book titles as a summer reading list, it caused a national backlash. That’s because AI errors are in a different category—since AI lacks human judgment and experience, it makes mistakes a human never would.

    That’s the main reason using AI in copy is a risky business, but safeguards are possible. For starters, you can train editors to catch the mistakes that are unique to AI. Robust fact-checking is obvious, and using grounded tools like Google’s NotebookLM can greatly reduce the chance of hallucinations. Besides factual errors, though, AI writing has many telltale quirks (repeated sentence structures, dashes, “let’s delve . . .,” etc.). I call these “slop indicators,” and, while they’re not disastrous, their continued presence in copy is a subtle signal to readers that they should question what they’re reading. Editors should stamp them out.
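The kind of editorial check described above can even be partially automated. As a minimal sketch (not any publication’s actual workflow), an editor could run drafts through a simple script that flags known telltale phrases for human review; the phrase list below is hypothetical and illustrative, not an exhaustive or authoritative set of slop indicators:

```python
import re

# Hypothetical, illustrative list of "slop indicator" patterns --
# phrases that often (but not always) suggest machine-written copy.
# A real newsroom would maintain and tune its own list.
SLOP_PATTERNS = [
    r"\blet'?s delve\b",
    r"\bin today'?s fast-paced world\b",
    r"\bit'?s important to note that\b",
    r"\bgame-?changer\b",
]

def flag_slop_indicators(text: str) -> list[str]:
    """Return the patterns that match, so an editor can review each hit."""
    return [p for p in SLOP_PATTERNS if re.search(p, text, re.IGNORECASE)]

draft = "Let's delve into the school board's budget, a real game-changer."
hits = flag_slop_indicators(draft)
```

A script like this is only a first pass: it catches surface quirks, not factual errors, so it supplements rather than replaces the fact-checking described above.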

    Which is not to say publications shouldn’t be transparent about the use of AI in their content. They absolutely should. In fact, I’d argue being as detailed as possible about the AI’s role at both the article level and in overall strategy is crucial in maintaining trust with an audience. Most editorial “scandals” over AI articles blew up because the copy was presented as human-written (think about Sports Illustrated‘s fake writers from two years ago). When the publication is upfront about the use of AI, such as ESPN’s write-ups of certain sports games, it’s increasingly a non-event.

Which is why it’s confusing that some major publications seem to be publishing AI copy without disclosing its presence. The study claims that AI copy is showing up in some national outlets, including the New York Times, the Washington Post, and The Wall Street Journal. This appears to be a similar issue to the Sun-Times incident, if on a smaller scale: Almost all of the instances were in opinion pieces from third parties, where it appears to be happening around 4–5% of the time.

    That suggests third parties are using AI in their writing process without telling the publication. In all likelihood, they’re not aware of the outlet’s AI policy, and their writing contracts may be ambiguous. However, it’s not like the rest of the content was totally immune from AI writing; the study revealed it to be present 0.71% of the time.

    Getting ahead of AI problems

    All of this speaks to the point about transparency: be straight with your audience and your staff about what’s allowed, and you’ll save yourself headaches later. Of course, policies are only effective with enforcement. With AI text becoming more common and more sophisticated, having effective ways of detecting and dealing with it is a key pillar of maintaining integrity.

    And dealing with it doesn’t necessarily mean forbidding it. The reality is AI text is here, growing, and not going away. The truism about AI that’s often cited—that today is the worst it will ever be—goes double for its writing ability, as that is at the core of what large language models do. Of course, you can bet there will be train wrecks over AI writing in the future, but they won’t be about who’s using AI to write. They’ll be about who’s doing it irresponsibly.



