Early Fall 2025: Journalism faces new AI crossroads
Journalism’s AI summer is over, and the reckoning has begun. Fast-moving trials gave way to uneasy debates. Now fall brings something sharper: clarity on what artificial intelligence actually means for the news business. Newsrooms are building chatbots. French unions are securing direct payouts for reporters. Misinformation is targeting journalists themselves. Legal settlements are reaching into the billions. This isn’t dabbling anymore—it’s a full-blown fight over who profits, who protects, and who gets left behind. And we’ve got the scoop you need to catch up. Let’s dive in!
Newsroom experiments stretch from local to global
In the U.S., a UNC study found that four small newsrooms built AI chatbots in less than a month using no-code platforms like Zapier. The cost was around $40 a month, but the return was mixed. Readers rarely used the bots, and when they did, the responses were sometimes outdated or wrong. The takeaway: speed and affordability do not guarantee engagement.
Meanwhile, across Europe, the Middle East and Africa, outlets are also testing conversational AI. Newsrooms are creating bots to help audiences sift archives or answer questions about local issues. While some see these tools as a bridge to younger readers accustomed to messaging apps, editors remain cautious about accuracy and tone. The global picture is now one of low-cost innovation paired with unresolved editorial risk.
Revenue and rights: Who gets paid?
Money is also beginning to shift in ways unthinkable just a year ago. In France, unions have secured deals that send 25% of AI licensing revenue directly to reporters. Le Monde’s contract has become the clearest example of transparency in distributing money from OpenAI, Perplexity and other licensing partners.
In the U.S., by contrast, no collective agreements require publishers to share AI revenue. Union leaders say contracts are silent on the issue and companies rarely disclose how much they earn from licensing. The result is frustration among young journalists who feel their work is being monetized while they see none of the profits.
Meanwhile, Perplexity said it will begin sharing ad revenue with publishers featured in its AI search. Details are still vague, but the move could change relationships between aggregators and news outlets if the payments prove meaningful. And in a landmark legal settlement, Anthropic agreed to pay $1.5 billion to authors whose works were used without permission. The scale of the settlement shows how quickly the financial ground surrounding AI and content is shifting.
Misinformation targets journalists
AI’s darker potential is becoming harder to ignore. NBC News reported a surge of fake videos impersonating Spanish-language journalists. These clips, created with generative tools, spread misinformation while exploiting the credibility of Latino reporters. The attacks underline how journalists themselves are now becoming direct targets of synthetic content.
At the same time, industry observers point to “cautionary tales.” Case studies show newsroom AI tools mislabeling content, amplifying bias and confusing audiences. Editors stress that verification and accountability must stay central as adoption expands. Without safeguards, they say, AI could weaken the very foundation of trust on which journalism is built.
New tools and old lessons
Meanwhile, reporters on city hall beats are finding AI useful in more practical ways. Platforms like SeeGov, Assembly and LocalLens help track long public meetings, flagging notable exchanges and cutting down on hours of review. But even here, the same caution applies: automated transcripts require close checking, and reporters cannot outsource their judgement.
Against this complex backdrop, other voices are chiming in to remind the industry that editorial choices still matter. Techmeme, a headline aggregator, turned 20 this month and remains central in deciding what the tech world pays attention to. At the Global Investigative Journalism Conference, Karen Hao urged peers to focus on ethics: transparency, privacy, and accountability must guide AI’s adoption in journalism. Together, they remind us that while tools evolve, editorial judgement remains central.
Looking ahead
Early fall 2025 shows a field in transition. Revenue deals are surfacing. Legal frameworks are taking shape. Newsroom tools are multiplying. But misinformation and ethical landmines may be multiplying even faster. The question was never whether AI belongs in journalism, but on whose terms it will stay.