Behind the Scenes Q&As

What actually works: Clare Spencer on generative AI use in newsrooms

As newsrooms test generative AI, the challenge is no longer whether to experiment, but how to do so responsibly. Questions of accuracy, oversight and real newsroom value remain unresolved.

Clare Spencer is a reporter at Generative AI Newsroom and a former BBC journalist. In this Q&A with Storybench, she reflects on where AI has already proven useful, where it introduces risk, and what journalists should prioritize as these tools become more embedded in news production.

This interview has been edited for length and clarity. 

You’ve worked at the BBC’s news innovation team and now at Generative AI Newsroom. What have you learned about the most realistic ways AI can support — not replace — journalists in daily workflows?

I think it’s still all about transcription and translation. 

Speech-to-text is such a powerful innovation for journalists. You’re not just transcribing interviews; you can search them semantically rather than by keyword, which is incredibly useful. When I worked at the BBC World Service, I had to manually transcribe all my interviews. For a single story, transcription alone could take more than eight hours. Now, that time is essentially eliminated.

Transcription helps journalists find stories buried in overwhelming amounts of audio and data. Some newsrooms are already using it to surface stories from public meetings, for example, and I think there’s much more potential there.

Translation is another area that’s been transformative. Before machine translation, we relied on human translators, who were not only expensive but also added about two days to production timelines, especially for radio documentaries. Sometimes stories were dropped entirely because translation would slow things down too much.

Now, it’s not that translators are being replaced. It’s that we’re doing stories we wouldn’t have done before. For example, BBC News Polska allowed the BBC to reach a new audience in Poland, something that wouldn’t have been possible without machine translation.

Machine translation also opens up reporting possibilities in the field, such as scanning large volumes of documents in another language for tip-offs, then verifying them later. That simply wasn’t possible before. Those two areas, transcription and translation, still excite me the most.


From prototypes that streamlined newsroom tasks to your latest work exploring generative tools, what experiments or tools have genuinely improved reporting or production efficiency?

For me personally, Google Pinpoint has been incredibly helpful. I use it to search through interviews I’ve conducted, especially when I return to them days or weeks later and need to locate a specific moment or quote.

Being able to quickly search across multiple interviews makes it much easier to connect dots and recall what different sources said, without having to manually scan transcripts.

You’ve said early AI adoption in newsrooms has come with both successes and mistakes. What are some key lessons or red flags journalists should keep in mind?

The biggest lesson is that generative AI introduces errors. That’s how large language models are designed, and it doesn’t seem like that’s going away anytime soon. Accuracy is the core value of journalism, so this is a serious challenge.

That doesn’t mean generative AI is useless; it just means we have to be extremely thoughtful about where we use it and where manual work is still better.

A lot of newsrooms talk about having a “human in the loop” to mitigate inaccuracies. But we have to be careful not to pay lip service to that idea. A human in the loop needs the right subject expertise and enough time to check the work properly.

There’s also the question of whether AI actually saves time. In some cases, it may take longer to verify AI-generated content than to produce it from scratch. If someone is already skilled at writing accurately and quickly, AI may slow them down.

However, AI can make sense where skills don’t overlap. A good example is the Australian Associated Press using generative AI to draft alt text for infographics — descriptions for blind and low-vision readers. The people checking that alt text are infographic editors who know the subject matter but aren’t necessarily writers. In that case, AI helps because it generates a first draft that they can review for accuracy.


So accuracy is the biggest risk, and mitigating it requires more than just assigning oversight. You have to consider who the human is, what skills they bring and whether the workflow genuinely makes sense.

Looking ahead, what skills or mindsets should young journalists develop to stay relevant in an AI-enhanced newsroom?

Critical thinking is essential: always asking questions like, How do you know this? Why are you telling me this?

I also think younger journalists have an advantage in understanding newer formats and platforms. It’s not just about TikTok, but about recognizing where people gather and communicate, whether that’s Twitch, gaming platforms or spaces journalists from legacy media may not instinctively explore.

There are more places than ever to tell stories. Understanding those environments and finding opportunities to do journalism there is a real strength.

Finally, transparency matters. If you’re using AI in a newsroom, you need to be open with your editors about how you’re using it and how you’re mitigating risks. The worst situation is making an error and having the AI use surface afterward without disclosure.

