AI Round-ups

Your start of the year updates on artificial intelligence in journalism are here

This is the third in our series of round-ups on the latest in artificial intelligence and journalism. Here are some recent links you might want to read. These insights are brought to you by Northeastern University data professor Rahul Bhargava.

Public understanding of AI and news

There is growing public concern about the impact of AI on news, especially around this year’s elections in the U.S. and other countries. The U.S. Senate Judiciary Committee hosted a hearing on “Oversight of A.I.: The Future of Journalism”. The committee invited publisher CEOs (and CUNY Prof. Jeff Jarvis), and the discussion touched on fair use, Section 230, deep fakes and more. Read a summary of the hearing at Ars Technica.

Concerns are growing among the general public too. Our own Northeastern AI Literacy Lab published a report on how Americans are thinking about evolving AI technologies. Polling 1,000 people, they found a mix of caution, skepticism, and hope. A surprising 77% of respondents read news about AI at least once a week. Read the full report on “How Americans see AI: Caution, skepticism, and hope”.

Media outlets continue to try AI out

The variety of experimentation with AI technologies in news continues to grow. Norwegian daily Aftenposten is using AI to read articles with a synthetic voice, trained on recordings of their podcast host. Early data from their year-long experiment show no difference in listen rates between human-voiced audio and generated audio. Read more at PressGazette.

Some of these experiments don’t go so well. The latest AI news scandal comes from Sports Illustrated. Media outlet Futurism found evidence that SI had AI-generated fake reporters on its list of authors, and had published entirely AI-generated stories without any labels. Once contacted, SI deleted the articles and blamed a sub-contractor. Read the scandal details on Futurism’s site.


In more ethical developments, the BBC laid out three guiding principles for connecting generative AI and their mission: the public’s best interests, prioritizing human creativity, and being transparent. Read their full statement for more on their plans.

Various papers consider copyright concerns

Global publisher Axel Springer signed a multi-year deal with OpenAI. The agreement allows OpenAI to use the publisher’s content as training data, and ChatGPT will also feature that content in the results it generates. Read more on Reuters.

On the flip side, the New York Times lawsuit against OpenAI continues to develop, built on the claim that the company’s copyright infringements “threaten high-quality journalism.” OpenAI has now responded, claiming the suit is without merit. Read more on the TechCrunch blog.

Prospect’s navel-gazing “Media Confidential” podcast interviewed Mathias Döpfner, head of German publisher Axel Springer, and had a short segment about what he thinks the future holds for AI and journalism. His comments touched on legal frameworks and the automation of routine journalism tasks. Read the transcript or listen to the whole interview on the Prospect site.

