Newsroom automation, election forecasting and U.X. at the New York Times: Keynotes at C+J 2019.

Newsroom automation. Election forecasting. Personalized user experience. Three keynote speakers from among the media industry's top innovators dug into these technological advances at the 2019 Computation + Journalism Symposium in Miami last month. Here's what stuck out to us.

Lisa Gibbs on the automated newsroom at the AP

Lisa Gibbs is the point person for the Associated Press's automation and artificial intelligence strategy group, and she talked about the news agency's moves toward automation.

AP decides what automation projects to work on based on a few simple questions: What routine tasks are occupying journalists’ time that are not journalism, but more centered on production? Is there news AP could break faster, or new kinds of content it could generate?

They began automating content production in 2014. A startup called Automated Insights enabled AP to go from writing 300 stories per year on companies’ earnings to automating the production of 3,700 of these stories. This both expanded their coverage and saved business writers and editors immense amounts of time on formulaic—yet important—stories.

“When you’re putting out a million photos a year or 2,000 stories a day, you can imagine the kind of scale that if you can take even a few minutes of time off the creation of one of those pieces of content, those savings add up substantially,” Gibbs said during the keynote.

Gibbs discussed other elements of automation AP has implemented or is researching, including real-time transcription, retrieving archives relevant to the day’s news, and distributing stories to different segments of its audience. AP is also building a cloud-based tool, AP Verify, that will use machine learning to help journalists find information’s original source.

At AP, Gibbs said there wasn’t much fear of robots taking reporters’ jobs, but there was contention around change. To garner newsroom support, Gibbs said it was important to inform everyone early on in the process, starting with influencers.

“You have to recreate your processes and bring people along accordingly,” Gibbs said. “Some of you may have heard that journalists aren’t always eager for change. And yes, having a process for communicating the change, involving people in the change, and increasing adoption is really important for success. You have to think about these issues and have a strategy around that.”

Yphtach Lelkes on probabilities in elections and how data journalism failed democracy

The 2016 election fueled speculation that when people are overconfident that a candidate will win, they don't vote.

Keynote speaker Yphtach Lelkes and his colleagues found evidence that supported this speculation by conducting studies to look at how probability forecasts affect perceptions and confidence, and how that confidence impacts voter turnout.

An assistant professor at the Annenberg School for Communication at the University of Pennsylvania, Lelkes researches digital media's influence on political attitudes. He homed in on probability forecasts because they are gaining popularity, despite Americans' general inability to understand them.

“If there’s a 30 percent chance of rain, we’re probably going to bring an umbrella,” Lelkes said. “If Hillary Clinton has a 30 percent chance of losing, I’m not sure that all people know that this is a relatively good chance of losing.”


One of Lelkes’ studies looked at how confident people were that a candidate would win or lose a race based on vote share (how many votes candidates are expected to receive) versus win probability (the probability a candidate receives more than 50 percent of the vote share).
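The gap between the two framings can be sketched numerically. Under a simple normal model of election uncertainty (an illustrative assumption, not Lelkes' methodology), even a modest expected vote-share lead translates into a lopsided-looking win probability, which helps explain why the probability framing reads as more certain:

```python
from math import erf, sqrt

def win_probability(expected_share: float, sd: float) -> float:
    """P(candidate's vote share exceeds 50%), assuming the final
    share is normally distributed around the forecast."""
    z = (0.5 - expected_share) / sd
    # 1 - standard normal CDF evaluated at z
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

# A 52% expected share with 2 points of uncertainty -- a close race
# by vote share -- already reads as roughly an 84% win probability.
print(round(win_probability(0.52, 0.02), 2))
```

The `sd` parameter is a stand-in for forecast uncertainty; real forecasting models estimate it from polling error and simulation rather than assuming it.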

He found that people were consistently more certain of an election’s outcome when they saw win probabilities than when they saw vote share. Evidence suggested that people were even more certain when their preferred candidate was in the lead. Increasingly higher probabilities of a candidate winning also made people incorrectly think the election would be a landslide victory.

Lelkes conducted another study to look at whether this misplaced confidence could lower voter turnout. Early calls have historically lowered turnout, such as during elections in the 1980s, Lelkes said. "When races are closer, more people turn out," he said.

Through his own study, which attached a small cost to a voting scenario, Lelkes found that probabilistic forecasts drove down voter turnout while vote share reports had almost no impact. Knowing what percentage of votes was expected to go to their favorite candidate didn't affect whether an individual voted. On the other hand, when presented with seemingly high probabilities of their preferred candidate winning, people voted less often.

Lelkes said these results held regardless of group size and individuals' understanding of probabilities. In a follow-up study he gave participants a Berlin Numeracy test to gauge how well they understood probabilities, but found that even those who understood probabilities well didn't behave differently.

Lelkes said his studies—alongside supplemental evidence—support the speculation that probability forecasts affected people’s perceptions enough to affect the outcome of the 2016 election. Major forecasting sites were visited predominantly by liberal audiences. Liberals were more likely to be misled because Clinton was always portrayed as being ahead. Liberals were also more likely to say the race would be a blowout, potentially driving down voter turnout.

“Probabilistic forecasts confuse the public and boost perceptions of blowout elections,” Lelkes said. “It’s still, of course, hard to say whether this actually affected 2016 or not. It’s hard to extrapolate lab results to the real world. So what is needed is really a field experiment or something in the field where we look at the real impact of these things.”

Lelkes advised journalists that horse race coverage is bad for democracy, but if there's no avoiding it, they should not use probabilities. And if they must use probabilities, they should present them in ways people understand, such as natural probabilities (7 out of 10 instead of a 70 percent chance).
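Rephrasing a percentage as a natural frequency is a simple transformation. A minimal sketch (the function name and rounding behavior are illustrative assumptions, not a published newsroom tool):

```python
def natural_frequency(probability: float, base: int = 10) -> str:
    """Rephrase a probability as 'x in base' wording,
    e.g. 0.7 -> '7 in 10 chance'."""
    return f"{round(probability * base)} in {base} chance"

print(natural_frequency(0.7))   # 7 in 10 chance
print(natural_frequency(0.3))   # 3 in 10 chance
```

A larger `base` (say, 100) preserves more precision, but research on risk communication generally favors small, round denominators for readability.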

Brian Hamman on digital user experience at The New York Times

Brian Hamman, the vice president of engineering at the New York Times, is responsible for the publication’s user-facing technology. He talked about the ongoing transformation of the Times’ homepage and efforts toward personalization.


The Times’ homepage has always been very carefully crafted. As the company honed its strategy on subscribers and a streamlined product, its leaders approached the user experience “as a complete fundamental change for how we work as a news organization, curate our content, present that to readers, and how readers understand what the New York Times is,” Hamman said.

The end goal is to create experiences personalized for individual readers, and to inform those personalized views with editorial judgment. "That big breadth of content is something that we need to help readers navigate and get through," Hamman said. So the Times created a team of designers, editors, data scientists, engineers, and product managers to plan and carry out that institutional change.

Rather than thinking about the homepage as a complete product, they wanted to start with individual stories and let the sections and structure emerge and build from them.

The Times’ traditional desk/section structure is not necessarily mapped to the way readers want to get information, Hamman said. Many readers don’t read NYT specifically for technology, science, or sports news. “They have a few minutes at the Chipotle line, so they want to know what’s happening right now, or they want to find something interesting or new,” he said.

But breaking online sections into more personalized and topic-centered blocks has created contention in the newsroom. Editors flocked to put their content in “Top News”—the one section that would remain unpersonalized and consistent—knowing their content would not fall into various algorithmically determined sections.

This caused the "Top News" section to grow, driving down visits to ads and other content that was pushed further down the page. It also meant the Top News section was constantly hard news, and was often visually a "collection of old white men" because of endless stories on President Donald Trump and other political controversies, Hamman said. The depth of coverage of these stories crowded out the breadth of the Times' overall content.

So, Hamman said, the Times moved toward a package-centered design. Two to three stories and images are algorithmically grouped together based on common emphases, then placed into a template on the homepage under a title that hints at what the stories are about.

Hamman said this design is meant to deliver more comprehensive story packages in less space, opening up the page to show more of the Times' breadth of coverage.

The Times has yet to personalize homepage content, but it has spent years laying the groundwork to algorithmically segment articles into packages and sections.

