How and why journalists must navigate the uncertainty of political forecasting

How do journalists explain the world of polling and forecasting to the public? The question came into sharp focus after the last two presidential cycles and was the subject of a webinar at the Computation + Journalism Symposium, co-hosted by Northeastern and the Brown Institute for Media Innovation in February.

The “Political Forecasting Meets Journalism” event examined how news organizations explain forecasting and polling to a wider audience that may not grasp the complexities and limitations of these predictions.

Nick Diakopoulos, director of the Computational Journalism Lab at Northwestern University, spoke with professionals from across the media, including David Byler of the Washington Post, Micah Cohen of FiveThirtyEight and Natalie Jackson from the Public Religion Research Institute.

Diakopoulos opened the discussion with a broad overview of past and present political forecasting, tracing its history back to 1936 and the arrival of George Gallup’s statistical sampling method. Fast-forward to today, and controversies over inaccurate predictions in the 2016 and 2020 elections have sparked a broader debate over the future of such forecasting.

Byler spoke about a piece he created for the Post last year that gamified the New Hampshire Democratic primary, letting readers pick a candidate and walk through different election and polling scenarios.

“The idea here was to try something new, to put people in charge of the model themselves… It helped people understand uncertainty and it helped people understand what’s important for predicting things,” he said. 

This brief look at Byler’s piece introduced the theme that would dominate the panel: how to teach the public about forecasting uncertainty and about what matters for predicting an outcome.

Cohen, who started with FiveThirtyEight at its inception in 2010, spoke as an editor rather than as a data scientist or prediction modeler. He pointed to the broader predictions journalists make all the time that don’t fall neatly into data-driven forecasting, such as when members of the media laughed at Representative Keith Ellison’s prediction that Donald Trump would win the Republican nomination.

Cohen argued that journalists need better ways to communicate uncertainty in forecasting. He said journalists can no longer present forecasts as bold claims about the future, but rather “as an expression of our current understanding.”

Jackson, who is the director of research at the Public Religion Research Institute, spoke about the more technical aspects of polling and forecasting. One of the main issues she raised concerned the foundation of sampling statistics, which assumes there is a well-defined population actually being sampled.

“We have this veneer of this strict quantitative, scientific methodology that we’re not even fitting the basic principles of,” she said. “Polls that are designed to capture the general public is something that we can do… as for when we’re getting an election estimate, we have no idea who’s going to vote, we’re guessing.”
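Jackson's distinction can be made concrete with a short simulation (a hypothetical sketch, not something presented at the panel): when the population is known and fixed, sampling theory behaves exactly as advertised, with roughly 95 percent of polls landing inside the familiar margin of error. The numbers below (52 percent support, 1,000 respondents) are illustrative assumptions.

```python
import random

# Hypothetical illustration: classical sampling theory assumes a fixed,
# well-defined population that every respondent is drawn from. Here we
# simulate repeatedly polling n people from a population in which 52%
# support candidate A.
random.seed(0)
true_support = 0.52
n = 1000  # respondents per poll

def run_poll(n, p):
    """Return the support share measured by one simulated poll of n respondents."""
    return sum(random.random() < p for _ in range(n)) / n

# Textbook 95% margin of error for one poll: 1.96 * sqrt(p*(1-p)/n),
# about +/- 3 points for n = 1000.
moe = 1.96 * (true_support * (1 - true_support) / n) ** 0.5

estimates = [run_poll(n, true_support) for _ in range(2000)]
mean_est = sum(estimates) / len(estimates)
within = sum(abs(e - true_support) <= moe for e in estimates) / len(estimates)

print(f"margin of error: +/- {moe:.3f}")               # roughly 0.031
print(f"average poll estimate: {mean_est:.3f}")        # close to 0.52
print(f"share of polls inside the MOE: {within:.2f}")  # roughly 0.95
```

The clean behavior here depends entirely on the population being knowable. Jackson's point is that an election "population" of future voters cannot be defined in advance, so the guarantees the simulation demonstrates no longer strictly apply.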

Jackson called for mixing in the “art form” with the data.

“We need to drop the veneer of this being a strict scientific exercise,” she said.

Given the subjective nature of polling and forecasting, the panel turned to an important ethical concern: how these models influence voters’ behavior. Jackson said forecasting and polling aren’t going anywhere, because people want something that tells them what the future looks like.

The panel agreed that forecasting and polling should be led by outlets and individuals devoted to seeking the most reliable data while being upfront with their audiences about its uncertainty. Without ethical journalists and data scientists contributing to the field, the panel argued, it would be filled with people and institutions less committed to journalistic principles.
