NICAR 2022 Tutorials

How To: Use investigative techniques to hold algorithms and artificial intelligence accountable

Editor’s note: This article is based on virtual reporting from the livestreamed panel on March 3, 2022.

Many tech companies, including Silicon Valley giants, are beginning to grapple with the ethics of algorithmic decision-making, which can introduce unintended biases into society.

At the NICAR 2022 conference, which began Thursday in Atlanta, journalists Surya Mattu and Hilke Schellmann shared methods and insights for holding algorithms and artificial intelligence accountable.

“We focus a lot on complexity of algorithms,” said Mattu, an investigative data journalist and senior data engineer at The Markup. “But the actual harm is in laziness, incompetence, and not following due diligence to make sure data is clean.”

Stress testing systems to dig deep into how they work

AI crawlers can comb through people’s calendars to analyze productivity, explained Schellmann, a journalism professor at NYU and a freelance reporter. In a four-part series for MIT Technology Review, Schellmann tested algorithms with adversarial cases — inputs that push beyond the boundaries of how people usually use this technology.

Because documentation comes in handy as bulletproof evidence, when she investigated a video interview system she recorded herself, her screen and the audio.

AI systems often perform poorly for people with disabilities or for those with atypical speech patterns. Although the developers claim their algorithms understand all accents and that candidates whose scores fall past a minimum threshold have their profiles rerouted to humans, Schellmann said that was not true in the two cases she investigated.

In another test of an AI system, Schellmann had a computer-generated voice read an answer, which scored 2% higher than when she read it in her real voice. Companies neither detect this kind of substitution nor fully understand how the decisions are made, Schellmann said.
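For readers who want to try a similar test, the sketch below shows one way to render a prepared answer with a synthetic voice so it can be compared against a human reading. It uses the open-source pyttsx3 library purely as an example; it is not the tool Schellmann used, and the sample answer text is invented.

```python
# Hypothetical sketch: render a prepared interview answer with a synthetic voice
# so it can be played back into a video-interview system and compared with a
# human reading. pyttsx3 is used only as an example; not the tool from the panel.
import pyttsx3

ANSWER = (
    "I handled a tight deadline by breaking the project into smaller tasks "
    "and checking in with my team every morning."
)

engine = pyttsx3.init()
engine.setProperty("rate", 150)                       # roughly conversational speed
engine.save_to_file(ANSWER, "answer_synthetic.wav")   # write audio instead of playing it
engine.runAndWait()
print("Wrote answer_synthetic.wav")
```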


Using your own data to test AI systems

Schellmann also mentioned Israeli facial-imaging software whose makers claim the algorithm can scan photos and tell whether the person pictured is a pedophile or a terrorist, a claim with little science behind it.

So she tested the system with her own batch of photos of convicted terrorists and found that while it correctly flagged a few brown-skinned 9/11 hijackers, it did not classify the Caucasian terrorists in the batch as terrorists. This underscores the need for journalists to independently test AI systems as much as possible.
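One way to structure that kind of independent audit is sketched below, assuming you have a labeled batch of photos and some way to call the vendor’s classifier; the CSV columns and the classify_photo() call here are stand-ins, not a real vendor API. The idea is simply to compare how often the system flags known positives in each demographic group.

```python
# Hypothetical audit sketch: run a classifier over a labeled batch of photos and
# compare detection rates across groups. classify_photo() and the CSV layout are
# placeholders, not any real vendor's interface.
import csv
from collections import defaultdict

def classify_photo(path):
    """Stand-in for the vendor's classifier; replace with the real call."""
    return {"terrorist_score": 0.0}

def audit(manifest_csv="photos.csv", threshold=0.5):
    # photos.csv is assumed to have columns: path, group, is_terrorist (0/1)
    hits, totals = defaultdict(int), defaultdict(int)
    with open(manifest_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["is_terrorist"] != "1":
                continue  # only measure recall on the known positives
            totals[row["group"]] += 1
            score = classify_photo(row["path"])["terrorist_score"]
            if score >= threshold:
                hits[row["group"]] += 1
    for group in sorted(totals):
        rate = hits[group] / totals[group]
        print(f"{group}: flagged {hits[group]}/{totals[group]} ({rate:.0%})")

if __name__ == "__main__":
    audit()
```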

Citizen Browser to audit algorithms used by Facebook and YouTube

Mattu outlined Citizen Browser, which is the first real-time application “to audit the algorithms that social media platforms use to determine what information they serve their users, what news and narratives are amplified or suppressed, and which online communities those users are encouraged to join.”

Mattu explained that users were paid to share their data, and an app running in the background collected data from their Facebook feeds three times a day. Monitoring the pages and groups Facebook recommends to them gives more insight into the workings of the algorithm, especially because that information isn’t available through an Application Programming Interface (API), a set of definitions and protocols for building and integrating application software.
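The general pattern, a consenting panelist’s machine taking periodic snapshots of what the platform shows them and logging each snapshot for later analysis, can be sketched roughly as follows. The fetch step is a placeholder that returns sample data, and none of this is The Markup’s actual Citizen Browser code.

```python
# Hypothetical sketch of a panel-style collector (not The Markup's code): snapshot
# the recommended groups visible to a consenting user three times a day and append
# each snapshot to a local JSONL file for later analysis.
import json
import time
from datetime import datetime, timezone

SNAPSHOTS_PER_DAY = 3
INTERVAL_SECONDS = 24 * 60 * 60 // SNAPSHOTS_PER_DAY

def fetch_recommended_groups():
    """Placeholder for a browser-automation step that would return the group
    names currently shown to the logged-in panelist."""
    return ["Example Group A", "Example Group B"]  # stand-in data

def record_snapshot(path="snapshots.jsonl"):
    snapshot = {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "recommended_groups": fetch_recommended_groups(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(snapshot) + "\n")

if __name__ == "__main__":
    while True:
        record_snapshot()
        time.sleep(INTERVAL_SECONDS)  # roughly three collections per day
```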

“The idea here was to show that independent verification has value and can provide people with the way to interrogate companies,” said Mattu.

Blacklight to uncover user-tracking technologies on websites

Mattu also spoke about Blacklight, which assesses privacy violations that could be happening on a website in real time. A user inputs a website URL and the tool lists specific user-tracking technologies that are otherwise hidden. As of last week, more than 1.8 million people had scanned websites through Blacklight, Mattu said.
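Blacklight’s own pipeline is more sophisticated, but the basic idea of scanning a URL for third-party trackers can be illustrated with a small sketch like the one below. This is not Blacklight’s implementation: the blocklist is tiny and purely illustrative, and the script only looks at off-site script tags rather than cookies, pixels or session recorders.

```python
# Minimal, hypothetical tracker-scan sketch (not Blacklight's implementation):
# fetch a page, list the hosts of third-party <script src> tags, and flag any
# that appear on a small, illustrative blocklist.
import re
import sys
from urllib.parse import urlparse
from urllib.request import Request, urlopen

KNOWN_TRACKERS = {"google-analytics.com", "doubleclick.net", "facebook.net"}  # illustrative only

def third_party_script_hosts(url):
    req = Request(url, headers={"User-Agent": "Mozilla/5.0 (tracker-scan sketch)"})
    html = urlopen(req, timeout=15).read().decode("utf-8", errors="replace")
    page_host = urlparse(url).netloc
    hosts = set()
    # Grab the host of every <script src="..."> that points off-site.
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, flags=re.I):
        host = urlparse(src).netloc
        if host and host != page_host:
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    for host in sorted(third_party_script_hosts(sys.argv[1])):
        flag = " (known tracker)" if any(host.endswith(t) for t in KNOWN_TRACKERS) else ""
        print(host + flag)
```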


In response to an audience member who asked how much the journalists on the panel consider the model architecture of an AI system, Schellmann said, “With deep neural networks, a lot of times developers themselves do not know how algorithms make decisions and that’s a real problem in high stakes situations.”
