“What can artificial intelligence teach us about fairness?”


Artificial intelligence is becoming a central part of people’s lives, even if they don’t realize it. So many everyday functions have an artificial intelligence component – from auto-correct on text messages to map routes, from home loan approvals to Netflix suggestions.

But while that may sound innovative, questions of “fairness” have arisen. For example, when it comes to home mortgages, white males tend to be approved more often and get lower interest rates, compared with others. But is it fair for that group to get an advantage over others? Questions like those are messy and difficult to solve, said David Weinberger, a senior researcher at Harvard’s Berkman Klein Center for Internet & Society.

Weinberger, who has been studying the effect of technology on ideas and human behavior, spoke on Feb. 7 at Pizza, Press & Politics, a lunchtime speaker series at Northeastern University, sponsored by the School of Journalism.

Weinberger said the topic of fairness is incredibly complex, with hundreds of scientists trying to understand it. Yet determining whether something is fair or unfair is the easy part, said Weinberger, author of many books, including “Everyday Chaos,” out in May.

“Fairness has value because it is so simple,” he added. The hard part is determining a solution once you decide that something is unfair. “Fairness is not an absolute. It involves making difficult tradeoffs, and that’s where it gets really complicated.”

For example, the definition of what is “fair” will often be disputed by people who feel the definition works against them.

He explained how computers can reflect bias. People sometimes assume computers are perfectly fair. But computers use algorithms to determine results, such as deciding who gets a home mortgage. If the programmer writes biased choices into the algorithm, the end result will be biased. The computer system then reinforces that bias, systematically repeating the same mistake over and over.
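A minimal sketch of that idea (hypothetical data and thresholds, not from the talk): a loan-approval rule where the programmer has hard-coded a stricter bar for one group, so the system repeats the same unfair decision for every applicant it sees.

```python
def approve_loan(credit_score, group):
    """Toy loan-approval rule with a biased choice baked in."""
    # Biased choice written into the program: group "B" applicants
    # must clear a higher credit-score threshold than group "A".
    threshold = 650 if group == "A" else 700
    return credit_score >= threshold

# Two applicants with identical credit scores, different groups.
applicants = [
    {"name": "applicant 1", "credit_score": 680, "group": "A"},
    {"name": "applicant 2", "credit_score": 680, "group": "B"},
]

for a in applicants:
    print(a["name"], "approved:", approve_loan(a["credit_score"], a["group"]))
# Identical scores, different outcomes: group A is approved, group B
# is denied, and the program will repeat this for every future case.
```

The bias here is easy to see because it is a single hard-coded line; in real systems the same effect can hide in which data and features the programmer chose.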

Weinberger summarized his research:

  • AI makes us face these questions, but that is only the start.
  • Fairness is usually not a part of the equation. It’s a matter of trade-offs.
  • You can’t correct for one issue without affecting the others.
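The trade-off point above can be made concrete with a toy example (hypothetical scores, not from the talk): two common intuitions about fairness, treating everyone by the same rule versus producing equal outcomes across groups, cannot both hold on the same data.

```python
# Hypothetical credit scores for two groups with different distributions.
scores = {
    "A": [720, 700, 680, 640],
    "B": [700, 660, 630, 610],
}

def approval_rate(group_scores, threshold):
    """Fraction of a group's applicants at or above the threshold."""
    return sum(s >= threshold for s in group_scores) / len(group_scores)

# Intuition 1: equal treatment -- one threshold for everyone.
rate_a = approval_rate(scores["A"], 650)  # 3 of 4 approved
rate_b = approval_rate(scores["B"], 650)  # 2 of 4 approved
# The approval rates differ, so "equal outcomes" is violated.

# Intuition 2: equal outcomes -- lower group B's bar until rates match.
rate_b_adjusted = approval_rate(scores["B"], 630)  # 3 of 4 approved
# Now the rates match, but the two groups no longer face the same
# threshold: correcting one fairness criterion broke the other.
```

This is the shape of the dilemma Weinberger described: each definition of fairness is defensible on its own, and the contested part is choosing which trade-off to accept.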

Andres is a journalism student at Northeastern University.
