Technology companies and journalists must build public trust in AI together, experts say
“Can we trust AI?” asked Rupal Patel, a Northeastern University professor who also founded synthetic voice company VocaliD. “Who should take responsibility for AI?”
Northeastern hosted a celebratory event for its new AI Literacy Lab on Oct. 18. Even after two hours of questions posed to artificial intelligence scientists and journalists, there were no simple answers. According to the panelists, the questions cannot be answered simply because the technology is developing so rapidly.
“Three years ago, people who were doing AI talked about AI,” said Ashish Jaiman, the leader of Microsoft Corporation’s generative AI initiatives. “Now, everyone knows about AI. Even the Uber driver I met in India talked about AI to me.”
The other panelists included Cansu Canca, the director of Northeastern’s Responsible AI Practice; John Wihbey, an associate professor of media innovation and technology in the School of Journalism; Nikita Roy, a data scientist and media entrepreneur; and Joanna Weiss, the executive director of the AI Literacy Lab.
“AI grows so quickly,” Patel said. “Now, AI has been used in many industries, like healthcare and journalism.”
With AI now woven into daily life, the panelists discussed questions of ethics, trust and responsibility. According to a survey conducted by the AI Literacy Lab, 55% of the 1,000 respondents do not feel confident that AI will be developed responsibly, and 64% believe governments should regulate AI.
Because AI regulation is still lacking, the panelists argued that the responsibility falls on technology companies and journalists. While Microsoft, for example, has guidelines around using AI, many important but controversial questions remain in gray areas.
“There are many questions [that] need to be discussed,” Jaiman said. “What is harm, how to measure harm and who can be harmed? These answers may also change as AI is evolving.”
Some companies try to convince users that their AI is completely ethical, but Canca disagreed with that approach.
“Instead of persuading users that the AI is ethical,” she said, “I believe tech companies should focus on designing ethically; then the trust will gradually come.”
Soon, people may not be able to tell whether what they are consuming was generated by AI. With deepfakes and misinformation spreading, how to build and rebuild public trust is a necessary and urgent question.
“I don’t think there will be a standardized AI guideline for journalists,” Roy said. “But it is all about transparency.”
For example, according to the USA Today Network Principles of Ethical Conduct For Newsrooms, if AI-assisted content is approved for publication, journalists must disclose the use of AI and its limitations to their audience. Although the question of public trust still has no standardized answer, collaboration between scientists and journalists could help.
“Tech people love to get [the products] done quickly,” Roy said. “But journalists love asking ethical questions. So, the collaboration is important and necessary.”