This article was originally published at the Reynolds Journalism Institute and republished on IJNet with permission.
AI is human-designed, so it's no surprise that it reflects human biases.
In 2020, Microsoft laid off dozens of newsroom workers and replaced them with AI. Unfortunately, the company did not account for bias in its algorithms, including their frequent inability to distinguish between people of color. Shortly after Microsoft's robot reporters debuted, the news-skimming algorithm published an article on MSN about Little Mix's Jade Thirlwall and her personal reflections on racism, complete with a picture of the wrong person. Instead of a photo of Thirlwall, the algorithm posted a photo of her bandmate Leigh-Anne Pinnock.
The inability of AI to recognize the faces of people of color is a matter of great concern. In 2021, the documentary Coded Bias followed Joy Buolamwini, a computer scientist at the MIT Media Lab, after she made the startling discovery that AI facial recognition software could not reliably detect dark-skinned faces or accurately recognize women.
Why are algorithms racist?
In her book Artificial Unintelligence, data journalism professor Meredith Broussard explains that the term "machine learning" is somewhat misleading. When computer scientists say that AI applications "learn," they don't mean learning in the human sense. AI learns from training data: large datasets that teach it statistical patterns in the world. In essence, AI gets better and faster at solving problems because it can predict outcomes from the data it was trained on. The trade-off is that the machine misses many nuances of human intelligence and communication; it probably won't detect sarcasm or figures of speech, for example.
AI is also created by humans, and humans have biases. If a dataset reflects human biases, the AI will produce biased output. When Amazon used AI to sift through resumes and screen applicants, for example, it quickly became clear that the algorithm was screening out women's resumes.
The algorithm was trained on the resumes of successful employees, and Silicon Valley is not known for its gender diversity. As a result, the tool began rejecting resumes containing language associated with women, penalizing those that included the word "women's" and those that named certain women's colleges. Amazon had to stop using the tool shortly after introducing it. Although it was modified to be more neutral, there was no way to verify that it would not discriminate again, so it was shelved.
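The dynamic behind the Amazon case can be illustrated with a toy sketch (not Amazon's actual system): a scorer that learns word weights from historical hiring decisions. Because the invented history below skews male, the word "women's" ends up with a negative weight, and any resume containing it is penalized.

```python
# Toy illustration of biased training data, NOT Amazon's real tool.
# A word's weight is its hire rate in the history minus the overall hire rate.
from collections import Counter

def train_word_weights(resumes):
    """resumes: list of (text, hired) pairs. Returns {word: weight}."""
    word_hired, word_total = Counter(), Counter()
    base_rate = sum(1 for _, hired in resumes if hired) / len(resumes)
    for text, hired in resumes:
        for word in set(text.lower().split()):
            word_total[word] += 1
            if hired:
                word_hired[word] += 1
    return {w: word_hired[w] / word_total[w] - base_rate for w in word_total}

def score(text, weights):
    """Sum the learned weights of the words in a resume."""
    return sum(weights.get(w, 0.0) for w in text.lower().split())

# Invented history mirroring a male-dominated workforce:
history = [
    ("captain chess club", True),
    ("captain debate team", True),
    ("women's chess club captain", False),
    ("women's debate society", False),
]
weights = train_word_weights(history)

# "women's" correlates with rejection in the training data,
# so an otherwise identical resume scores lower for containing it:
print(score("women's chess club", weights) < score("chess club", weights))  # True
```

The point of the sketch is that nothing here is malicious: the scorer faithfully reproduces the statistical pattern it was given, which is exactly why skewed historical data yields a skewed tool.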
How can I use AI while accounting for bias?
While it's important to consider the drawbacks of algorithmic bias, we don't have to dismiss AI altogether. Responsible use of AI means recognizing that humans feed their biases to machines, and that human intervention is still needed in many cases. In the case of the AI editor that published a photo of the wrong Black woman, the error could have been avoided if a human editor had simply checked the post.
An understanding of algorithmic bias is therefore useful for any newsroom looking to add machine learning applications to parts of the news cycle. AI is unlikely to replace human journalists anytime soon, a view consistent with responses from local news decision makers who participated in the Associated Press study on AI use in local newsrooms.
Here are ways to account for algorithmic biases when using AI in the newsroom:
- Confirm spelling of names in transcripts;
- Use a human fact checker;
- Make sure the photos are of the right people before posting the story;
- Regularly audit AI applications for bias.
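The audit step above can be sketched as a simple check that compares an AI tool's error rates across demographic groups and flags any group the tool serves noticeably worse. The groups, numbers, and threshold here are invented for illustration:

```python
# Hypothetical bias audit: flag groups whose error rate exceeds the
# best-performing group's rate by more than a chosen threshold.
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    return sum(p != t for p, t in zip(predictions, labels)) / len(labels)

def audit_by_group(results, threshold=0.1):
    """results: {group: (predictions, labels)}. Returns flagged groups."""
    rates = {g: error_rate(p, t) for g, (p, t) in results.items()}
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > threshold}

# Invented face-matching results logged per group (1 = correct match):
results = {
    "lighter-skinned men": ([1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                            [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]),
    "darker-skinned women": ([1, 1, 0, 1, 0, 1, 0, 1, 0, 0],
                             [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
}
flagged = audit_by_group(results)
print(flagged)  # the group(s) with a disproportionate error rate
```

A real audit would use far more data and established fairness metrics, but even a rough per-group comparison like this can surface the kind of disparity that Coded Bias documented.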
AI works better as an assistant than as an unsupervised agent. While the potential of algorithmic applications in the newsroom is still developing, we can start by building a basic understanding of how they work and how to engage in better service journalism with the technology's support.
Photo by Resource Database™ on Unsplash.