
Wednesday, May 29, 2019

Ethical concerns raised by AI

Nearly every day, we hear about new advances in AI that enable new ways to monitor activities and people, transforming many processes in our day-to-day lives.
Almost as often, we hear how AI can exacerbate racial and gender bias and pose a threat to privacy, job security, and economic well-being. In Elon Musk's view, it could even spark a war.

AI-powered facial recognition raises concerns over privacy and bias

As explained in Facial Recognition Concerns: Microsoft's Six Ethical Principles, “The widespread use of Artificial Intelligence-powered facial recognition technology can lead to some new intrusions into people’s privacy.”
Given the ability to capture people's images and identify them on public streets in the name of security, people are rightfully concerned that they will lose their ability to maintain any privacy. That concern extends to environments at school and work, as detailed in the article.
A 2018 New York Times article raised another concern with the headline, “Facial Recognition Is Accurate, if You’re a White Guy.” The problem is this:
“The darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender.”
The source of these figures is Joy Buolamwini, a researcher at the MIT Media Lab and the founder of the Algorithmic Justice League (AJL). She has devoted herself to uncovering how biases seep into AI and skew facial recognition results.
She also discusses her findings in her TED Talk.
This year, Buolamwini published the findings of her research with Inioluwa Deborah Raji of the University of Toronto in Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.
According to that study, Amazon's Rekognition software also performed poorly on people outside the white-male category: it misidentified women as men almost one out of five times, and it incorrectly classified darker-skinned women as men 31 percent of the time.
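The kind of audit behind these figures amounts to comparing a classifier's error rate separately for each demographic subgroup, rather than reporting a single overall accuracy. A minimal sketch of that idea, using made-up predictions (the group labels, data, and function names here are hypothetical, not from the study itself):

```python
# Illustrative sketch of a disaggregated error-rate audit:
# compute a classifier's error rate per subgroup instead of overall.
# All data below is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns {group: fraction of records misclassified}."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical outputs from a gender classifier
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassified
    ("darker-skinned female", "female", "female"),
]
rates = error_rates_by_group(records)
# An overall accuracy of 75% here would hide that all the errors
# fall on one subgroup.
```

A single aggregate accuracy number can look impressive while concealing exactly the disparity Buolamwini and Raji measured; breaking errors out by group is what makes the bias visible.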

Read more in

Our Brave New World: Why the Advance of AI Raises Ethical Concerns