Artificial intelligence (AI) is no longer a projection of future possibilities; it is part of everyday business practice. Machine learning (ML) powers predictive modeling across an array of industries, from healthcare to finance to security.
The question businesses have to address is: are we taking care not to misuse AI by letting it reinforce the human biases embedded in its training data?
For insight into the factors behind that assurance, Martine Bertrand, Lead AI at Samasource in Montreal, shared her thoughts. Bertrand holds a Ph.D. in physics and applies that scientific rigor to ML and AI.
The Source of Bias
Bertrand concurs with what other experts have pointed out: “The model doesn’t choose to have a bias,” she said; rather, it “learns from the data it is exposed to.” Consequently, a data set skewed toward a certain category, class, gender, or skin color will likely produce an inaccurate model.
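A toy illustration of Bertrand's point, using entirely made-up data: a naive "model" that simply learns the majority label from a skewed training set can look accurate overall while failing the under-represented group completely.

```python
from collections import Counter

# Hypothetical skewed training set: 90 samples from group A, 10 from
# group B, where the correct label happens to differ by group.
train = [("A", "positive")] * 90 + [("B", "negative")] * 10

# A naive "model" that ignores the group entirely and always predicts
# the most common label it saw during training.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

# Evaluate on data with the same 90/10 group split.
test = [("A", "positive")] * 90 + [("B", "negative")] * 10
overall = sum(majority_label == label for _, label in test) / len(test)
on_group_b = sum(majority_label == label
                 for g, label in test if g == "B") / 10

print(overall)     # 0.9  -- looks like a decent model...
print(on_group_b)  # 0.0  -- ...but it is always wrong for group B
```

The headline accuracy of 90% hides the fact that the model is wrong every single time for the minority group, which is exactly the failure mode biased data sets produce.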
We saw several examples of such biased models in Can AI Have Biases? Bertrand referred to one of those instances, Amazon’s Rekognition, which came under fire over a year ago when Joy Buolamwini made it a focus of her research.
Buolamwini found that while Rekognition achieved 100% accuracy in recognizing light-skinned males and 98.7% accuracy for darker-skinned males, accuracy dropped to 92.9% for light-skinned women and just 68.6% for darker-skinned women.
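Disparities like these surface in a simple per-group accuracy audit. The sketch below is a minimal, hypothetical illustration (the group names and predictions are invented, not Buolamwini's data): it tallies predictions by demographic subgroup and reports each group's accuracy separately, rather than a single overall score.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Compute accuracy separately for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group -> fraction of correct predictions.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit records: (group, predicted gender, true gender)
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"),   # misclassification
    ("darker_female", "female", "female"),
]

print(per_group_accuracy(records))
# {'lighter_male': 1.0, 'darker_female': 0.5}
```

Reporting accuracy per subgroup, as Buolamwini did, is what makes a gap like 100% versus 68.6% visible; an aggregate accuracy number would have averaged it away.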
Despite demands for its removal from law enforcement agencies, the software remained in use. Bertrand finds that outrageous, given the danger inherent in relying on biased outcomes in that context.