
Showing posts with label bias. Show all posts

Thursday, May 18, 2023

Did OpenAI Open Pandora's Box?


Pandora opening the box that releases harm into the world. Image at: https://nypl.getarchive.net/media/pandora-opens-the-box-ca3915



OpenAI not only democratized access to AI but popularized it by inviting people to use it for free. Many of us have opened that box, and some of us have been dismayed that the results read as objective truth, with no accountability for sources of information and no explanation of how ChatGPT arrives at its conclusions.

Relying on AI as an objective source of information ignores the fact that it reflects the biases embedded by its human programmers and can reinforce discriminatory effects. The consequences range from biased beauty standards to the reinforcement of illegal discriminatory practices.

Now that ChatGPT costs just $20 a month, or is even available for free at off-peak times, everyone can -- and many do -- use it instead of doing research across a variety of sources whose documentation offers at least some of the accountability that is essential for explainable AI.

Read more here:  https://www.linkedin.com/pulse/did-open-ai-pandoras-box-write-way-pro/?trackingId=E3yor3MrHAi2B2HwIdqlVQ%3D%3D


 Related:


An A/B Test of Generative AI


AI's Got Some Explaining to Do

AI's early attempts at screenwriting

The Pros and Cons of Generative AI

11 Quotes About AI 

AI Informs Personalization for Starbucks

AI Accessibility: The Next Spreadsheet Revolution for Modern Business? 



http://uncommoncontent.blogspot.com/2021/01/the-original-selection-of-11-ai-quotes.html


Monday, August 17, 2020

Diversity produces better quality for AI

Artificial Intelligence (AI) is no longer just a projection of future uses but a part of everyday business practice. Machine learning (ML) powers predictive modeling across an array of industries, from healthcare to finance to security.
The question businesses have to address is: are we being careful not to misuse AI by letting it reinforce the human biases in its training data?
For insight into the various factors that play into that assurance, Martine Bertrand, Lead AI at Samasource in Montreal, shared her thoughts. Bertrand holds a Ph.D. in physics and has applied her scientific rigor to ML and AI.

The Source of Bias

Bertrand concurs with what other experts have pointed out: “The model doesn’t choose to have a bias,” she said; rather, it “learns from the data it is exposed to.” Consequently, a data set that is biased toward a certain category, class, gender, or skin color will likely produce an inaccurate model.
We saw several examples of such biased models in Can AI Have Biases? Bertrand referred to one of those instances, Amazon's Rekognition, which came under fire over a year ago when Joy Buolamwini focused her research on its effects.
Buolamwini found that while Rekognition had 100% accuracy in recognizing light-skinned males, and 98.7% accuracy even for darker-skinned males, the accuracy dropped to 92.9% for women with light skin and just 68.6% for darker-skinned women.
Despite the demand for its removal from law enforcement agencies, the software remained in use. Bertrand finds that outrageous because of the potential danger inherent in relying on biased outcomes in that context.
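The kind of disparity Buolamwini measured only surfaces when accuracy is reported per demographic group rather than as a single aggregate number. A minimal sketch of that disaggregated evaluation (the group names and records below are illustrative, not data from her study):

```python
# Disaggregated evaluation: compute accuracy per demographic group
# instead of one aggregate score. All data here is illustrative only.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        if pred == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),  # misclassification
]
print(accuracy_by_group(records))
# → {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

An aggregate accuracy over these four records would be 75%, which hides the fact that one group is served far worse than the other; reporting per-group numbers is what exposed Rekognition's gap.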

Wednesday, February 22, 2017

Shining light on the dark side of big data

Does the shift toward more data and algorithmic direction for our business decisions assure us that organizations and businesses are operating to everyone's advantage? There are a number of issues involved that some people feel need to be addressed going forward.
Numbers don't lie, or do they? Perhaps the fact that they are perceived to be absolutely objective is what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome.
That's the argument Cathy O'Neil makes in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While we tend to think of big data as a counterforce to biased, unjust decisions, O'Neil finds that in practice it can reinforce biases even while claiming unassailable objectivity.
“The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong.” The math destruction posed by algorithms is the result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, and so on.

Now the organizations and businesses that make those decisions can point to the authority of the algorithm and shut down any discussion that questions the decision. In that way, big data can be misused to increase inequality. Because algorithms are not created in a vacuum but are born of minds operating in a human context that already has some set assumptions, they can extend the reach of human biases rather than counteract them.

“Even algorithms have parents, and those parents are computer programmers, with their values and assumptions,” wrote Alberto Ibargüen, president and CEO of the John S. and James L. Knight Foundation (https://www.knightfoundation.org/articles/ethics-and-governance-of-artificial-intelligence-fund). “As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more.”

I spoke with the foundation’s VP of Technology Innovation, John Bracken, about its partnership with the MIT Media Lab, the Berkman Klein Center for Internet & Society, and other individuals and organizations to create a $27 million fund for research in this area.
The idea is to open the way to “bridging” together “people across fields and nations” to pull together a range of experiences and perspectives on the “social impact” of the development of artificial intelligence. As AI is on the road “to impact every aspect of human life,” it is important to think about shaping policies for the “tools to be built” and how they are to be implemented.
Read more in:

Algorithms' Dark Side: Embedding Bias into Code