[Lex Computer & Tech Group/LCTG] AI bias
Carl Lazarus
carllazarus at comcast.net
Sat Apr 9 10:06:21 PDT 2022
Sometimes it can make sense to use statistical correlations. When deciding where to add police patrols, for example, it makes sense to use zip-code crime data. But when dealing with an individual and deciding whether to parole that person, information about the individual seems far more relevant than information about his zip code.
Carl Lazarus
carllazarus at comcast.net
617-964-7241 (H)
From: LCTG <lctg-bounces+carllazarus=comcast.net at lists.toku.us> On Behalf Of Jon Dreyer
Sent: Saturday, April 09, 2022 12:48 PM
To: lctg at lists.toku.us
Subject: Re: [Lex Computer & Tech Group/LCTG] AI bias
Not every instance of apparent bias represents bias. John's example seems like an open-and-shut case of bias, but Jerry's may not be. Since zip codes correlate with poverty and poverty correlates with crime, it wouldn't surprise me if zip codes correlate with crime. (Probably not with amount stolen per capita!) It wouldn't surprise me at all if criminal records of relatives correlate more strongly with crime.
Whether it's ethical to use that data for a given purpose is a separate question.
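A minimal sketch of that correlation point, using invented synthetic data (the variable names, sizes, and noise levels are assumptions for illustration, not real statistics): a feature like a zip-code risk score that merely correlates with an unobserved driver such as poverty will itself correlate with the outcome, so a model fed that feature will lean on it even though it plays no causal role in this toy setup.

# Synthetic illustration: "poverty" is the real driver; "zip_risk" is only a proxy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
poverty = rng.normal(size=n)                          # unobserved cause
zip_risk = poverty + rng.normal(scale=0.5, size=n)    # zip-code proxy, correlated with poverty
crime = poverty + rng.normal(size=n) > 1.0            # outcome depends on poverty, not the proxy

# The proxy still correlates with the outcome, so a predictive model would happily use it:
print("corr(zip_risk, crime):", np.corrcoef(zip_risk, crime.astype(float))[0, 1])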
--
Jon "Bias Sometime" Dreyer
Math tutor/Computer Science tutor <http://www.passionatelycurious.com>
Jon Dreyer Music <http://music.jondreyer.com>
On 4/9/22 11:22, Jerry Harris wrote:
John,
You might find Cathy O'Neil's book "Weapons of Math Destruction" interesting, since it contains more examples in the same vein as the article. The author attended Lexington High School and gave a talk about her book at Follen Church several years ago. It includes stories such as an algorithm that uses data like zip code and relatives' criminal records to calculate a prisoner's likelihood of committing another crime if released on parole.
https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815
Jerry
On Sat, Apr 9, 2022 at 10:47 AM john rudy <jjrudy1 at comcast.net> wrote:
I am taking an AI course, and a major issue is the bias built into some AI systems. If, for example, you pick new hires based on who was successful in the past, you'll mostly hire white males, because those are mostly whom you hired in the past. Here is a fascinating article from the course:
https://www.wired.com/story/excerpt-from-automating-inequality/
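A hedged sketch of that hiring example, again with made-up data (the group/skill variables and the 1.5 bias term are assumptions, not from the article): if the historical "successful hire" labels favored one group, a model trained on them scores an identically qualified candidate from that group higher.

# Synthetic data: two groups with the same skill distribution, but biased past labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)                    # demographic group, 0 or 1
skill = rng.normal(size=n)                            # identical distribution in both groups
hired = skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0  # biased historical labels

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])  # the group-1 candidate scores higher

Dropping the group column does not fully fix this if other features (zip code, say) proxy for group membership, which loops back to the correlation point above.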