[Lex Computer & Tech Group/LCTG] People shouldn’t pay such a high price for calling out AI harms

Robert Primak bobprimak at yahoo.com
Tue Oct 31 06:17:56 PDT 2023


Related, and linked in the article:
How existential risk became the biggest meme in AI ("Ghost stories are contagious."):
https://www.technologyreview.com/2023/06/19/1075140/how-existential-risk-became-biggest-meme-in-ai
Our quick guide to the 6 ways we can regulate AI:
https://www.technologyreview.com/2023/05/22/1073482/our-quick-guide-to-the-6-ways-we-can-regulate-ai
Also mentioned in the article, the Overton Window (political science theory):
https://www.mackinac.org/OvertonWindow
The article relates this theory to various governments and their efforts to curb the social impacts of technology advances.
When replying, please reference articles or include only brief excerpts. I would not want this email conversation to become a string of entire articles, which creates a lot of inbox clutter for those of us just trying to scan for the main points people are making. I believe we still have an Extended Topics mailing list for more in-depth conversations like these?
Conor, in his presentation so far, has done an excellent job of pointing out how the growing secrecy around how AI and LLM models are trained has fed a lot of speculation (much of it unfounded) about where and how fast AI and LLM development is going.
A similar flurry of speculation surrounds what Microsoft may be doing with Windows 12, over a year before anyone expects the upgrade to come out. If Windows were open source, we could simply look at the code as it is being developed, and much of the speculation would be quashed. We could then plan how to respond to any possible subscription-only or cloud-only feature updates.
AI and LLM development are the subject of a lot of "reading the tea leaves" speculation because the real data and actual code bases are hidden behind a curtain of trade secrets.
There is much overlap on these topics between the discussions in our group, Natick FOSS, the Chicago Computer Society, and APCUG (a nationwide umbrella organization of dues-paying computer user groups). I have membership in and/or access to all of these groups, but I can't keep up with every corner of the discussions.
I hope I am not violating my own suggestion to keep replies brief and to the point.
-- Bob Primak 

    On Monday, October 30, 2023 at 04:23:21 PM EDT, John Rudy via LCTG <lctg at lists.toku.us> wrote:  
 
People shouldn’t pay such a high price for calling out AI harms
A very interesting article

John

  

From: The Algorithm from MIT Technology Review <newsletters at technologyreview.com> 
Sent: Monday, October 30, 2023 2:12 PM
To: john.rudy at alum.mit.edu
Subject: People shouldn’t pay such a high price for calling out AI harms

  


The Algorithm
By Melissa Heikkilä • 10.30.23
 
Welcome back to The Algorithm! 

This week everyone is talking about AI. The White House just unveiled a new executive order that aims to promote safe, secure, and trustworthy AI systems. It’s the most far-reaching bit of AI regulation the US has produced yet, and my colleague Tate Ryan-Mosley and I have highlighted three things you need to know about it. Read them here. 

The G7 has just agreed a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems. And later this week, the UK will be full of AI movers and shakers attending the government’s AI Safety Summit, an effort to come up with global rules on AI safety. 

In all, these events suggest that the narrative pushed by Silicon Valley about the “existential risk” posed by AI seems to be increasingly dominant in public discourse.
 
This is concerning, because focusing on fixing hypothetical harms that may emerge in the future takes attention from the very real harms AI is causing today. “Existing AI systems that cause demonstrated harms are more dangerous than hypothetical ‘sentient’ AI systems because they are real,” writes Joy Buolamwini, a renowned AI researcher and activist, in her new memoir Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Read more of her thoughts in an excerpt from her book, out tomorrow.  

I had the pleasure of talking with Buolamwini about her life story and what concerns her in AI today. Buolamwini is an influential voice in the field. Her research on bias in facial recognition systems made companies such as IBM, Google, and Microsoft change their systems and back away from selling their technology to law enforcement. 

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built, starting with more ethical, consensual data collection practices. “What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini told me. Read my interview with her. 

While Buolamwini’s story is in many ways an inspirational tale, it is also a warning. Buolamwini has been calling out AI harms for the better part of a decade, and she has done some impressive things to bring the topic to the public consciousness. What really struck me was the toll speaking up has taken on her. In the book, she describes having to check herself into the emergency room for severe exhaustion after trying to do too many things at once—pursuing advocacy, founding her nonprofit organization the Algorithmic Justice League, attending congressional hearings, and writing her PhD dissertation at MIT. 

She is not alone. Buolamwini’s experience tracks with a piece I wrote almost exactly a year ago about how responsible AI has a burnout problem.  

Partly thanks to researchers like Buolamwini, tech companies face more public scrutiny over their AI systems. Companies realized they needed responsible AI teams to ensure that their products are developed in a way that mitigates any potential harm. These teams evaluate how our lives, societies, and political systems are affected by the way these systems are designed, developed, and deployed. 

But people who point out problems caused by AI systems often face aggressive criticism online, as well as pushback from their employers. Buolamwini described having to fend off public attacks on her research from one of the most powerful technology companies in the world: Amazon. 

When Buolamwini was first starting out, she had to convince people that AI was worth worrying about. Now, people are more aware that AI systems can be biased and harmful. That’s the good news. 

The bad news is that speaking up against powerful technology companies still carries risks. That is a shame. The voices trying to shift the Overton window on what kinds of risks are being discussed and regulated are growing louder than ever and have captured the attention of lawmakers, such as the UK’s prime minister, Rishi Sunak. If the culture around AI actively silences other voices, that comes at a price to us all.  
 
Deeper Learning
 
Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI

Sutskever tells Will Douglas Heaven in an exclusive interview that, instead of building the hottest new AI models, his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the certainty of a true believer) from going rogue.

It gets wilder: Sutskever says he thinks ChatGPT just might be conscious (if you squint). He thinks the world needs to wake up to the true power of the technology his company and others are racing to create. And he thinks some humans will one day choose to merge with machines. Read the full interview here. 
 
Where does AI data come from? 
AI systems are notoriously opaque. In an attempt to tackle this problem, MIT, Cohere for AI, and 11 other institutions have audited and traced nearly 2,000 of the most widely used fine-tuning data sets, which form the backbone of many published breakthroughs in natural-language processing. The end product is nerdy but cool. (The Data Provenance Initiative)

AI will come for women first
Researchers from McKinsey argue that the jobs most at risk of being replaced by generative AI will be in customer service and sales—both professions that employ lots of women. (Foreign Policy) 

What the UN’s AI advisory group is up to
The United Nations has been eager to step up and take a more active role in overseeing AI globally. To that end, it has amassed a team of AI experts from both industry and academia tasked with coming up with recommendations that will shape what a potential UN agency for AI governance will look like. This is a nice explainer. (Time)

AI is slowly reenergizing San Francisco
High housing costs, crime rates, and poverty have plagued the people of San Francisco for years. But now a new crop of buzzy AI startups is starting to draw money, people, and “vibes” back into the city. (The Washington Post $)

Margaret Atwood is not impressed with AI literature
The author, who published a searing review of a story written by a large language model, makes a strong case for why published authors don’t need to worry about AI. (The Walrus)  
 
===============================================
::The Lexington Computer and Technology Group Mailing List::
Reply goes to sender only; Reply All to send to list.
Send to the list: LCTG at lists.toku.us      Message archives: http://lists.toku.us/pipermail/lctg-toku.us/
To subscribe: email lctg-subscribe at toku.us  To unsubscribe: email lctg-unsubscribe at toku.us
Future and Past meeting information: http://LCTG.toku.us
List information: http://lists.toku.us/listinfo.cgi/lctg-toku.us
This message was sent to bobprimak at yahoo.com.
Set your list options: http://lists.toku.us/options.cgi/lctg-toku.us/bobprimak@yahoo.com
  