[Lex Computer & Tech Group/LCTG] What if companies could read your mind? Neurotechnology is coming, and your cognitive liberty is at stake. - The Boston Globe

Robert Primak bobprimak at yahoo.com
Mon Mar 20 19:27:36 PDT 2023


 I won't try to argue politics at length with Dick Miller. He sees what he sees, reads what he reads, and has his own blog about what he learns. And he has a right to his opinions. 
What I will point out is that, after stripping away all the politics from Dick's reply, I have to conclude he seriously misunderstands what AI is and what it can and cannot do.
There has never been a plausible demonstration of any AI thinking original thoughts at all. Nothing any AI to date has done has in any way gone beyond what human programmers trained the AI to do. There are no truly novel results in any AI project to date that have been publicly displayed. Even AI "works of art" have their roots in artistic (and sometimes scientific) plagiarism. Think of the uproar over music "sampling" that erupted in the 1980s and apply that to the visual arts. 
I am aware that some AI insiders think they are seeing much more originality coming from AI than has been objectively proven to be the case.
As for "double candidates", I was living in DuPage County, IL the year Lyndon Larouche, with no assist from social media or the Internet, set up a full slate of "democrat" candidates in the Democrat Primary, totally embarrassing the real IL Democrat Party, and totally subverting State politics for the following twelve to sixteen years. This practice of subverting politics way predates AI as we know it today.
Trump, Inc. did not use AI. And they were not "used by" any foreign powers. (Trump did use a series of interviews with NBC News correspondent Katy Tur to make himself look and sound implausible as a winning candidate.) They used the anonymity and contagion of crowds, principles demonstrated by Karl Marx and Friedrich Engels in the 1840s, and by Le Bon regarding the French Revolution and the Fronde before that. Trump, Inc. were in full control of their actions in doing these things.  
This is what the "echo chamber" effect in social media is at its core. Bots actually have little to do with contagion or emboldening people by allowing them anonymity. The effect can happen perfectly well with no bots and no AI influence. No "magic algorithms" are needed to induce this type of behavior in anonymous crowds. In my lifetime, the same things happened during the antiwar rallies of the 1960s and very recently in the BLM protests during the COVID pandemic. The sociological dialectic is as old as Civilization. Even the ancient Israelites are said by some to have practiced it. 
And if you want to read up on Populism (Trumpism), look at Theodore Roosevelt and his Bull Moose movement within the Republican Party early in the 20th century. All the same xenophobic, identity-politics, and anti-labor elements were present in that movement, long before the Internet and its bots. Fun fact: at the time, there were more Klansmen in Indiana than in Mississippi. (Source: https://www.gilderlehrman.org/history-resources/teacher-resources/statistics-immigration-america-ku-klux-klan-membership-1915 ) 
All of which is irrelevant to the question at hand: Is there really any such thing as truly autonomous AI? The answer is definitively "No". If the Tesla self-driving cars are any indication, the day of autonomous AI is very far off indeed.
But yes, if we could define what truly autonomous AI might look like, we should put the guardrails around it before the machines rise up and take over the world.
I'm afraid we're creating an "extended topic" here. Sorry about doing that on the main mailing list. Feel free to move this discussion into Extended Topics if people don't want to receive these chains of lengthy postings on this topic. 
-- Bob Primak 
    On Monday, March 20, 2023 at 06:25:37 PM EDT, Dick Miller <themillers at millermicro.com> wrote:  
 
  Hi, Bob and All:
 
 Bob, I agree with much of what you say. But...
 
The fact is, no tech now available or on the horizon can decode human thoughts, let alone change them.
 Decode? I expect it will get to that - if nuclear war and/or climate disruption don't stop it. I expect its early damage will be done by analyzing and modifying bulk human thoughts - brainwashing - like opinion polls, and what entities (from politicians to car dealers) do to shift their direction. Perfect fodder, and readily available to almost any large-language model (LLM). Expect more efficient neuromarketing. Ugh!
 
 
The human brain does not handle memory the way computers do it.
 True, but irrelevant. These LLMs handle memory differently from traditional computers. And they can analyze external information that works differently, just as humans can study other animals that think differently, etc.
 
  
 Like ChatGPT and other very limited AI, this new tech is being vastly overhyped. If you want to know whether you can trust what an "expert" says publicly, look at what they are trying to sell now. This expert is selling a book; others are selling half-baked or raw tech toys. Meta is selling its version of immersive VR hardware and services.  
  Frankly at this point, I am totally not impressed. And totally not afraid of this new tech. This is not science so far; these are just the newest expensive toys and entertainment services. Any other claims would be fraudulent at this point.   
 
 I think you are saying that current or upcoming LLMs cannot read (or write!) human minds. I disagree; even Donald Trump (with help from Russian Political Technology and such) has done that - on a broad scale and already to great harm. It's richly documented online, so those LLMs are learning all about it and examining it from some very new standpoints. That sort of damage - and many other sorts of damage, including new ones even the sci-fi writers haven't posited - seems quite likely. It may "just happen" via an AI lab's Internet connection, but nations eagerly invest in developing more of these damaging abilities - for all the reasons that drive them to sponsor a nuclear arms race.
 
 Nobody knows which directions, or even how many of them, this new technology may take. But it's damned serious, even if it has opened as "the newest expensive toys and entertainment services". Better to worry now than after we learn why.
 
 Recommended recent reading (and they have links to more):
 
 The Unpredictable Abilities Emerging From Large AI Models (Quanta, March 16, 2023)
 Large-language AI models (LLMs) like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors.
 
 OpenAI checked to see whether GPT-4 could take over the world. (Ars Technica, March 15, 2023)
 While the concern over AI "x-risk" is hardly new, the emergence of powerful large-language models (LLMs) such as ChatGPT and Bing Chat - the latter of which appeared very misaligned but Microsoft launched it anyway - has given the AI alignment community a new sense of urgency. They want to mitigate potential AI harms, fearing that much more powerful AI, possibly with superhuman intelligence, may be just around the corner.
 With these fears present in the AI community, OpenAI granted the group Alignment Research Center (ARC) early access to multiple versions of the GPT-4 model to conduct some tests. Specifically, ARC evaluated GPT-4's ability to make high-level plans, set up copies of itself, acquire resources, hide itself on a server, and conduct phishing attacks.
 
 Neuromarketing and the Battle for Your Brain (Wired, March 14, 2023)
 You experience subtle and overt manipulation on the web every day, but that doesn't mean you can't think and act for yourself. It's critical that we understand what others can and can't do to change our minds, as neurotechnology enables newfound ways to track and hack the human brain.
 [It's as old as politics and religion, and as new as Russia's and China's manipulation of a recent US president and his manipulation of his MAGA followers.]
 
 Heather Cox Richardson: Since Reagan, the GOP has adopted Russian Political Technology - and Trump is misusing it again. (Letters from an American, March 19, 2023)
 Rumors that he is about to be indicted in New York in connection with the $130,000 hush-money payment to adult film star Stormy Daniels have prompted former president Donald Trump to pepper his alternative social media site with requests for money and to double down on the idea that any attack on him is an attack on the United States.
 The picture of America in his posts reflects the extreme version of the virtual reality the Republicans have created since the 1980s. This old Republican narrative created a false image of the nation and of its politics, an image pushed to a generation of Americans by right-wing media, a vision that MAGA Republicans have now absorbed as part of their identity.
 It reflects a manipulation of politics that Russian political theorists called "political technology." Russian "political technologists" developed a series of techniques to pervert democracy by creating a virtual political reality through modern media. They blackmailed opponents, abused state power to help favored candidates, sponsored “double” candidates with names similar to those of opponents in order to split their voters and thus open the way for their own candidates, created false parties to create opposition, and, finally, created a false narrative around an election or other event that enabled them to control public debate. Essentially, they perverted democracy, turning it from the concept of voters choosing their leaders into the concept of voters rubber-stamping the leaders they had been manipulated into backing. The GOP has been using this Russian strategy and significant Russian help to apply the same dirty tricks in our USA.
 
 Sadly,
 Dick Miller <TheMillers at millermicro.com>
     
Co-Leader, FOSS User Group in Natick (NatickFOSS.org)

 -- 
   | A. Richard & Jill A. Miller            | MILLER MICROCOMPUTER SERVICES |
 | Mailto:TheMillers at millermicro.com      | 61 Lake Shore Road            |
 | Web:  http://www.millermicro.com/       | Natick, MA 01760-2099, USA    |
 | Voice: 508/653-6136, 9AM-9PM -0400(EDT)| NMEA N 42.29993°, W 71.36558° |
 
 
   On 3/20/23 12:46, Robert Primak wrote:
  
 
 I also was able to read the article after dismissing the popup and clicking the "read the article" button.  
  A lot of the content of the article is highly speculative, given the primitive state of brain research right now. The fact is, no tech now available or on the horizon can decode human thoughts, let alone change them. Memories are not understood well enough to know whether selectively erasing one or some of them is even possible. The human brain does not handle memory the way computers do it. And storage in the human brain is not a literal recording of perceived stimuli in exact chronological order at set locations. 
  So I am not at all worried about someone forcing me to have my memory retained or erased. And it will be a long, long time if ever before any police department or court of law can interrogate anyone's thoughts or intentions directly.  
  Making laws without knowing what the tech will look like is way premature at this time. Any discussion of this topic belongs, for now, in the category of science fiction. Though a general statement of a doctrine of the inalienable human right to freedom of thought should be under consideration right now. That debate is long overdue. 
  
  Like ChatGPT and other very limited AI, this new tech is being vastly overhyped. If you want to know whether you can trust what an "expert" says publicly, look at what they are trying to sell now. This expert is selling a book; others are selling half-baked or raw tech toys. Meta is selling its version of immersive VR hardware and services.  
  Frankly at this point, I am totally not impressed. And totally not afraid of this new tech. This is not science so far; these are just the newest expensive toys and entertainment services. Any other claims would be fraudulent at this point.   
  -- Bob Primak  
  
      On Sunday, March 19, 2023 at 09:10:38 PM EDT, Drew King (dking65 at kingconsulting.us) <dking65 at kingconsulting.us> wrote:  
     Hmm,
 
 You are using a Galaxy Tab S8+
 I'm using a Galaxy Tab S7+
 
 I did get a pop-up window with an opportunity to subscribe. There was a close button in the upper left-hand corner, and then I was able to read the whole article. I think that the Boston Globe will limit your reading ability to a few articles per month.
 
 
 Drew  
 
   On March 19, 2023 8:46:08 PM EDT, David Lees <joeoptics at gmail.com> wrote: 
  Peter, you might want to summarize, because I don't think people without a paid Globe subscription can read it.
 
 David Lees 
 Tab S8+  
  On Sun, Mar 19, 2023, 8:31 PM <palbin24 at yahoo.com> wrote:
  

 https://www.bostonglobe.com/2023/03/14/opinion/if-algorithms-can-read-our-minds-can-we-preserve-freedom-thought/
 
 
 Peter
 
    
    -- 
 Sent from my Android device with K-9 Mail.
     
  ===============================================
::The Lexington Computer and Technology Group Mailing List::
Reply goes to sender only; Reply All to send to list.
Send to the list: LCTG at lists.toku.us      Message archives: http://lists.toku.us/pipermail/lctg-toku.us/
To subscribe: email lctg-subscribe at toku.us  To unsubscribe: email lctg-unsubscribe at toku.us
Future and Past meeting information: http://LCTG.toku.us
List information: http://lists.toku.us/listinfo.cgi/lctg-toku.us
This message was sent to themillers at millermicro.com.
Set your list options: http://lists.toku.us/options.cgi/lctg-toku.us/themillers@millermicro.com
 
   