[Lex Computer & Tech Group/LCTG] What if companies could read your mind? Neurotechnology is coming, and your cognitive liberty is at stake. - The Boston Globe

Dick Miller TheMillers at millermicro.com
Mon Mar 20 15:25:27 PDT 2023


Hi, Bob and All:

Bob, I agree with much of what you say. But...
> The fact is, no tech now available or on the horizon can decode human 
> thoughts, let alone change them.
Decode? I expect it /will/ get to that - if nuclear war and/or climate 
disruption don't stop it first. I expect its early damage will come from 
*analyzing and modifying bulk human thought* - /brain-washing/ - much as 
opinion polls, and the entities that use them (from politicians to 
car dealers), already shift public opinion. That is perfect fodder, 
readily available to almost any Large Language Model (LLM). Expect more 
efficient /neuromarketing/. Ugh!

> The human brain does not handle memory the way computers do it.
True, but irrelevant. These LLMs also handle memory differently than 
traditional computers do. And they /can/ analyze external systems that 
work differently, just as surely as humans can study other animals that 
think differently.

> Like ChatGPT and other very limited AI, this new tech is being vastly 
> overhyped. If you want to know whether you can trust what an "expert" 
> says publicly, look at what they are trying to sell now. This expert 
> is selling a book; others are selling half-baked or raw tech toys. 
> Meta is selling its version of immersive VR hardware and services.
>
> Frankly at this point, *I am totally not impressed. And totally not 
> afraid of this new tech.* This is not science so far; these are just 
> the newest expensive toys and entertainment services. *Any other 
> claims would be fraudulent at this point.*

I think you are saying that current or upcoming LLMs cannot read (or 
write!) human minds. I disagree; even Donald Trump (with help from 
Russian /political technology/ and the like) has done that - on a broad 
scale and already to great harm. It is richly documented online, so those 
LLMs are learning all about it and examining it from some very new 
standpoints. *The potential for that sort of damage* - and many other 
sorts, including new ones even the sci-fi writers haven't posited - 
*seems quite likely*. It may "just happen" via an AI lab's Internet 
connection, but nations are eagerly investing in developing more of 
these damaging abilities - for all the reasons that drive them to 
sponsor a nuclear arms race.

*Nobody* knows which directions, or even how many of them, this new 
technology may take. But it is damned serious, even if it /has/ opened as 
"the newest expensive toys and entertainment services". *Better to worry 
now than /after/ we learn why.*

*Recommended recent reading* (and they have links to more):

*The Unpredictable Abilities Emerging From Large AI Models* (Quanta, 
March 16, 2023)
Large-language AI models (LLMs) like /ChatGPT/ are now big enough that 
they’ve started to display startling, unpredictable behaviors.

*OpenAI checked to see whether /GPT-4/ could take over the world. 
<https://arstechnica.com/information-technology/2023/03/openai-checked-to-see-whether-gpt-4-could-take-over-the-world/>* 
(Ars Technica, March 15, 2023)
While the concern over AI "x-risk" is hardly new, the emergence of 
powerful large-language models (LLMs) such as /ChatGPT/ and /Bing Chat/ 
- the latter of which appeared very misaligned but Microsoft launched it 
anyway - *has given the AI alignment community a new sense of urgency.* 
They want to mitigate potential AI harms, fearing that much more 
powerful AI, possibly with superhuman intelligence, may be just around 
the corner.
With these fears present in the AI community, OpenAI granted the group 
Alignment Research Center (ARC) early access to multiple versions of the 
/GPT-4/ model to conduct some tests. Specifically, *ARC evaluated 
/GPT-4/'s ability to make high-level plans, set up copies of itself, 
acquire resources, hide itself on a server, and conduct phishing attacks.*

*Neuromarketing and the Battle for Your Brain 
<https://www.wired.com/story/neuromarketing-philosophy-ethics/>* (Wired, 
March 14, 2023)
You experience subtle and overt manipulation on the web every day, but 
that doesn't mean you can't think and act for yourself. It's critical 
that we understand what others can and can't do to change our minds, as 
*neurotechnology enables newfound ways to track and hack the human brain.*
[It's as old as politics and religion, and as new as Russia's and 
China's manipulation of a recent US president and his manipulation of 
his MAGA followers.]

*Heather Cox Richardson: Since Reagan, the GOP has adopted Russian 
/Political Technology/ - and Trump is misusing it again.* 
<https://heathercoxrichardson.substack.com/p/march-18-2023> (Letters 
from an American, March 19, 2023)
Rumors that he is about to be indicted in New York in connection with 
the $130,000 hush-money payment to adult film star Stormy Daniels have 
prompted former president Donald Trump to pepper his alternative social 
media site with requests for money and to double down on the idea that 
any attack on him is an attack on the United States.
The picture of America in his posts reflects the extreme version of the 
virtual reality the Republicans have created since the 1980s. This old 
Republican narrative created a false image of the nation and of its 
politics, an image pushed to a generation of Americans by right-wing 
media, a vision that MAGA Republicans have now absorbed as part of their 
identity. It reflects a manipulation of politics that Russian political 
theorists called "political technology." *Russian "political 
technologists" developed a series of techniques to pervert democracy by 
creating a virtual political reality through modern media. They 
blackmailed opponents, abused state power to help favored candidates, 
sponsored “double” candidates with names similar to those of opponents 
in order to split their voters and thus open the way for their own 
candidates, created false parties to create opposition, and, finally, 
created a false narrative around an election or other event that enabled 
them to control public debate.* Essentially, *they perverted democracy, 
turning it from the concept of voters choosing their leaders into the 
concept of voters rubber-stamping the leaders they had been manipulated 
into backing. The GOP has been using this Russian strategy and 
significant Russian help to apply the same dirty tricks in our USA.*

Sadly,
Dick Miller <TheMillers at millermicro.com>
	Co-Leader, FOSS User Group in Natick (NatickFOSS.org) 
<http://millermicro.com/FOSSUserGroupNatick.html>

-- 
| A. Richard & Jill A. Miller            | MILLER MICROCOMPUTER SERVICES |
| Mailto:TheMillers at millermicro.com      | 61 Lake Shore Road            |
| Web: http://www.millermicro.com/       | Natick, MA 01760-2099, USA    |
| Voice: 508/653-6136, 9AM-9PM -0400(EDT)| NMEA N 42.29993°, W 71.36558° |

On 3/20/23 12:46, Robert Primak wrote:
> I also was able to read the article after dismissing the popup and 
> clicking the read the article button.
>
> A lot of the content of the article is highly speculative, given the 
> primitive state of brain research right now. The fact is, no tech now 
> available or on the horizon can decode human thoughts, let alone 
> change them. Memories are not understood well enough to know whether 
> selectively erasing one or some of them is even possible. The human 
> brain does not handle memory the way computers do it. And storage in 
> the human brain is not a literal recording of perceived stimuli in 
> exact chronological order at set locations.
>
> So I am not at all worried about someone forcing me to have my memory 
> retained or erased. And it will be a long, long time if ever before 
> any police department or court of law can interrogate anyone's 
> thoughts or intentions directly.
>
> Making laws without knowing what the tech will look like is way 
> premature at this time. And any discussion of this topic belongs in 
> the category of Science Fiction at this time. Though, a general 
> statement of a doctrine of the inalienable human right to freedom of 
> thought should be under consideration right now. That debate is long 
> overdue.
>
> Like ChatGPT and other very limited AI, this new tech is being vastly 
> overhyped. If you want to know whether you can trust what an "expert" 
> says publicly, look at what they are trying to sell now. This expert 
> is selling a book; others are selling half-baked or raw tech toys. 
> Meta is selling its version of immersive VR hardware and services.
>
> Frankly at this point, I am totally not impressed. And totally not 
> afraid of this new tech. This is not science so far; these are just 
> the newest expensive toys and entertainment services. Any other claims 
> would be fraudulent at this point.
>
> -- Bob Primak
>
>
> On Sunday, March 19, 2023 at 09:10:38 PM EDT, Drew King 
> (dking65 at kingconsulting.us) <dking65 at kingconsulting.us> wrote:
>
> Hmm,
>
> You are using a galaxy tab s8+
> I'm using a Galaxy tab s7+
>
> I did get a pop-up window with an opportunity to subscribe. Only in 
> the upper left-hand corner was a close button, and then I was able to 
> read the whole article. I think that the Boston Globe will limit your 
> reading to a few articles per month.
>
>
> Drew
>
>
> On March 19, 2023 8:46:08 PM EDT, David Lees <joeoptics at gmail.com> wrote:
>
>     Peter,
>     You might want to summarize, because I don't think people without
>     a paid Globe subscription can read it.
>
>     David Lees
>     Tab S8+
>
>     On Sun, Mar 19, 2023, 8:31 PM <palbin24 at yahoo.com> wrote:
>
>
>         https://www.bostonglobe.com/2023/03/14/opinion/if-algorithms-can-read-our-minds-can-we-preserve-freedom-thought/
>
>
>         Peter
>
> -- 
> Sent from my Android device with K-9 Mail.
>

