[Lex Computer & Tech Group/LCTG] Prompt Engineering
John P. Rudy
jjrudy1 at comcast.net
Sat Aug 31 18:54:29 PDT 2024
About 20 years ago my son started a 100-inning baseball game with all proceeds going to ALS research. They have raised nearly $1M.
The game starts Saturday at 8 a.m. and runs continuously until 100 innings are complete, at approximately 5 p.m. on Sunday. This year's game starts 10/12. Players sign up for time blocks during which they are on the batting schedule, and nine of the available players take the field on defense. My son Brett has played all 100 innings in every year of the event. See https://theangelfund.org/100-innings/
I will be one of the sponsors; it is a worthy charity.
John
From: LCTG <lctg-bounces+jjrudy1=comcast.net at lists.toku.us> On Behalf Of Ted Kochanski via LCTG
Sent: Saturday, August 31, 2024 6:11 PM
To: Robert Primak <bobprimak at yahoo.com>
Cc: lctg at lists.toku.us
Subject: Re: [Lex Computer & Tech Group/LCTG] Prompt Engineering
All,
This one is of personal interest
My father-in-law suffered from ALS for the last couple of years of his life [back about 40 years ago]
It was particularly hard on him since he was a Generalist Physician who had seen some ALS patients in his career
The positive feature of his illness was that his wife [my mother-in-law] was a nurse and could manage his care at home with just some assistance
As his illness progressed from initial problems walking to problems talking -- the two of them developed a private code involving him blinking his eyes to communicate -- based on binary logic chains for common issues
While he eventually lost the ability to even mouth words -- He never lost his mental capacities nor the ability to blink
They agreed that when he stopped breathing that there would be no resuscitation -- that's how it finally ended
I'm always hopeful that there will be some useful means of communicating using tech long before there is a "cure"
Ted
On Fri, Aug 30, 2024 at 1:56 PM Robert Primak via LCTG <lctg at lists.toku.us> wrote:
These results are well within the realm of what I alluded to in previous messages. Notes:
(1) This LLM was trained on just one patient, which increases the likelihood of accuracy. It does not provide general applicability to different individuals. (At least not yet.)
(2) The signals used came from very specific locations within the subject's brain, and these locations were known to (and adjusted by) the researchers. For this type of application, such methods are perfectly acceptable. Notice that implanted electrodes are a much finer instrument than scanned EEG or f-MRI signals. Also, this research was not attempting to read "thoughts", but rather to interpret signals which would normally trigger muscle movements to produce the elements of human speech. Very different concepts!
(3) I am impressed with how rigorously the researchers have reported what their LLM was doing with the data it was processing. This makes these results entirely plausible.
(4) I do hope this research leads to more widely applicable LLM and AI models for use on more patients with limited or no speech capabilities, including communication with people born without these faculties. This type of research will provide real benefits to individuals, caregivers, health care teams, and families, as well as to our society in general.
I hope this research and projects like it will receive the funding they need to move forward. This is a very promising area of research and medical device development. I do not see this particular kind of research as threatening to "read people's minds" or interfere with privacy or freedoms. That's not what this research is intended to do, and I don't see how it could lead to such outcomes.
-- Bob Primak
On Friday, August 30, 2024 at 01:22:35 PM EDT, <comahony at yahoo.com> wrote:
Related to this conversation, earlier today I read the following account of advancing the state of the art in another email…
A Lost Voice Regained
A man who lost the ability to speak four years ago is sounding like his earlier self, thanks to a collection of brain implants and machine learning models.
What’s new: Researchers built a system that decodes speech signals from the brain of a man who lost the ability to speak clearly due to amyotrophic lateral sclerosis, also known as ALS, and enables him to speak through a synthetic version of his former voice. At the start of the study, his efforts to speak were intelligible only to his personal caregiver. Now he converses regularly with family and friends, The New York Times reported. Nicholas Card built the system with colleagues at the University of California, Davis; Stanford University; Washington University; Brown University; VA Providence Healthcare; and Harvard Medical School.
How it works: The authors surgically implanted four electrode arrays into areas of the brain that are responsible for speech. The system learned to decode the patient’s brain signals, decide the most likely phonemes he intended to speak, determine the words those phonemes express, and display and speak the words aloud using a personalized speech synthesizer.
* After the patient recovered from the implantation surgery, the authors collected data for training and evaluating the system. They recorded his brain signals while he tried to speak during 84 sessions, each between 5 and 30 minutes, over 32 weeks. The sessions were split into two tasks: copying, in which the patient spoke sentences shown on a screen, and conversation, in which he spoke about whatever he wanted. Initial sessions focused on copying. Later, when the authors had accrued paired brain signals and known sentences, they focused on conversation.
* A gated recurrent unit (GRU) learned to translate brain signals into a sequence of phonemes. The authors trained the model after each session on all recordings made during that session. To adapt it to day-to-day changes in brain activity, they also fine-tuned it during later sessions: After they recorded a new sentence, they fine-tuned the GRU on a 60/40 mix of sentences from the current session and previous sessions.
* A weighted finite-state transducer (WFST), based on a pretrained 5-gram language model and described in the supplementary information, translated sequences of phonemes into sentences. Given a sequence, it generated the 100 most likely sentences.
* Given the likely sentences, the authors ranked them according to the probability that the GRU, WFST, and OPT, a pretrained large language model, would generate them.
* A pretrained StyleTTS 2 text-to-speech model turned the highest-ranking sentence into speech. The authors fine-tuned the model on recordings of the patient’s voice from before the onset of his illness, such as podcasts.
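For readers who want a feel for how such a pipeline fits together, here is a minimal Python/PyTorch sketch of the same general shape: a GRU that maps windows of neural features to phoneme probabilities, a stand-in for the WFST/5-gram stage that proposes candidate sentences, and a rescoring step that picks the sentence to hand to the speech synthesizer. This is not the authors' code; the layer sizes, function names, and scores are made up purely for illustration.

# Hypothetical sketch of the decoding pipeline described above (not the authors' code).
# Assumes PyTorch; PhonemeGRU, propose_sentences, and rescore are invented names.
import torch
import torch.nn as nn

N_FEATURES = 256   # neural features per time bin (assumed)
N_PHONEMES = 41    # roughly 39 English phonemes plus silence and blank (assumed)

class PhonemeGRU(nn.Module):
    """Maps a window of neural features to per-timestep phoneme log-probabilities."""
    def __init__(self, hidden=512, layers=5):
        super().__init__()
        self.gru = nn.GRU(N_FEATURES, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, N_PHONEMES)

    def forward(self, x):                  # x: (batch, time, N_FEATURES)
        h, _ = self.gru(x)
        return torch.log_softmax(self.head(h), dim=-1)

def propose_sentences(phoneme_logprobs, n_best=100):
    """Stand-in for the WFST + 5-gram language-model stage: turn a phoneme
    lattice into an n-best list of (sentence, score) candidates."""
    # A real system would run a beam search over a pronunciation/word graph here.
    return [("hello how are you today", -12.3)] * n_best

def rescore(candidates, llm_score_fn):
    """Combine the decoder/LM score with a large-language-model score and pick the best."""
    best, best_score = None, float("-inf")
    for sentence, wfst_score in candidates:
        score = wfst_score + llm_score_fn(sentence)
        if score > best_score:
            best, best_score = sentence, score
    return best

if __name__ == "__main__":
    decoder = PhonemeGRU()
    fake_window = torch.randn(1, 200, N_FEATURES)     # 200 time bins of simulated neural data
    logprobs = decoder(fake_window)
    candidates = propose_sentences(logprobs)
    sentence = rescore(candidates, llm_score_fn=lambda s: -0.1 * len(s.split()))
    print(sentence)   # this string would then go to the personalized text-to-speech model

In the study itself, the GRU was the part retrained after each session (on the 60/40 mix of new and old sentences); the language models downstream were pretrained and left alone.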
Results: After two hours of recording the patient’s brain signals and training on that data, the system achieved 90.2 percent accuracy in the copying task. By the final session, the system achieved 97.5 percent accuracy and enabled the patient to speak on average 31.6 words per minute using a vocabulary of 125,000 words.
Behind the news: Previous work either had much lower accuracy or generated a limited vocabulary. The new work improved upon a 2023 study that enabled ALS patients to speak with 76.2 percent accuracy using a vocabulary of equal size.
Why it matters: Relative to the 2023 study on which this one was based, the authors changed the positions of the electrodes in the brain and continued to update the GRU throughout the recording/training sessions. It’s unclear which changes contributed most to the improved outcome. As language models improve, new models potentially could act as drop-in replacements for the models in the authors’ system, further improving accuracy. Likewise, improvements in text-to-speech systems could increase the similarity between the synthetic voice and the patient’s former voice.
-Conor
On Aug 28, 2024, at 1:03 AM, Robert Primak via LCTG <lctg at lists.toku.us> wrote:
So the question in my mind is whether this research, to the extent discussed, is valid
Yes, that is the question. And this question awaits someone else trying to confirm the results. But with both the programming and the training methods cloaked in "black box" secrecy, there are important considerations which no one will ever be allowed to investigate. Anyway, that's the way this sort of research is usually handled.
-- Bob Primak
On Tuesday, August 27, 2024 at 06:42:40 PM EDT, John P. Rudy <jjrudy1 at comcast.net> wrote:
But if there is a specific disabled person and something directly related to that person works to 50% that would be fantastic. So the question in my mind is whether this research, to the extent discussed, is valid
From: Robert Primak <bobprimak at yahoo.com>
Sent: Tuesday, August 27, 2024 6:29 PM
To: 'Harry Forsdick' <harry at forsdick.com>; John P. Rudy <jjrudy1 at comcast.net>
Cc: lctg at lists.toku.us
Subject: Re: [Lex Computer & Tech Group/LCTG] Prompt Engineering
to the extent that the activity of the brain can be “read”
Therein lies the problem with the scans which the research in the video used. These scans are not detailed enough to reveal specific thoughts. Probes are being developed which have much finer resolutions, so maybe there are signals specific enough to get realistic results. But the research being reported in the video makes claims beyond the scope of what could possibly have been measured. This leaves me very skeptical.
Let’s see what comes out of research labs that is reproducible
Yes, let's do that. The research reported in the video is said by the researchers to require extensive training on specific individuals, and is only reproducible for those individuals, with a success rate of not better than 65% (per Blumenstein-- Tang hedges on his success rates). That to me is not very impressive for general use applications.
-- Bob Primak
On Tuesday, August 27, 2024 at 11:44:25 AM EDT, John P. Rudy <jjrudy1 at comcast.net> wrote:
Regardless of where the field is today, it seems logical that to the extent that the activity of the brain can be “read” that eventually it will be understood. Presumably this will occur slowly but eventually happen. Who would have believed that “thoughts” can cause an artificial limb to move?
Whether AI can somehow aid in the understanding of a large amount of brain data is beyond my understanding, but I’d opt for information from trusted sources, not, say, Facebook. Let’s see what comes out of research labs that is reproducible.
John
From: LCTG <lctg-bounces+jjrudy1=comcast.net at lists.toku.us> On Behalf Of Harry Forsdick via LCTG
Sent: Tuesday, August 27, 2024 9:40 AM
To: Robert Primak <bobprimak at yahoo.com>
Cc: lctg at lists.toku.us
Subject: Re: [Lex Computer & Tech Group/LCTG] Prompt Engineering
Bob,
You are right: I agree with your concern with equating Large Language Models (LLMs) with Artificial Intelligence (AI). Doing so succumbs to the hype of Wall Street.
Unfortunately, to the uninformed, introducing this TLA into the conversation makes the impressive results of LLMs seem even more mysterious and just for techies.
-- Harry
On Mon, Aug 26, 2024 at 1:58 PM Robert Primak <bobprimak at yahoo.com> wrote:
Very good finds, Harry.
Within the context of reading people's thoughts, a lot of what prompt engineering can and cannot do depends on the size of the data set it is trained on, and the variety of test subjects from whom the training data is collected. Too small a sample size in either regard, and there is a real chance of researcher bias (conscious or unconscious) entering into the picture and affecting the output from the LLM.
I do not equate LLMs with AI, so I restrict my terms to what we are talking about -- LLMs only, not more general AI.
-- Bob Primak
On Monday, August 26, 2024 at 11:35:42 AM EDT, Harry Forsdick via LCTG <lctg at lists.toku.us> wrote:
Folks,
Remember when Conor O'Mahony gave his three excellent talks about AI / ML systems? [see https://docs.google.com/document/d/1fj8FcfD_e-NEi20O3AL_8fg45DdB136fvyCNl7V2gaM/edit?usp=sharing]
One of the topics he talked about was "prompt engineering".
I interpreted what was said about prompt engineering to refer to the kinds of things we all did with plain old Google to get it to return the kind of results we were after.
I recently asked https://perplexity.ai the question "What does 'prompt engineering' mean in the context of AI search systems?" I learned that my interpretation was only partially correct. Rather than being just something an end user does, prompt engineering is also something the developer of an AI question-answering system must do: take the query submitted by end users and add additional controls so that the AI answer engine returns a reasonable, readable answer.
One of the references in this response, https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/, discusses what an end user should learn to do. It basically says that this is like learning how to ask questions of an expert who knows a lot about the subject of your query: you can steer the answer to the one you are after if you just ask the question correctly. To quote from the above reference:
Prompts are your input into the AI system to obtain specific results. In other words, prompts are conversation starters: what and how you tell something to the AI for it to respond in a way that generates useful responses for you. After that, you can build a continuing prompt, and the AI will produce another response accordingly. It’s like having a conversation with another person, only in this case the conversation is text-based, and your interlocutor is AI.
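As a concrete, purely illustrative example of both sides of this, here is a short Python sketch using the OpenAI chat client; any chat-style LLM API looks much the same. The model name, system instruction, and questions are placeholders of my own, not anything taken from the MIT Sloan page or from the Perplexity answer below.

# Minimal sketch of developer-side and end-user prompt engineering,
# assuming the OpenAI Python client; the model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Developer-side "controls": a system prompt that shapes every answer.
messages = [
    {"role": "system",
     "content": "You are a concise technical explainer. Answer in plain English, "
                "cite sources when you can, and say so when you are unsure."},
    # End-user prompt: instruction, context, and desired output format in one message.
    {"role": "user",
     "content": "What does 'prompt engineering' mean for AI search systems? "
                "Answer in three short bullet points for a general audience."},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)

# A continuing prompt: append the assistant's reply, then ask a follow-up --
# the text-based "conversation" the MIT Sloan page describes.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Now give one concrete example of a bad prompt and how to fix it."})
follow_up = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(follow_up.choices[0].message.content)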
Below is the answer I got back from Perplexity. If you want to dig deeper than the Perplexity answer, there are references to source material used in the response.
Regards,
-- Harry
What does "prompt engineering" mean in the context of AI search systems?
Prompt engineering is a crucial process in the context of AI search systems, particularly for large language models (LLMs) and generative AI tools. It involves crafting well-structured and effective input queries or instructions to guide AI models in producing desired outputs or responses[1].
Definition and Purpose
Prompt engineering refers to the art and science of designing and optimizing prompts to elicit specific behaviors from AI models[4]. Its primary goal is to bridge the gap between human intention and machine understanding, enabling AI systems to generate accurate, relevant, and contextually appropriate responses[2].
Components of Effective Prompts
1. Instruction: Prompts often require specific and clear requests to receive the desired output[3].
2. Context: Providing relevant background information helps the AI model understand the task better[3].
3. Input Data: This includes any necessary information for the AI to process and generate a response[3].
4. Output Format: Specifying the desired format of the response can improve the accuracy and relevance of the AI's output[3].
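As a hypothetical illustration of my own (not taken from the cited sources), those four components might be assembled into a single prompt template like this:

# Hypothetical prompt template combining the four components listed above:
# instruction, context, input data, and output format.
prompt_template = """\
Instruction: Summarize the meeting notes below for a non-technical audience.

Context: The notes come from a weekly computer-club discussion of AI tools;
readers may not know jargon such as "LLM" or "fine-tuning".

Input data:
{meeting_notes}

Output format: 3-5 bullet points, one sentence each, followed by a single
line listing any action items.
"""

print(prompt_template.format(meeting_notes="(paste the raw notes here)"))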
Role of Prompt Engineers
AI prompt engineers serve as intermediaries between machine learning models and human users[3]. Their responsibilities include:
1. Developing sets of inputs to train models for optimal outputs
2. Writing text-based prompts for various tasks (e.g., essay writing, blog post generation)
3. Evaluating AI systems for idiosyncrasies
4. Training and fine-tuning emerging AI tools
What does "prompt engineering" mean in the context of AI search systems?
Prompt engineering plays a vital role in AI search systems by:
1. Improving Accuracy: Well-crafted prompts lead to more precise and relevant responses from AI models[4].
2. Enhancing User Experience: Clear and concise prompts make it easier for users to interact effectively with AI models[4].
3. Mitigating Biases: Careful prompt design helps minimize the risk of generating inappropriate or biased content[4].
4. Ensuring Consistency: Proper prompting techniques enable more predictable and reliable AI responses[4].
Skills Required for Prompt Engineering
To excel in prompt engineering, professionals should possess:
1. Understanding of AI, ML, and Natural Language Processing (NLP)
2. Programming skills, particularly in Python
3. Strong communication skills
4. Knowledge of cognitive psychology and linguistics
5. Experience with pre-trained AI models like GPT-3 or GPT-4[3]
In conclusion, prompt engineering is a critical aspect of AI search systems, focusing on optimizing the interaction between humans and AI models to produce high-quality, relevant, and accurate responses.
Citations:
[1] https://en.wikipedia.org/wiki/Prompt_engineering
[2] https://www.datastax.com/guides/what-is-prompt-engineering
[3] https://www.techtarget.com/searchenterpriseai/definition/AI-prompt-engineer
[4] https://cloud.google.com/discover/what-is-prompt-engineering
[5] https://www.ibm.com/topics/prompt-engineering
[6] https://www.akooda.co/blog/what-is-prompt-engineering
[7] https://researchguides.mvc.edu/ai/prompts
[8] https://mitsloanedtech.mit.edu/ai/basics/effective-prompts/
Harry Forsdick (http://www.forsdick.com/resume/)
Lexington Photo Scanning (http://lexingtonphotoscan.com/)
Town Meeting Member Precinct 7 (http://lexingtontmma.org/)
harry at forsdick.com
www.forsdick.com
46 Burlington St.
Lexington, MA 02420
(781) 799-6002 (mobile)
My websites: http://forsdick.weebly.com/home/my-websites
Lafayette Returns
200th Anniversary Reenactment
----> September 2, 2024 (Labor Day) 1pm-2pm
Lexington Battle Green
<https://forsdick.com/0MyImages/LafayetteReturns.png>
Procession, Lafayette Reenactor, Minute Men, School Children, Bell Ringing,
Period Costumes, 🔥 Cannon Fire 🔥, and ... cookies shaped like Lafayette
===============================================
::The Lexington Computer and Technology Group Mailing List::
Reply goes to sender only; Reply All to send to list.
Send to the list: LCTG at lists.toku.us    Message archives: http://lists.toku.us/pipermail/lctg-toku.us/
To subscribe: email lctg-subscribe at toku.us    To unsubscribe: email lctg-unsubscribe at toku.us
Future and Past meeting information: http://LCTG.toku.us
List information: http://lists.toku.us/listinfo.cgi/lctg-toku.us
More information about the LCTG mailing list