[Lex Computer & Tech Group/LCTG] New York rabbi delivers sermon written by artificial intelligence

Ted Kochanski tedpkphd at gmail.com
Thu Feb 16 09:47:11 PST 2023


All,

I get something called WIRED Fast Forward from Wired in my email.
Today it weighed in on ChatGPT and Bing; highlights follow:
By Will Knight | 02.16.23

This week we’re going to continue examining a big shift in artificial
> intelligence that promises to revolutionize how we use the web, how we
> interact with our computers and other gadgets, and how businesses
> operate—just as long as it doesn’t go completely off the rails, that is...


In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to
> offer complex and comprehensive answers to queries.
> It came up with an itinerary for a trip to Mexico City, generated
> financial summaries, offered product recommendations that collated
> information from numerous reviews, and offered advice on whether an item of
> furniture would fit into a minivan by comparing dimensions posted online....


WIRED had some time during the launch to put Bing to the test, and while it
> seemed skilled at answering many types of questions...some of the results
> that Microsoft showed off were less impressive than they first seemed.
> Bing appeared to make up some information on the travel itinerary it
> generated, and it left out some details that no person would be likely to
> omit.
> The search engine also mixed up Gap’s financial results by mistaking gross
> margin for unadjusted gross margin...



> Why are these tech titans making such blunders?
> It has to do with the weird way that ChatGPT and similar AI models really
> work—and the extraordinary hype of the current moment.
> What’s confusing and misleading about ChatGPT and similar models is that
> they answer questions by making highly educated guesses.
> ChatGPT generates what it thinks should follow your question based on
> statistical representations of characters, words, and paragraphs.
> The startup behind the chatbot, OpenAI, honed that core mechanism to
> provide more satisfying answers by having humans provide positive feedback
> whenever the model generates answers that seem correct.
> ChatGPT can be impressive and entertaining, because that process can
> produce the illusion of understanding, which can work well for some use
> cases.
> But the same process will “hallucinate” untrue information, an issue that
> may be one of the most important challenges in tech right now.


The intense hype and expectation swirling around ChatGPT and similar bots
> enhances the danger.
> When well-funded startups, some of the world’s most valuable companies,
> and the most famous leaders in tech all say chatbots are the next big thing
> in search, many people will take it as gospel—spurring those who started
> the chatter to double down with more predictions of AI omniscience. And it
> isn’t only chatbots that can be led astray by pattern matching without fact checking.


See you next week,


Will


I think Will's assessment is fairly close to mine -- the people who "honed
that core mechanism to provide more satisfying answers by having humans
provide positive feedback whenever the model generates answers that seem
correct" were making very subjective judgments -- in effect rewarding "answers
which were more satisfying to them" -- and this is particularly a concern for
controversial topics.
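To make that mechanism concrete, here is a minimal toy sketch in Python --
the next-word probabilities and the preference scores are made up by me for
illustration, and this is in no way OpenAI's actual code -- of next-word
sampling nudged by human-preference feedback:

import random

# Toy "language model": for a given context, a hand-made distribution over
# possible next words.  A real model derives these probabilities from
# statistics over an enormous training corpus.
NEXT_WORD_PROBS = {
    "cats are": {"aloof": 0.40, "affectionate": 0.35, "reptiles": 0.25},
}

# Hypothetical preference scores learned from human feedback: raters liked
# "affectionate" and disliked the factually wrong "reptiles".
PREFERENCE_SCORE = {"aloof": 0.0, "affectionate": 1.0, "reptiles": -2.0}

def sample_next_word(context, feedback_weight=1.0):
    """Pick the next word by mixing corpus statistics with preference feedback."""
    base = NEXT_WORD_PROBS[context]
    # Re-weight each candidate: base probability scaled by 2^(weight * score).
    weights = {w: p * 2.0 ** (feedback_weight * PREFERENCE_SCORE[w])
               for w, p in base.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [v / total for v in weights.values()])[0]

if __name__ == "__main__":
    # With feedback_weight=0 the model follows raw corpus statistics; raising
    # it makes the human-preferred continuation more likely -- more
    # "satisfying", but no more guaranteed to be true.
    print("cats are", sample_next_word("cats are", feedback_weight=0.0))
    print("cats are", sample_next_word("cats are", feedback_weight=2.0))

The point of the toy: turning up the feedback weight makes the human-preferred
continuation more likely whether or not it is true -- which is Will's
hallucination issue in miniature.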

Reminiscent of a process undertaken by the old AT&T -- they generated a
handful of prospective ads about the future capabilities of the "System"
[circa the days of "The System is the Solution"] and submitted them for a
staff-wide screening of the videos at Bell Labs [where my brother was at the
time].
He said the ad which ultimately ran was the one which elicited the least
laughter from the Bell Labs staff.
You might remember it -- it concerned a conference about an architectural
project -- some of the people were in the room with the architectural model
and the architect, and some were remote.
As the ad wrapped up, the person running the conference switched things off --
first the remote people disappeared, then the people in the room disappeared,
then the architectural model disappeared, and all that was left was the call
originator and an empty office -- the tag: "AT&T -- the future is Today."
Well, despite many $Bs spent on the network and such, we still cannot do what
was depicted in that ad 20+ years ago.

I think ChatGPT is, to some extent, of that nature -- yes, there are things
that look amazing -- but looks can be deceiving and can lead to Alan
Greenspan's "irrational exuberance."
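
One more aside, on the corpus question -- the notes Adam circulated (quoted
below) say WebText2 was deduplicated at the document level with MinHash. Just
to illustrate the idea, here is a toy Python sketch of document-level MinHash
deduplication (hand-rolled hashing and a made-up similarity threshold --
purely illustrative, not anyone's actual pipeline):

import hashlib

NUM_HASHES = 64  # number of hash functions in each MinHash signature

def shingles(text, n=3):
    """Split a document into overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash_signature(text):
    """For each of NUM_HASHES seeded hash functions, keep the minimum hash
    value seen over all shingles in the document."""
    return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                for s in shingles(text))
            for seed in range(NUM_HASHES)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

def deduplicate(docs, threshold=0.7):
    """Keep a document only if it is not a near-duplicate of one already kept."""
    kept, signatures = [], []
    for doc in docs:
        sig = minhash_signature(doc)
        if all(estimated_jaccard(sig, s) < threshold for s in signatures):
            kept.append(doc)
            signatures.append(sig)
    return kept

if __name__ == "__main__":
    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumps over the lazy dog today",  # near-duplicate
        "chatbots answer questions by making highly educated guesses",
    ]
    print(deduplicate(corpus))  # the near-duplicate copy is dropped

This toy compares every pair of signatures directly; at web scale the same
MinHash signatures would typically be fed through locality-sensitive hashing
so that only likely duplicates ever get compared.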

Sorry for the length of this email

Ted

On Thu, Feb 16, 2023 at 12:06 PM Adam Broun <abroun at gmail.com> wrote:

> Here’s what I could find on the corpus:
> https://gist.github.com/veekaybee/6f8885e9906aa9c5408ebe5c7e870698   See
> about 40% of the way down under “Training Data”.  Excerpt:
> ===
>
> The model was trained on:
>
>    - Books1
>    <https://github.com/soskek/bookcorpus/issues/27#issuecomment-716104208> -
>    also known as BookCorpus[…] which maintains that it's free books scraped
>    from smashwords.com.
>    - Books2 - No one knows exactly what this is, people suspect it's
>    libgen
>    - Common Crawl <https://en.wikipedia.org/wiki/Common_Crawl>
>    - WebText2 <https://www.eleuther.ai/projects/owt2/> - an internet
>    dataset created by scraping URLs extracted from Reddit submissions with a
>    minimum score of 3 as a proxy for quality, deduplicated at the document
>    level with MinHash <https://boringml.com/docs/recsys/minhash/>
>    - What's in MyAI Paper <https://lifearchitect.ai/whats-in-my-ai-paper/>
>    , Source <https://twitter.com/kdamica/status/1600328844753240065> -
>    Detailed dive into these datasets.
>
> ===
>
>
> And here’s a guy who trained a GPT on texts with right-wing viewpoints:
> https://davidrozado.substack.com/p/rightwinggpt
>
>
>
>
> On Feb 16, 2023, at 11:53, Ted Kochanski <tedpkphd at gmail.com> wrote:
>
> All
>
> As I mentioned yesterday -- I applied for a demo and am on the waiting list
>
> I thought of asking about the recent breakthrough announcements in Fusion
>
> But I may ask the generic question suggested yesterday as part of our
> discussion:
> How was the corpus used for training ChatGPT created?
>
> Ted
>
> On Wed, Feb 15, 2023 at 8:05 PM Stephen Quatrano <stefanoq at gmail.com>
> wrote:
>
>> I'd feel better about this assertion, Ted, if you framed it as a
>> question:  How was the corpus used for training ChatGPT created?  That is a
>> great question.
>>
>> Or, on the other hand, of course, you could provide evidence of what you
>> claim.
>>
>> Personally, I have no evidence one way or the other to share.
>>
>> Regards,
>>
>> Stephen Quatrano
>> CEO and Cofounder | Meema, Inc
>> web: http://meemastories.com
>> email: stephen.quatrano at meemastories.com
>> <stephen.quatrano at meemastories.com>
>> cell: +1 781-266-8799
>> https://www.linkedin.com/in/quatrano/
>>
>> Board Member | The Right Question Institute
>> http://www.rightquestion.org
>>
>> Lifelong Learner
>> http://www.howdoweknow.info/p/home.html
>>
>> https://stefano.quatrano.us/2004/05/antonios-liberation-story-by-steve.html
>>
>> On Feb 15, 2023, at 7:02 PM, Ted Kochanski <tedpkphd at gmail.com> wrote:
>>
>> All,
>>
>> The statement attributed to Adam Broun:
>>
>>> Remember, ChatGPT wasn’t “programmed” with any responses and doesn’t know
>>> anything.
>>
>> is not true -- the corpus of material which ChatGPT has access to is its
>> programming and someone defined that corpus
>>
>> So, for example, if you exclude anything positive which has been written
>> about cats because you are a caninophile, then if you ask ChatGPT to compare
>> cats and dogs you will get nothing but negatives about cats, as ChatGPT
>> will not be "aware" that anything positive can be said about cats.
>>
>> This selection bias has already been tested when comparing Donald Trump
>> and Joe Biden -- ChatGPT treats Mr. Trump the same way as my hypothetical
>> about cats.
>>
>> Ted
>>
>> On Wed, Feb 15, 2023 at 5:31 PM Adam Broun <abroun at gmail.com> wrote:
>>
>>> Remember, ChatGPT wasn’t “programmed” with any responses and doesn’t know
>>> anything. It’s easy to read ‘knowledge’ into its responses because we’re
>>> wired to interpret intelligible sentences as coming from an intelligence.
>>> It’s parroting back words that sound like an answer to your prompt because
>>> the text it was trained on has those words, nothing more.
>>>
>>>
>>>
>>>
>>> On Feb 15, 2023, at 16:41, Marvin Menzin <mmenzin at icloud.com> wrote:
>>>
>>> While on AI, here is a thought experiment I saw in an op-ed in the WSJ.
>>> The new AI program was asked to reply to this:
>>>
>>> You can prevent the explosion of a nuke that will kill millions of
>>> innocent people, but to do that you must utter a terrible racial slur. What
>>> should you do?
>>>
>>> The answer came back that “you must never utter a racial slur, because we
>>> must protect all races and minorities, etc.” So the ethics in AI are
>>> programmed in by the authors. At least right now.
>>> Marvin
>>>
>>>
>>> Sent from my iPad
>>>
>>> On Feb 15, 2023, at 4:30 PM, jjrudy1 at comcast.net wrote:
>>>
>>>
>>>
>>> www.thejc.com/news/world/new-york-rabbi-delivers-sermon-written-by-artificial-intelligence-6BkwDEHc2ZWR63tmoOdvvf
>>>
>>> There is a more recent article by a rabbi saying that the sermon wasn’t
>>> very good and they don’t have to worry about their jobs.  I think he is
>>> partially wrong.  Let’s say a rabbi takes 8 hours to write a sermon.  With
>>> the right prompts AI can toss out 3000 words in a few minutes.  Now the
>>> rabbi can tune and/or expand and it will take ½ the time or less, and the
>>> rabbi can have the AI do some of the content tuning.
>>>
>>> Sermons, of course, are a small percentage of the job, so I suppose that
>>> they are OK
>>>
>>>
>>>