[Lex Computer & Tech Group/LCTG] Your talk today

Harry Forsdick forsdick at gmail.com
Wed Dec 14 11:01:45 PST 2022


Barry,

First, I think your talk today resulted in one of the better engagements
and discussions we have experienced at LCTG meetings.  My kudos to you for
showing that a very complex topic, explained in "ordinary" language, can be
easier to understand than the language used to describe many of our topics.
Bravo!


The period when I was a grad student at MIT (1971-74) came just after Terry
Winograd received his PhD from MIT for his thesis "Procedures as a
Representation for Data in a Computer Program for Understanding Natural
Language" (1971).  His system was probably the first (or one of the first)
natural language understanding systems.  To quote Wikipedia,

[Winograd] wrote SHRDLU as a PhD thesis at MIT in the years from 1968–70.
In making the program Winograd was concerned with the problem of providing
a computer with sufficient "understanding" to be able to use natural
language. Winograd built a blocks world, restricting the program's
intellectual world to a simulated "world of toy blocks". The program could
accept commands such as, "Find a block which is taller than the one you are
holding and put it into the box" and carry out the requested action using a
simulated block-moving arm. The program could also respond verbally, for
example, "I do not know which block you mean." The SHRDLU program can be
viewed historically as one of the classic examples of how difficult it is
for a programmer to build up a computer's semantic memory by hand and how
limited or "brittle" such programs are. [more information about SHRDLU
<http://hci.stanford.edu/~winograd/shrdlu/index.html>]
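
To make concrete what "building up a computer's semantic memory by hand"
looks like, here is a toy sketch in Python -- emphatically not Winograd's
SHRDLU, which was written in Lisp and Micro-Planner, just an illustration of
hand-written rules over a tiny simulated blocks world (the block names and
heights are made up):

    from dataclasses import dataclass

    @dataclass
    class Block:
        name: str
        height: int

    # A hand-built "world model": three blocks and the one the arm holds.
    world = [Block("red", 3), Block("green", 1), Block("blue", 2)]
    holding = world[1]   # pretend the arm is holding the green block

    def find_taller_than(held):
        # Rule for "find a block which is taller than the one you are holding".
        taller = [b for b in world if b is not held and b.height > held.height]
        return taller[0] if taller else None

    def respond(command):
        # Every understood command is another hand-written pattern; anything
        # outside those patterns falls through to "I do not know...".
        if "taller than the one you are holding" in command.lower():
            block = find_taller_than(holding)
            return f"OK: the {block.name} block." if block else "There is none."
        return "I do not know which block you mean."

    print(respond("Find a block which is taller than the one you are holding"))
    print(respond("Stack the pyramid on the red cube"))

Every command the program "understands" requires another hand-written rule;
anything outside them falls through to "I do not know which block you mean,"
which is exactly the brittleness the Wikipedia passage describes.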


In some respects Winograd's thesis approached natural language understanding
in the same style you used -- with the exception that the world Winograd
tried to understand was *very small* compared with the world addressed by
the material you presented today.

A lot of people at MIT and other places (such as BBN, where I worked) built
on Winograd's work over the next several decades with some limited success.
I think part of the problem in making more progress at BBN was the
additional requirement of understanding speech as the front end, in place of
the written language used in Winograd's world.  This, of course, was
influenced by the sponsors of the work, the US military, who wanted work on
Natural Language Understanding to apply more directly to their problems:

   1. Understanding speech intercepts captured by the signals intelligence
   community.
   2. Understanding spoken commands from humans operating in complex
   environments (such as the pilot of a fighter jet).

This led to a focus on building deployable systems that could make sense of
both speech itself and the natural language conveyed by that speech.

There are two possible goals for the activities going on in the Natural
Language Understanding world:

   1. Explaining how we (humans) understand language
   2. Building a device (on computers) to understand language

It seems to me that the set of ideas you spoke about (semantic modeling)
addresses the first goal, while the techniques occupying the news these days
(machine learning) have been more successful at the second.

I think a significant contribution would be to explain how machine learning
devices work in terms of semantic modeling -- or to analyze why machine
learning devices will face inherent roadblocks to overcoming their current
limitations.  On the xBBN mailing list I read the distaste people from the
semantic modeling community have for the glitzy capabilities of machine
learning devices and systems.  The impression I get is that semantic
modeling people feel machine learning devices are cherry-picking the easy
cases of Natural Language Understanding.  I don't know what the machine
learning crowd thinks of their semantic modeling predecessors, although I
do remember the distaste people like Marvin Minsky had for struggling
machine learning researchers.  The guys who won the Turing Award for their
work in resurrecting neural networks deserve credit for fighting through the
resistance of the AI heavyweights who ruled at the end of the millennium.

Curiously enough, this is the same accusation I used to hear about the
results of Terry Winograd's work.  His system for understanding the language
of the simplified blocks world using semantic modeling stripped so much of
the problem away that it hit a complexity wall when applied to larger world
domains.

What do you think of these ideas?

Regards,

-- Harry


Harry Forsdick <http://www.forsdick.com/resume/>
Lexington Photo Scanning <http://lexingtonphotoscan.com/>
Town Meeting Member Precinct 7 <http://lexingtontmma.org/>
harry at forsdick.com
www.forsdick.com
  46 Burlington St.
Lexington, MA 02420 <https://goo.gl/xZXT2F>
(781) 799-6002 (mobile) <callto:17817996002>
meet.jit.si/HarryForsdick (video)
Click <http://forsdick.weebly.com/home/my-websites> to see my other websites