[ExI] Emily M. Bender — Language Models and Linguistics (video interview)

Gordon Swobe gordon.swobe at gmail.com
Sun Mar 26 22:49:43 UTC 2023


On Sun, Mar 26, 2023 at 1:38 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Jason Resch wrote:
>
> *3. She was asked what a machine would have to do to convince her it has
> understanding. Her example was that if Siri or Alexa were asked to do
> something in the real world, like turn on the lights, and it does that,
> then it has understanding (by virtue of having done something in the real
> world).*
>
>
> Wait a minute. So she thinks that smart home systems have understanding of
> what they're doing, but LLMs don't? I wonder how many Siris and Alexas are
> the voice interface for smart home systems? A lot, I expect.
>
> If she's right (which she's not, seems to be the consensus here), then all
> that needs to be done is link up an LLM to some smart home hardware, and
> 'ta-daaaa', instant understanding!
>
> I don't buy it.
>

She called Alexa-like understanding a "kind of" understanding, as in "sorta
kinda"; that is, she brackets the word or puts it in scare quotes. Inasmuch
as Alexa executes your command to turn off the lights, there is a sense in
which it "understands" that command. She is not referring to anything like
conscious understanding of the meanings of words.

I put the word in scare quotes the same way when I say, with a straight
face, that my pocket calculator "understands" mathematics. I would not say
it understands mathematics in the ordinary sense of the word.

-gts