[ExI] Bender's Octopus (re: LLMs like ChatGPT)
William Flynn Wallace
foozler83 at gmail.com
Thu Mar 23 21:31:44 UTC 2023
our brains build an internal model that predicts the environment. Then we
turn to this model whenever we are faced with a new situation. We can use
this model to simulate and predict likely outcomes of different courses of
action.

Jason
Put into learning terms, we remember actions and their consequences, good
or bad. Then, when we encounter a new situation, we look for similarities
to situations we already know about and apply our old responses (stimulus
generalization), hoping that the new stimuli won't interfere with them.
When we put those responses into action, we get feedback on how well they
work. If they don't work, or work only partially, we use whatever
intuitions we can muster from our unconscious (probably variations on our
old responses) and try them out. Probably no new situation is entirely
unfamiliar to us, so our old responses will likely not be totally useless.
Easily programmable into an AI. Probably already done. (a rough sketch
follows below)

bill w
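A toy sketch of that loop in Python (only an illustration, not any
existing system's code; the Agent class, the feature vectors, the 0.5
"close enough" threshold, and the reward values are all invented for the
example):

-------------------------
import math

class Agent:
    def __init__(self):
        # remembered (situation, response, reward) triples
        self.memory = []

    @staticmethod
    def similarity(a, b):
        # cosine similarity between feature vectors; stimulus
        # generalization falls out of treating nearby vectors as
        # "the same kind" of situation
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def choose(self, situation, default="explore"):
        # recall the best-rewarded response from the most similar
        # remembered situation; explore when nothing in memory is
        # close enough (the 0.5 threshold is arbitrary)
        best, best_score = default, 0.5
        for past, response, reward in self.memory:
            score = self.similarity(situation, past) * reward
            if score > best_score:
                best, best_score = response, score
        return best

    def feedback(self, situation, response, reward):
        # store the outcome so the response can generalize to
        # similar situations later
        self.memory.append((situation, response, reward))

# an old "large animal threatening" lesson generalizes to a new threat
agent = Agent()
agent.feedback([1.0, 0.0, 0.8], "make yourself look big", reward=0.9)
print(agent.choose([0.9, 0.1, 0.7]))  # -> make yourself look big
print(agent.choose([0.0, 1.0, 0.0]))  # -> explore (nothing similar known)
-------------------------

Nearest-neighbor recall over remembered outcomes is about the simplest
way to get stimulus generalization; fancier learners mostly just replace
the similarity measure and the memory.
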
On Thu, Mar 23, 2023 at 4:20 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
>
>
> On Thu, Mar 23, 2023, 4:24 PM Gordon Swobe via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Bender's point is not that ChatGPT is incapable of generating sensible
>> sentences about sticks and bears. It is that these LLMs don't know the
>> meanings of any words whatsoever. Confronted with a word it has never seen,
>> it must do a statistical analysis to try to find probable next words, never
>> knowing what any of them mean.
>>
>
> You keep insisting that. But you don't address the fact that our brains
> learn meaning and understanding from tapping into what amounts to a pure
> information channel.
>
> If you ask me how our brains achieve understanding my guess would be our
> brains build an internal model that predicts the environment. Then we turn
> to this model whenever we are faced with a new situation. We can use this
> model to simulate and predict likely outcomes of different courses of
> action.
>
> I see no reason that an AI could not or does not do this. Actually I think
> we have pretty strong evidence that this is exactly what existing LLMs do.
>
> Jason
>
>
>> -gts
>>
>> On Thu, Mar 23, 2023 at 1:57 PM Stuart LaForge via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> Quoting Gordon Swobe via extropy-chat <extropy-chat at lists.extropy.org>:
>>>
>>> > Emily M. Bender, a computational linguist at the University of
>>> Washington,
>>> > makes the same argument that I hold as valid: Large Language Models
>>> > are not conscious or human-like, as they lack referents.
>>> >
>>> > An interesting thought experiment:
>>> >
>>> > "Say that A and B, both fluent speakers of English, are independently
>>> > stranded on two uninhabited islands. They soon discover that previous
>>> > visitors to these islands have left behind telegraphs and that they can
>>> > communicate with each other via an underwater cable. A and B start
>>> happily
>>> > typing messages to each other.
>>> >
>>> > Meanwhile, O, a hyperintelligent deep-sea octopus [ChatGPT] who is
>>> unable
>>> > to visit or observe the two islands, discovers a way to tap into the
>>> > underwater cable and listen in on A and B’s conversations. O knows
>>> nothing
>>> > about English initially but is very good at detecting statistical
>>> patterns.
>>> > Over time, O learns to predict with great accuracy how B will respond
>>> to
>>> > each of A’s utterances.
>>> >
>>> > Soon, the octopus enters the conversation and starts impersonating B
>>> and
>>> > replying to A. This ruse works for a while, and A believes that O
>>> > communicates as both she and B do — with meaning and intent. Then one
>>> day A
>>> > calls out: “I’m being attacked by an angry bear. Help me figure out
>>> how to
>>> > defend myself. I’ve got some sticks.” The octopus, impersonating B,
>>> fails
>>> > to help.
>>>
>>> I posed this exact question to ChatGPT and got a very sensible,
>>> actually helpful answer:
>>> ---------------------------
>>>
>>> Me: I’m being attacked by an angry bear. Help me figure out how to
>>> defend myself. I’ve got some sticks.
>>>
>>> ChatGPT: If you find yourself in a situation where you're being
>>> attacked by a bear, your first priority should be to get away from the
>>> animal as quickly as possible. If you have sticks, they can be useful
>>> in defending yourself, but only as a last resort.
>>>
>>> Here are some tips to help you defend yourself against an angry bear
>>> using sticks:
>>>
>>> Hold the sticks firmly: Make sure you grip the sticks tightly with
>>> both hands, so they don't slip out of your grasp.
>>>
>>> Make yourself look big: Stand up as tall as possible and raise your
>>> arms above your head. This will make you appear larger and more
>>> intimidating to the bear.
>>>
>>> Make noise: Shout, scream, and make as much noise as possible. This
>>> will make the bear aware that you are a threat and may cause it to
>>> back off.
>>>
>>> Aim for the bear's sensitive areas: If the bear charges at you, aim
>>> for its nose or eyes with the sticks. These are sensitive areas and
>>> may cause the bear to retreat.
>>>
>>> Back away slowly: If the bear backs off, slowly and carefully move
>>> away while still facing the bear. Do not turn your back on the bear or
>>> run away, as this may trigger it to chase you.
>>>
>>> Remember, using sticks to defend yourself against an angry bear is a
>>> last resort, and you should always try to get away from the animal as
>>> quickly as possible.
>>> -------------------------
>>> > How could it succeed? The octopus has no referents, no idea what
>>> > bears or sticks are. No way to give relevant instructions, like to go
>>> grab
>>> > some coconuts and rope and build a catapult. A is in trouble and feels
>>> > duped. The octopus is exposed as a fraud."
>>> > "You Are Not a Parrot. And a chatbot is not a human. And a linguist
>>> > named Emily M. Bender is very worried what will happen when we forget
>>> > this."
>>> >
>>> https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
>>>
>>> ChatGPT's answer is far superior to Bender's very impractical answer
>>> of "go grab some coconuts and rope and build a catapult". So not only
>>> is ChatGPT smarter than Bender's Octopus, but it also seems smarter than
>>> Bender herself.
>>>
>>> Stuart LaForge