[ExI] Bender's Octopus (re: LLMs like ChatGPT)

Will Steinberg steinberg.will at gmail.com
Sat Mar 25 13:19:49 UTC 2023


The Chinese Room argument is garbage because a magic book with the answers
to every question isn't real, and if it were, it would already be a mind.

I find that thought experiments with shoddy bounds often fail hard. The
bound here is the beginning of the experiment, where the situation is
already magically in front of us. Where did the book come from? How was
it created?

Of course it's easy to write out the words for a thought experiment when
you invent an object that magically does exactly what you need it to do,
an object central to the experiment but conveniently not the subject of
it. A thought experiment could still include this book, but then the book
should be the center of the experiment.
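
To make that concrete: the "book" is just an enormous lookup table, and
the person in the room is an interchangeable interpreter of it. A rough
Python sketch of the setup (the table entries and fallback reply are
invented placeholders; a real table would need an entry for every possible
conversation, which is exactly the magic being assumed into existence):

    # The "book": a lookup table from Chinese input to Chinese output.
    # In reality it would need an entry for every possible conversation --
    # the part of the thought experiment that is simply conjured into being.
    book = {
        "你最喜欢的食物是什么？": "我最喜欢饺子。",  # favorite food? -> dumplings
    }

    def operator(question, book):
        """The person in the room: matches symbols, understands nothing."""
        return book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    # Swap anyone (or anything) in as the operator and the answers never
    # change, because every answer comes from the book, not from whoever
    # happens to look it up.
    print(operator("你最喜欢的食物是什么？", book))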

On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
>
> On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>>
>>
>> On Thu, Mar 23, 2023 at 9:37 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> There's no way to read this paper: https://arxiv.org/pdf/2303.12712.pdf
>>> and come away with the impression that GPT-4 has no idea what it is talking
>>> about.
>>>
>>
>> Hmm, nothing in the abstract even remotely suggests to me that GPT-4 will
>> know word meanings any more than does GPT-3. Eventually AI on digital
>> computers will far surpass human intelligence, but even then these
>> computers will be manipulating the forms of words and not their meanings.
>>
>
> It seems to me that you have accepted Searle's arguments. I believe I can
> knock down his Chinese room argument. If that is what you are basing your
> decision on, you should know that almost no philosophers or computer
> scientists believe his argument holds water. Here's just one of the many
> flaws in the argument: there's more than one mind in the room. Ask the room
> about its favorite food, or about its experiences as a child. The answers
> given will not be Searle's. Swap Searle for someone else and the room will
> respond the same way. Searle is an interchangeable cog in the machine. Yet
> Searle wants us to believe only his opinion matters. In truth, his position
> is no different from that of the "laws of physics," which "mindlessly"
> compute our evolving brain state "without any understanding" of what goes
> on in our heads. Searle's Chinese room argument works as any great magic
> trick does:
> through misdirection. Ignore the claims made by the man in the room who is
> shouting and waving his arms. Since we've established there are two minds
> in the room, we can replace Searle with a mindless demon and there still
> will be one mind left.
>
>
>
>> Do you believe, like my friend who fell in love with a chatbot, that a
>> software application can have genuine feelings of love for you?
>>
>
> I think we should defer such a debate until such time as we can
> confidently define what a "genuine feeling" is and how to implement one.
>
> Jason
>