[ExI] Bender's Octopus (re: LLMs like ChatGPT)
Giovanni Santostasi
gsantostasi at gmail.com
Sat Mar 25 21:13:24 UTC 2023
“This view is so queer that metaphysicians have invented all sorts of
theories designed to substitute something less incredible.”
-- Bertrand Russell, “An Outline of Philosophy” (1927)
<https://archive.org/details/in.ernet.dli.2015.222952/page/n157/mode/2up?q=%22We+suppose+that+a+physical+process+starts+from+a+visible+object%22>
Jason,
Yes, great quotes on the topic, and remarkably consistent among great
scientific minds over the centuries. It is clear to a scientific mind what
this is all about, but the metaphysicians still like to make it confusing
because their position is, in the end, basically a religious one (the main
goal is to show that humans are special and made in God's image, whatever
that means).
On Sat, Mar 25, 2023 at 1:37 PM Giovanni Santostasi <gsantostasi at gmail.com>
wrote:
>
> *The Chinese Room argument is garbage because a magic book with the
> answers to every question isn't real, and if it was, it would already be a
> mind.*
>
> Yep, basically the description of the Chinese room is exactly what our
> brain is, with the neurons taking the place of the people in the room. By
> the time the Chinese room can answer as a sentient being, the room is a
> mind. Not sure why this argument was ever taken seriously.
>
> On Sat, Mar 25, 2023 at 6:25 AM Will Steinberg via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The Chinese Room argument is garbage because a magic book with the
>> answers to every question isn't real, and if it was, it would already be a
>> mind.
>>
>> I find that thought experiments with shoddy boundaries often fail hard.
>> The boundary here is the beginning of the experiment, where the situation
>> is already magically in front of us. Where did the book come from? How
>> was it created?
>>
>> Of course it's easy to write out the words for a thought experiment when
>> you invent an object, central to the experiment but of course not the
>> subject of it, that magically does exactly what you need it to do in order
>> to make the experiment work. A thought experiment could still have this
>> book in it, but then the book should be the center of the experiment.
>>
>> On Fri, Mar 24, 2023, 5:49 AM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Fri, Mar 24, 2023, 12:14 AM Gordon Swobe <gordon.swobe at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Mar 23, 2023 at 9:37 PM Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> There's no way to read this paper:
>>>>> https://arxiv.org/pdf/2303.12712.pdf and come away with the
>>>>> impression that GPT-4 has no idea what it is talking about.
>>>>>
>>>>
>>>> Hmm, nothing in the abstract even remotely suggests to me that GPT-4
>>>> will know word meanings any more than does GPT-3. Eventually AI on digital
>>>> computers will far surpass human intelligence, but even then these
>>>> computers will be manipulating the forms of words and not their meanings.
>>>>
>>>
>>> It seems to me that you have accepted Searle's arguments. I believe I
>>> can knock down his Chinese room argument. If that is what you are basing
>>> your conclusion on, you should know that almost no philosophers or
>>> computer scientists believe his argument holds water. Here's just one of
>>> the many flaws in the argument: there's more than one mind in the room.
>>> Ask the room about its favorite food, or about its experiences as a
>>> child. The answers given will not be Searle's. Swap Searle for someone
>>> else and the room will respond the same way. Searle is an interchangeable
>>> cog in the machine, yet he wants us to believe that only his opinion
>>> matters. In truth, his position is no different from that of the "laws of
>>> physics," which "mindlessly" compute our evolving brain states "without
>>> any understanding" of what goes on in our heads. Searle's Chinese room
>>> argument works as any great magic trick does: through misdirection.
>>> Ignore the claims made by the man in the room who is shouting and waving
>>> his arms. Since we've established that there are two minds in the room,
>>> we can replace Searle with a mindless demon and there will still be one
>>> mind left.
>>>
>>>
>>>
>>>> Do you believe, like my friend who fell in love with a chatbot, that a
>>>> software application can have genuine feelings of love for you?
>>>>
>>>
>>> I think we should defer such a debate until such time as we can
>>> confidently define what a "genuine feeling" is and how to implement one.
>>>
>>> Jason
>>>