[ExI] GPT-4 on its inability to solve the symbol grounding problem

Giovanni Santostasi gsantostasi at gmail.com
Tue Apr 11 06:46:13 UTC 2023


*AIs running on digital computers will always be unconscious tools of
humanity, no different in principle from GPT-4 which is already telling us
the truth about the matter if only people would listen.*

This is a purely dogmatic and religious statement with absolutely no
backing from a scientific perspective. Everything we know about biology,
physics, and computational science points to the contrary. But ok, you can
continue to believe this nonsense if it makes you feel good.
Giovanni

On Mon, Apr 10, 2023 at 11:26 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I wrote that I believe "AIs running on digital computers will always be
> unconscious tools of humanity, no different in principle from GPT-4 which
> is already telling us the truth about the matter if only people would
> listen."
>
> By GPT-4 already telling the truth about the matter, I mean, for example,
> that it reports that it cannot solve the symbol grounding problem for
> itself and can report why it cannot solve it. I do not believe that GPT-N
> will be able to solve it for itself, either, assuming it is running on a
> digital computer like GPT-4.
>
> -gts
>
>
>
> On Mon, Apr 10, 2023 at 11:29 PM Gordon Swobe <gordon.swobe at gmail.com>
> wrote:
>
>> On Mon, Apr 10, 2023 at 5:36 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>> > I think this explains a lot. You have a mistaken impression of what
>> computationalism is if you think computationalism is the same thing as the
>> computer metaphor.
>>
>> There are many more aspects to my argument, but that is where it starts.
>> The brain/mind is not fundamentally a computer or information processing
>> machine.
>>
>> In summary, computationalism is *not* the idea that the human brain
>>> operates like a computer, but rather, that a computer can be made to
>>> operate *like a human brain*.
>>>
>>
>> Yes, that is the doctrine which might as well be hanging on the front
>> door of ExI.
>>
>>
>>> We know from the Church-Turing thesis that computers can replicate the
>>> operations of any finitely describable system.
>>>
>>
>> In my view, we can simulate the brain in a computer, similar to how a
>> meteorologist might simulate a hurricane on a computer, but unless we live
>> in a digital simulation ourselves (another religious doctrine), the
>> simulation is not the same as the thing simulated.
>>
>> > Ignoring present GPTs, do you believe it is possible in principle to
>> build an AI super intelligence? One able to reason independently to such a
>> high degree that it's able to invent new technologies and conceive of new
>> scientific discoveries entirely on its own?
>>
>> Sure. But I do not believe the superintelligence or AGI will know about
>> it any more than my pocket calculator knows the results of its
>> calculations. AIs running on digital computers will always be unconscious
>> tools of humanity, no different in principle from GPT-4, which is already
>> telling us the truth about the matter if only people would listen.
>>
>> -gts
>>

