[ExI] GPT-4 on its inability to solve the symbol grounding problem

Brent Allsop brent.allsop at gmail.com
Sat Apr 8 21:55:38 UTC 2023


On Sat, Apr 8, 2023 at 3:21 PM Gordon Swobe via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Sat, Apr 8, 2023 at 2:57 PM Jason Resch via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Do we agree then that your doorbell system has some trivial degree of
>> awareness?
>>
>
> Here's the rub: if my smart doorbell has awareness and if awareness is
> what defines consciousness, then how about my automobile?  It seems aware
> of my turning the key in the ignition and starts as if by magic.  Where
> does it end? If there is no end then you are talking about panpsychism in
> which everything in the world is aware and conscious. That's fine, but in
> such a world, what do you call that thing that a boxer loses when he is
> knocked unconscious? That is what I mean by consciousness.
>

But what does that mean?
The boxer has visual conscious knowledge of what he sees, composed of lots
of pixels of knowledge, each of which has a colorness quality.
You can take the pixels, one at a time, and stop them from being
computationally bound in with the rest.  When this happens, he is no longer
consciously aware of that one pixel.
Then proceed to disconnect every other bit of phenomenal knowledge, till he
is aware of only two pixels: one a pixel of redness from his
opponent's shorts, and the other a single pixel of greenness from his own
shorts.  As long as he has two computationally bound qualities, he fits the
definition of phenomenally conscious.  But when those last two are no
longer computationally bound, he is no longer conscious.
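
For concreteness, here is a toy sketch of that disconnection process in
Python.  Everything in it is illustrative: the names (bound,
is_phenomenally_conscious) are made up, and the "at least two bound
qualities" test is just the definition given above, not a model of any
real binding mechanism.

    # Toy model: phenomenal consciousness, as defined above, requires at
    # least two computationally bound qualities.
    def is_phenomenally_conscious(bound_qualities):
        return len(bound_qualities) >= 2

    # The boxer's visual field: many pixel qualities bound together.
    bound = {f"pixel_{i}" for i in range(1000)}
    bound |= {"redness_opponent_shorts", "greenness_own_shorts"}

    # Disconnect qualities one at a time, leaving only the last two.
    for q in list(bound - {"redness_opponent_shorts", "greenness_own_shorts"}):
        bound.discard(q)

    print(is_phenomenally_conscious(bound))  # True: two bound qualities

    # Unbind one of the last two; nothing remains bound with anything
    # else, so by this definition he is no longer conscious.
    bound.discard("greenness_own_shorts")
    print(is_phenomenally_conscious(bound))  # False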

