[ExI] GPT-4 on its inability to solve the symbol grounding problem

Adrian Tymes atymes at gmail.com
Wed Apr 12 23:17:38 UTC 2023


On Wed, Apr 12, 2023 at 3:55 PM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Apr 12, 2023, 6:16 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Wed, Apr 12, 2023 at 1:06 PM Jason Resch via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Wed, Apr 12, 2023, 3:19 PM Adrian Tymes via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> On Wed, Apr 12, 2023 at 10:25 AM Jason Resch via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> What do you think would happen to a person whose visual cortex were
>>>>> replaced with a functionally equivalent silicon computer?
>>>>>
>>>>
>>>> As someone who's worked on this concept, and seen results in patients
>>>> where this - more or less - was actually done:
>>>>
>>>>
>>>>> A) They wouldn't notice and there would be no change in their
>>>>> subjectivity or objectively observable behavior
>>>>> B) They would notice the change in their subjectivity (perhaps
>>>>> noticing a kind of blindness) but they would function the same as before
>>>>> and not say anything
>>>>> C) They would notice the change and they would complain about being
>>>>> blind but would still be able to function as if they can see
>>>>> D) They would notice and become functionally blind, not able to drive,
>>>>> walk without bumping into things, etc.
>>>>> E) Something else
>>>>>
>>>>
>>>> B.  An attempt is made at "perfectly functionally equivalent" but that
>>>> ideal has not been achieved in practice.  There is enough of a difference
>>>> to notice.  That said, in all cases I've seen so far the difference has
>>>> been an improvement - not something worth complaining about.  (Granted, the
>>>> cases I've seen have been replacing a broken cortex or other such
>>>> component, giving sight to the formerly blind.  The "functional
>>>> equivalence" comes in for those who lost their sight, attempting to restore
>>>> what they had.  While there are degrees of blindness one could slide down
>>>> in theory - it is possible for some legally blind people to become more
>>>> blind - I have not seen this happen when this procedure is done.)  I
>>>> suppose that might be more in the spirit of C, since they might comment on
>>>> and compliment the difference, but by the literal wording of the choices B
>>>> is closest to the observed results.
>>>>
>>>> Then again, in the cases I've seen, the difference was the point of the
>>>> replacement.  But the results observed suggest that perfect replacement
>>>> would not happen even for direct replacement.
>>>>
>>>
>>> That's very interesting Adrian. Thanks for sharing your insights.
>>>
>>> What would you imagine would be the outcome if the replacement were
>>> "perfectly functionally equivalent" and performed in a normally sighted
>>> person?
>>>
>>
>> B.  "Perfect" wouldn't be perfect in practice.  There'd be enough
>> difference to notice but it would not be significantly negative.
>>
>
> I think "B" is impossible: if the functional substitution is perfect there
> is no room for the person to notice any difference in their experience. And
> if they did notice a difference they should be able to talk about it, but
> option B says they're unable to mention any difference in their perception,
> as all their outwardly visible behavior is unchanged.
>

B says no such thing.  B says they _would not_ (not _could not_) say
anything.  As in, they have the option to speak up but choose not to.
Presumably this is because they have no reason to, e.g. if their vision is
close enough to what it was before, or better.

A is theoretically possible, but in practice it seems impossible to
achieve.