[ExI] Symbol Grounding

Jason Resch jasonresch at gmail.com
Mon Apr 24 05:14:40 UTC 2023


On Sun, Apr 23, 2023 at 9:03 PM Brent Allsop <brent.allsop at gmail.com> wrote:

>
> Hi Jason,
>
> Yes, I thought I replied to this already, but maybe I never finished it.
> Stathis Papaioannou (I CC'd him; he is a brilliant former member of this
> list who I think everyone here would agree is almost as cool, calm, and
> collected as you ;) ), who is another functionalist, pointed me to that
> paper over a decade ago.
>

Hi Stathis. :-)

I know him from the Everything List.



> We've been going at it ever since.  And that paper is derivative of
> previous work by Hans Moravec.  I first read Moravec's description of that neural
> substitution argument
> <https://canonizer.com/topic/79-Neural-Substitn-Argument/1-Agreement> in his
> book <https://www.hup.harvard.edu/catalog.php?isbn=9780674576186>, back
> in the 90s.  I've been thinking about it ever since.
>

A sketch of the neural substitution argument was introduced by Moravec in
his 1988 book, but Chalmers's paper, I think, goes much deeper: it asks what
would happen during the gradual replacement, considers the space of
possibilities, and further argues why functional invariance strongly
suggests qualia must also be preserved (the dancing qualia part of his
thought experiment).
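
To make that invariance claim concrete, here is a toy sketch in Python (my
own illustration, not anything from Chalmers's paper; the neuron classes,
weights, and thresholds are invented for the example). If each replacement
unit computes exactly the same input/output function as the unit it
replaces, the network's externally observable behavior cannot change at any
step of the gradual substitution:

class BioNeuron:
    def __init__(self, weights, threshold):
        self.weights, self.threshold = weights, threshold

    def fire(self, inputs):
        # Simple threshold unit: weighted sum of inputs vs. threshold.
        return sum(w * x for w, x in zip(self.weights, inputs)) >= self.threshold

class SiliconNeuron(BioNeuron):
    """A different 'substrate' (a different class), but an identical
    input/output mapping."""
    pass

def network_output(neurons, inputs):
    return [n.fire(inputs) for n in neurons]

brain = [BioNeuron([0.5, -0.2, 0.8], 0.3) for _ in range(10)]
stimulus = [1.0, 0.0, 1.0]
baseline = network_output(brain, stimulus)

# Replace one neuron at a time; the output is identical at every step,
# because each replacement computes the same function as its predecessor.
for i, old in enumerate(brain):
    brain[i] = SiliconNeuron(old.weights, old.threshold)
    assert network_output(brain, stimulus) == baseline

Of course, the entire disagreement is over whether a real neuron's
contribution to experience is exhausted by such an input/output function;
the sketch only shows that *if* it is, behavior (and, Chalmers argues,
qualia) is preserved at every step.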


> Chalmers admits, in that paper, that one possibility is that the
> substitution will fail.  This is what we are predicting: that when you get
> to the first pixel which has a redness quality, and you try to
> substitute it with something that does not have a redness quality, you will
> not be able to progress beyond that point.  Kind of a tautology, actually.
>

Okay, this is great progress. I think it may indicate a departure between
you and Gordon. I *think* Gordon believes it is possible for a computer to
perfectly replicate the behavior of a person, but that it would not be
conscious. This position was called "Weak AI" by Searle. Searle believes AI
in principle can do anything a human can, but that without the right causal
properties it would not be conscious.

From the above, it sounds to me as if you are in the camp of Penrose, the
non-computable physics camp: what the brain does cannot be explained in
terms of finite, describable, computable rules. The brain's range of
behaviors transcends what can be computed. Is this an accurate description
of your view?


>
> Also, I have pointed out to Stathis, and other functionalists, a gazillion
> times over the years, that even a substrate-independent function couldn't
> be responsible for redness, for the same mistaken reasoning (it assumes the
> substitution will succeed).
>

You have said it a gazillion times, yes, but what is the reason that a
substrate-independent function couldn't be responsible for redness? I know
you believe this, but why do you believe it? What is your argument or
justification?


>   The neural substitution argument proves it can't be functionalism,
> either.
>

I think you might mean: the assumption that organizationally invariant
neural substitution is not possible implies that some functions are not
substrate-independent (which would imply computationalism is false). But
this does not follow from the argument itself; it follows from an assumption
about the outcome of the experiment the argument describes.


>   All the neural substitution argument proves is that NOTHING can have a
> redness quality, which, of course, is false.
>

What do you think you would feel as neurons in your visual cortex were
replaced one by one with artificial silicon ones? Would you notice things
slowly start to change in your perception? Would you mention the change out
loud and seek medical attention? How would this work mechanistically? Do
you see it as a result of the artificial neurons having firing patterns
that differ from those of the biological ones (and that cannot be
replicated)?



> So this proves the thought experiment must have a bad assumption.
> Which, of course, is the false assumption that the substitution will
> succeed.
>

1. Do you think everything in the brain operates according to the laws of
physics?
2. What laws or objects in physics cannot be simulated by a computer?
3. How are the items (if any) mentioned in 2 related to the functions of
the brain?

Jason


> All this is described in the Neural Substitution Fallacy camp
> <https://canonizer.com/topic/79-Neural-Substitn-Argument/2-Neural-Substtn-Fallacy>,
> which, for some reason, has no competing camp.
>
> On Sun, Apr 23, 2023 at 7:40 PM Jason Resch <jasonresch at gmail.com> wrote:
>
>>
>>
>> On Sun, Apr 23, 2023 at 8:27 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>>
>>> On Sun, Apr 23, 2023 at 4:43 PM Stuart LaForge via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>> Quoting Brent Allsop via extropy-chat <extropy-chat at lists.extropy.org>:
>>>>
>>>> > This is so frustrating.  I'm asking a simple, elementary school level
>>>> > question.
>>>>
>>>> So you think that the Hard Problem of Consciousness, reframed as your
>>>> so-called "Colorness Problem", is an elementary school level question?
>>>> Then maybe you should quit bugging us about it and seek the advice of
>>>> elementary school children.
>>>>
>>>
>>> I am working with those people who do get it.  Now more than 40 of
>>> them, including leaders in the field like Steven Lehar
>>> <https://canonizer.com/topic/81-Mind-Experts/4-Steven-Lehar>, are
>>> supporting the camp that says so.  Even Dennett's Predictive Bayesian
>>> Coding Theory
>>> <https://canonizer.com/topic/88-Theories-of-Consciousness/21-Dennett-s-PBC-Theory>
>>> is a supporting sub-camp, demonstrating the progress we are making.
>>> Gordon, would you be willing to support RQT
>>> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>?
>>> The elementary school kids are telling us to plug things into the brain
>>> until we find what it is that has a redness quality.  So we are
>>> collecting the signatures, and once we get enough, experimentalists will
>>> finally get the message, start doing this, and eventually be able to
>>> demonstrate to everyone what it is that has a [image: red_border.png]
>>> property.  To my understanding, that is how science works.
>>>
>>>
>>> The reason I am bugging you functionalists is because I desperately want
>>> to understand how everyone thinks about consciousness, especially the
>>> leading popular consensus functionalism camps.  Giovanni seems to be
>>> saying that in this functionalist view there is no such thing as color
>>> qualities, but to me, saying there is no color in the world is just
>>> insane.  You seem to be at least saying something better than that, but
>>> as far as I can see, your answers are just more interpretations of
>>> interpretations; nowhere is there any grounding.  You did get close to a
>>> grounded answer when I asked how the word 'red' can be associated with
>>> [image: green_border.png].  Your reply was "at some point during the
>>> chatbot's training the English word red was associated with *the picture
>>> in question*."  But "*the picture in question*" could be referring to at
>>> least four different things.  It could be associated with the LEDs
>>> emitting the 500 nm light.  It could be the 500 nm light which "the
>>> picture" is emitting.  It could be associated with your knowledge of
>>> [image: green_border.png], in which case it would have the same quality
>>> as your knowledge of that.  Or it could be associated with someone who
>>> was engineered to have your inverted knowledge (with a red / green signal
>>> inverter between their retina and optic nerve), in which case it would be
>>> like your knowledge of [image: red_border.png].  So, if that is indeed
>>> your answer, which one of these four things are you referring to?  Or is
>>> it something else?
>>>
>>>
>>> You guys accuse me of being unscientific.  But all I want to know is
>>> how a functionalist would demonstrate, or falsify, functionalist claims
>>> about color qualities, precisely because I want to be scientific.  Do you
>>> believe you have explained how functionalism's predictions about color
>>> qualities could be falsified or demonstrated, within functionalist
>>> doctrines?  If so, I haven't seen it yet.
>>>
>>
>> I've suggested several times that you read Chalmers's fading/dancing
>> qualia thought experiment. Have you done this? What is your interpretation
>> of it?
>>
>> https://consc.net/papers/qualia.html
>>
>> Jason
>>
>>
>>> So please help, as all I see is you guys saying, over and over again,
>>> that you don't need to provide an unambiguous way to demonstrate what it
>>> is that has this quality: [image: red_border.png], or, even worse, that
>>> functionalism predicts color doesn't exist.  As if saying things like
>>> that, over and over again, makes them true?