[ExI] Substitution argument was Re: Is Artificial Life Conscious?

Stuart LaForge avant at sollegro.com
Sat May 21 09:26:02 UTC 2022


Quoting Brent Allsop:


> No, only the popular consensus functionalists, led by Chalmers and
> his derivative and mistaken "substitution argument" work, end up
> thinking it is a hard problem, leading the whole world astray.  The
> hard problem would be solved by now, if it weren't for all that.
> If you understand why the substitution argument is a mistaken
> sleight of hand, that so-called "hard problem" goes away.  All the
> stuff like "What is it like to be a bat?", how to bridge the
> explanatory gap, and so on, simply falls away once you know the
> colorness quality of something.

I would probably be lumped in with the functionalists, since I think
intelligence is a literal mathematical fitness function on tensors,
optimized by gradient descent: following the partial derivatives of
the fitness function with respect to the network's parameters,
evaluated against environmental parameters.  In the brain, these
tensors represent the relative weights and biases of the neurons in
the neural network.  I am toying with calling these tensor functions
SELFs, for scalable epistemic learning functions.
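
To make that concrete, here is a minimal sketch in plain NumPy (toy
data and my own illustrative names, not a formal definition of a
SELF) of a weight tensor being fitted to environmental parameters by
gradient descent:

    import numpy as np

    # Toy "environment": inputs and targets the learner must fit.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))         # environmental inputs
    true_w = np.array([1.5, -2.0, 0.5])   # hidden structure of the world
    y = X @ true_w                        # environmental targets

    w = np.zeros(3)                       # the learner's weight tensor
    lr = 0.1
    for step in range(200):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # partials of the loss
        w -= lr * grad                         # descend the gradient
    print(w)                                   # approaches true_w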

That being said, I have issues with the substitution argument.  For
one thing, the larger a network gets, the more information lies
between nodes relative to the information within nodes: a network of
n nodes has only n components but up to n(n-1)/2 pairwise
connections, so relationships between components increase in
importance relative to the components themselves.  In my theory, this
is the essence of emergence.
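
A quick count makes the point: components grow linearly with network
size, while the possible relationships between them grow
quadratically.

    for n in (10, 100, 1000):
        print(n, "nodes,", n * (n - 1) // 2, "possible connections")
    # 10 nodes, 45 possible connections
    # 100 nodes, 4950 possible connections
    # 1000 nodes, 499500 possible connections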

It might aid intuition to examine a higher-order network.  The
substitution argument suggests that a small part of my brain could be
replaced by a functionally identical artificial part and I would not
be able to tell the difference.  The problem with this argument is
that the function of any neuron or neural circuit in the brain is
determined not solely by the properties of that neuron or circuit,
but by its holistic relationship with all the other neurons it is
connected to.  So not only would an artificial neuron fail to be an
"indistinguishable substitute" for the native neuron, but even
another identical biological neuron would not be a sufficient
replacement unless it had somehow been grown or developed in the
context of a brain identical to yours.
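
Here is a toy numerical illustration (nothing like a real neuron,
just two "networks" built from different parts that compute the same
overall function): transplanting a component from one into the other
breaks the function, even though the donor network as a whole behaved
identically.

    # Two factorizations of the SAME overall function, y = 6x:
    #   net A: hidden gain uA = 2, output gain vA = 3
    #   net B: hidden gain uB = 6, output gain vB = 1
    uA, vA = 2.0, 3.0
    uB, vB = 6.0, 1.0
    x = 1.0
    print(vA * (uA * x), vB * (uB * x))  # 6.0 6.0 -- identical behavior
    # Transplant net B's "hidden neuron" into net A:
    print(vA * (uB * x))                 # 18.0 -- the hybrid misbehaves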

It might be more intuitively obvious to consider your family rather
than a brain.  If you were instantaneously replaced with a clone of
yourself, even one trained on your memories up until, say, last
month, your family would notice some pretty jarring differences
between you and your clone.  Those differences could eventually go
away as your family adapted to your clone and your clone adapted to
your family, but the replacement itself would be obvious to your
family when it occurred.

Similarly, an artificial replacement neuron or neural circuit (or
even a biological one) would have to undergo "on the job training" to
sufficiently substitute for the component it was replacing.  And if
the circuit were extensive enough, you and the people around you
would notice a difference.
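
Continuing the toy example above (self-contained, illustrative values
only), "on the job training" amounts to retuning just the
transplanted part in the context of its new host until the original
behavior is restored:

    vA, x = 3.0, 1.0          # the host network's remaining parts
    u = 6.0                   # the transplanted unit
    for _ in range(200):
        err = vA * u * x - 6.0        # deviation from original behavior
        u -= 0.01 * err * vA * x      # gradient step on the new unit only
    print(vA * u * x)                 # ~6.0 -- function restored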

> And I don't really know much about the problem of universals.  I
> just know that we live in a world full of LOTS of colorful things,
> yet all we know are the colors things seem to be.  Nobody yet knows
> the true intrinsic colorness quality of anything.  The emerging
> consensus Representational Qualia Theory, and all the supporters of
> its sub-camps, are predicting that once we discover which of all
> our descriptions of stuff in the brain is a description of redness,
> this will falsify all but THE ONE camp finally demonstrated to be
> true.  All the supporters of the falsified camps will then be seen
> jumping to this one yet-to-be-falsified camp.  We are tracking all
> this in real time, and already seeing significant progress.  In
> other words, there will be irrefutable consensus proof that the
> 'hard problem' has finally been resolved.  I predict this will
> happen within 10 years.  Anyone care to make a bet that THE ONE
> camp will have over 90% "Mind Expert consensus", with more than
> 1000 experts in total participating, within 10 years?

Consensus is simply consensus; it is not proof.  The majority, and
even the totality, have been wrong about a great many things over the
long span of history.
 
>>   
>>> First, we must recognize that redness is not an intrinsic quality
>>> of the strawberry; it is a quality of our knowledge of the
>>> strawberry in our brain.  This must be true, since we can invert
>>> our knowledge by simply inverting any transducing system anywhere
>>> in the perception process.
>>> If we have knowledge of a strawberry that has a redness quality, and if we
>>> objectively observed this redness in someone else's brain, and fully
>>> described that redness, would that tell us the quality we are describing?
>>> No, for the same reason you can't communicate to a blind person what
>>> redness is like.
>>
>> Why not? If redness is not intrinsic to the strawberry but is instead 
>> a quality of our knowledge of the strawberry, then why can't we 
>> explain to a blind person what redness is like? Blind people have 
>> knowledge of strawberries and plenty of glutamate in their brains. 
>> Just tell them that redness is what strawberries are like, and they 
>> will understand you just fine.
>
> Wait, what?  No, you can't.  Sure, maybe if they had been sighted,
> had seen a strawberry with their eyes (i.e. directly experienced
> redness knowledge), and then became blind.  They would be able to
> kind of remember what that redness was like, but they would no
> longer be able to experience it.

But how does the experience of redness in the sighted change the
glutamate (or whatever representational "stuff" you hypothesize)
versus the glutamate of the blind?  Surely you can see my point:
redness must be learned, and the brains of the color-learned are
chemically indistinguishable from the brains of the blind.  If there
were any representational "stuff", then it would lie in the
difference between the brains of the sighted and the blind.  I would
posit that any such difference would lie in the neural wiring and
synaptic weights, which would be chemically indistinguishable but
structurally and functionally distinct.
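
A toy sketch of that posit (illustrative, not neuroscience): two
networks built from chemically identical parts, i.e. the same units
and the same activation function, but wired with different weights,
compute different functions from the same stimulus.

    import numpy as np
    relu = lambda z: np.maximum(z, 0.0)       # identical "chemistry"
    x = np.array([1.0, -1.0])                 # the same stimulus
    W_a = np.array([[1.0, 0.0], [0.0, 1.0]])  # one wiring
    W_b = np.array([[0.0, 1.0], [1.0, 0.0]])  # another wiring
    print(relu(W_a @ x))  # [1. 0.]
    print(relu(W_b @ x))  # [0. 1.] -- same parts, different function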
   
>>   
>>> The entirety of our objective knowledge tells us nothing of the
>>> intrinsic qualities of any of that stuff we are describing.
>>
>> Ok, but you just said that redness was not an intrinsic quality of 
>> strawberries but of our knowledge of them, so our objective knowledge 
>> of them should be sufficient to describe redness.
>
> Sure, it is sufficient, but until you know which sufficient
> description is a description of redness and which is a description
> of greenness, we won't know which is which.

We don't need to know anything as long as we are constantly learning.
If you woke up tomorrow and everything that was red looked green to
you, at first you would be confused, but after a week you would adapt
and be functionally equivalent to how you are now.  You might
eventually even forget there ever was a switch.
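
In network terms (a toy sketch of my own): if the input channels are
permuted, relearning the first-layer weights to absorb the
permutation restores exactly the original function, which is all the
adaptation needs to accomplish.

    import numpy as np
    rng = np.random.default_rng(1)
    W = rng.normal(size=(4, 3))        # first-layer weights over (R, G, B)
    x = rng.normal(size=3)             # a color signal
    P = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])          # swap the red and green channels
    W_adapted = W @ P.T                # weights after relearning
    print(np.allclose(W @ x, W_adapted @ (P @ x)))  # True -- equivalent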
   
>>
>> So if this "stuff" is glutamate, glycine, or whatever, and it exists 
>> in the brains of blind people, then why can't it represent redness (or 
>> greenness) information to them also?
>
> People may be able to dream redness.  Or they may take some
> psychedelics that enable them to experience redness, or surgeons
> may stimulate a part of the brain during brain surgery, producing a
> redness experience.  Those rare cases are possible, but that isn't
> yet normal.  Once they discover which of all our descriptions of
> stuff in the brain is a description of redness, someone like
> Neuralink will be producing that redness quality in blind people's
> brains all the time, with artificial eyes, and so on.  But to date,
> normal blind people can't experience redness quality.

Sure they can, even if it is just through a frequency of sound output
by an Orcam MyEye.  You learn what redness is by some manner of
perception, and how you perceive it does not matter.  Synesthetes
might even be able to taste or smell redness.
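
For instance, a sensory-substitution scheme could tie hue to pitch.
This is a hypothetical mapping of my own for illustration, not the
Orcam MyEye's actual algorithm:

    # Hypothetical hue-to-pitch mapping (NOT Orcam's real scheme).
    def hue_to_pitch(hue_deg, lo=220.0, hi=880.0):
        """Map hue in [0, 360) onto a log-spaced pitch range in Hz."""
        return lo * (hi / lo) ** (hue_deg / 360.0)

    print(round(hue_to_pitch(0.0), 1))    # red   -> 220.0 Hz
    print(round(hue_to_pitch(120.0), 1))  # green -> 349.2 Hz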
   
>>   
>>>
>>>
>>>>> This is true whether that stuff is some kind of "material",
>>>>> "electromagnetic field", "spiritual", or "functional" stuff; it
>>>>> remains a fact that your knowledge, composed of that, has a
>>>>> redness quality.
>>>>
>>>> It seems you are quite open-minded when it comes to what
>>>> qualifies as "stuff".  If so, then why does your 3-robot scenario
>>>> single out information as not being stuff?  If you wish to insist
>>>> that something physical in the brain has the redness quality and
>>>> conveys knowledge of redness, then why glutamate?  Why not
>>>> instead hypothesize the one thing that prima facie has the
>>>> redness property to begin with, i.e. red light?  After all, there
>>>> are photoreceptors in the deep brain.
>>>>
>>>
>>> Any physical property like redness, greenness, +5 volts, holes in
>>> a punch card... can represent (convey) an abstract 1.  There must
>>> be something physical representing that 1, but, again, you can't
>>> know what that is unless you have a transducing dictionary telling
>>> you which is which.
>>
>> You may need something physical to represent the abstract 1, but that 
>> abstract 1 in turn represents some different physical thing.
>
> Only if you have a transducing dictionary that enables such, or you
> think of it in that particular way.  Other than that, it's just a
> set of physical facts, which can be interpreted as something else;
> that is all.

A transducing dictionary is not enough.  Something has to read the
dictionary, and all meaning is relative.  If your wife were lost in
the jungle, her cries for help would mean something very different to
you than they would to a hungry tiger.  In communication, it takes
meaning to understand meaning.  The reason you can understand
abstract information is that you yourself are abstract information.

Stuart LaForge



