On Sat, 21 May 2022 at 19:27, Stuart LaForge via extropy-chat <extropy-chat@lists.extropy.org> wrote:
> Quoting Brent Allsop:
>
>> No, only the popular consensus functionalists, led by Chalmers with
>> his derivative and mistaken "substitution argument" work, think it is
>> a hard problem, leading the whole world astray. The hard problem
>> would be solved by now, if it wasn't for all that. If you understand
>> why the substitution argument is a mistaken sleight of hand, that
>> so-called "hard problem" goes away. All the stuff like "What is it
>> like to be a bat?", how do you bridge the explanatory gap, and all
>> that simply falls away, once you know the colorness quality of
>> something.
>
> I would probably be lumped in with the functionalists, since I think
> intelligence is a literal mathematical fitness function on tensors,
> optimized by gradient descent stepping those tensors against the
> partial derivatives of the function with respect to environmental
> parameters. In the brain, these tensors represent the relative
> weights and biases of the neurons in the neural network. I am toying
> with calling these tensor functions SELFs, for scalable epistemic
> learning functions.
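
A minimal sketch of that idea in Python, assuming a toy linear model
and a squared-error fitness function; the data and loss here are
illustrative assumptions, not the actual SELF formalism:

# Toy gradient descent: a "fitness" (loss) function over a weight
# tensor is minimized by stepping against its partial derivatives.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))      # "environmental parameters" (inputs)
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)                    # the tensor of weights to be learned
lr = 0.1
for step in range(200):
    err = X @ w - y
    grad = 2 * X.T @ err / len(y)  # partial derivatives of loss w.r.t. w
    w -= lr * grad                 # step against the gradient

print(w)                           # converges toward true_w
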
>
> That being said, I have issues with the substitution argument. For
> one thing, the larger a network gets, the more information lies
> between nodes relative to information within nodes. That is to say
> that relationships between components increase in importance relative
> to the components themselves. In my theory, this is the essence of
> emergence.
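
One way to see this (my gloss, not Stuart's words): the number of
possible pairwise relationships grows quadratically while the number
of components grows only linearly, so "between-node" structure quickly
dominates:

# Possible pairwise connections grow as n(n-1)/2, so relational
# information outpaces component information as the network grows.
for n in [10, 100, 1000, 10000]:
    edges = n * (n - 1) // 2
    print(f"{n:>6} nodes -> {edges:>12} possible edges "
          f"({edges / n:.1f}x the node count)")
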
>
> It might intuitively aid the understanding of my argument to examine
> a higher-order network. The substitution argument suggests that a
> small part of my brain could be replaced by a functionally identical
> artificial part, and I would not be able to tell the difference. The
> problem with this argument is that the function of any neuron or
> neural circuit of the brain is not determined solely by the
> properties of the neuron or neural circuit, but by its holistic
> relationship with all the other neurons it is connected to. So not
> only could an artificial neuron not be an "indistinguishable
> substitute" for the native neuron, but even another identical
> biological neuron would not be a sufficient replacement unless it was
> somehow grown or developed in the context of a brain identical to
> yours.

"Functionally identical" means that the replacement interacts with the remaining tissue exactly the same way as the original did. If it doesn't, then it isn't functionally identical.
> It might be more intuitively obvious to consider your family rather
> than a brain. If you were instantaneously replaced with a clone of
> yourself, even if that clone had been trained on your memories up
> until, let's say, last month, your family would notice some pretty
> jarring differences between you and your clone. Those problems could
> eventually go away as your family adapted to your clone, and your
> clone adapted to your family, but the actual replacement itself would
> be obvious to your family when it occurred.

How would your family notice a difference if your behaviour were exactly the same?
> Similarly, an artificial replacement neuron/neural circuit (or even a
> biological one) would have to undergo "on the job training" to
> sufficiently substitute for the component it was replacing. And if
> the circuit were extensive enough, you and the people around you
> would notice a difference.

Technical difficulty is not a problem in a thought experiment. The argument is that IF a part of your brain were replaced with a functionally identical analogue THEN your consciousness would necessarily be preserved.
>> And I don't really know much about the problem of universals. I
>> just know that we live in a world full of LOTS of colorful things,
>> yet all we know are the colors things seem to be. Nobody yet knows
>> the true intrinsic colorness quality of anything. The emerging
>> consensus Representational Qualia Theory, and all the supporters of
>> all the sub camps, are predicting that once we discover which of all
>> our descriptions of stuff in the brain is a description of redness,
>> this will falsify all but THE ONE camp finally demonstrated to be
>> true. All the supporters of all the falsified camps will then be
>> seen jumping to this one yet-to-be-falsified camp. We are tracking
>> all this in real time, and already seeing significant progress. In
>> other words, there will be irrefutable consensus proof that the
>> 'hard problem' has finally been resolved. I predict this will happen
>> within 10 years. Anyone care to make a bet that THE ONE camp will
>> have over 90% "Mind Expert consensus", with more than 1000 experts
>> in total participating, within 10 years?
>
> Consensus is simply consensus; it is not proof. The majority and even
> the totality have been wrong about a great many things over the long
> span of history.
>
>>>
>>>> First, we must recognize that redness is not an intrinsic quality
>>>> of the strawberry, it is a quality of our knowledge of the
>>>> strawberry in our brain. This must be true since we can invert our
>>>> knowledge by simply inverting any transducing system anywhere in
>>>> the perception process. If we have knowledge of a strawberry that
>>>> has a redness quality, and if we objectively observed this redness
>>>> in someone else's brain, and fully described that redness, would
>>>> that tell us the quality we are describing? No, for the same
>>>> reason you can't communicate to a blind person what redness is
>>>> like.
>>>
>>> Why not? If redness is not intrinsic to the strawberry but is
>>> instead a quality of our knowledge of the strawberry, then why
>>> can't we explain to a blind person what redness is like? Blind
>>> people have knowledge of strawberries and plenty of glutamate in
>>> their brains. Just tell them that redness is what strawberries are
>>> like, and they will understand you just fine.
>>
>> Wait, what? No you can't. Sure, maybe if they've been sighted, seen
>> a strawberry with their eyes (i.e. directly experienced redness
>> knowledge), then became blind. They will be able to kind of remember
>> what that redness was like, but they will no longer be able to
>> experience it.
>
> But how does the experience of redness in the sighted change the
> glutamate (or whatever representational "stuff" you hypothesize)
> versus the glutamate of the blind? Surely you can see my point that
> redness must be learned, and the brains of the color-learned are
> chemically indistinguishable from the brains of the blind. And if
> there were any representational "stuff", then it would lie in the
> difference between the brains of the sighted and the blind. I would
> posit that any such difference would lie in the neural wiring and
> synaptic weights, which would be chemically indistinguishable but
> structurally and functionally distinct.
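
A sketch of that last point, assuming (purely for illustration) that
"chemically identical" means the same neuron model, while experience
lives in the weights:

# Two networks built from identical "neurons" differ only in their
# synaptic weights, yet compute different functions of the same input.
import numpy as np

def neuron(x, w):
    return np.tanh(np.dot(w, x))   # the same component in both brains

stimulus = np.array([0.9, 0.1])    # e.g. input from an eye (or its absence)
w_sighted = np.array([2.0, -1.0])  # wiring shaped by visual experience
w_blind = np.array([0.0, 0.3])     # wiring shaped without it

print(neuron(stimulus, w_sighted)) # different responses from
print(neuron(stimulus, w_blind))   # chemically indistinguishable parts
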
>
>>>
>>>> The entirety of our objective knowledge tells us nothing of the
>>>> intrinsic qualities of any of that stuff we are describing.
>>>
>>> Ok, but you just said that redness was not an intrinsic quality of
>>> strawberries but of our knowledge of them, so our objective
>>> knowledge of them should be sufficient to describe redness.
>>
>> Sure, it is sufficient, but until you know which sufficient
>> description is a description of redness, and which sufficient
>> description is a description of greenness, we won't know which is
>> which.
>
> We don't need to know anything as long as we are constantly learning.
> If you woke up tomorrow and everything that was red looked green to
> you, at first you would be confused, but after a week you would adapt
> and be functionally equivalent to now. You might eventually even
> forget there ever was a switch.
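
A toy version of why the switch washes out functionally (the labels
and mappings here are my own illustrative assumptions):

# If every red/green signal is swapped at the input AND the learned
# name-mapping adapts to match, outward behavior is unchanged.
swap = {"red": "green", "green": "red", "blue": "blue"}
names_before = {"red": "red", "green": "green", "blue": "blue"}

# After a week of adaptation, the naming map is relearned so that it
# undoes the inversion; composed, the two maps are the identity.
names_after = {c: names_before[swap[c]] for c in swap}

for colour in ["red", "green", "blue"]:
    seen = swap[colour]            # the inverted percept
    said = names_after[seen]       # the relearned report
    print(colour, "->", said)      # matches pre-switch behavior
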
>
>>>
>>> So if this "stuff" is glutamate, glycine, or whatever, and it
>>> exists in the brains of blind people, then why can't it represent
>>> redness (or greenness) information to them also?
>>
>> People may be able to dream redness. Or they may take some
>> psychedelics that enable them to experience redness, or surgeons may
>> stimulate a part of the brain while doing brain surgery, producing a
>> redness experience. Those rare cases are possible, but that isn't
>> yet normal. Once they discover which of all our descriptions of
>> stuff in the brain is a description of redness, someone like
>> Neuralink will be producing that redness quality in blind people's
>> brains all the time, with artificial eyes, and so on. But to date,
>> normal blind people can't experience redness quality.
>
> Sure they can, even if it is just through a frequency of sound output
> by an Orcam MyEye. You learn what redness is by some manner of
> perception, and how you perceive it does not matter. Synesthetes
> might even be able to taste or smell redness.
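
A generic sketch of that kind of sensory substitution; the specific
hue-to-pitch mapping below is an invented example, not the Orcam
MyEye's actual encoding:

# Map a hue angle (degrees) to an audible pitch, so "redness" arrives
# as a learnable frequency rather than a visual quality. The mapping
# is arbitrary; what matters is that it is consistent enough to learn.
def hue_to_pitch(hue_deg, lo_hz=220.0, hi_hz=880.0):
    return lo_hz + (hue_deg % 360) / 360 * (hi_hz - lo_hz)

print(hue_to_pitch(0))      # "red" rendered as 220 Hz
print(hue_to_pitch(120))    # "green" rendered as 440 Hz
print(hue_to_pitch(240))    # "blue" rendered as 660 Hz
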
>
>>>
>>>>
>>>>>> This is true if that stuff is some kind of "material",
>>>>>> "electromagnetic field", "spiritual", or "functional" stuff; it
>>>>>> remains a fact that your knowledge, composed of that, has a
>>>>>> redness quality.
>>>>>
>>>>> It seems you are quite open-minded when it comes to what
>>>>> qualifies as "stuff". If so, then why does your 3-robot-scenario
>>>>> single out information as not being stuff? If you wish to insist
>>>>> that something physical in the brain has the redness quality and
>>>>> conveys knowledge of redness, then why glutamate? Why not instead
>>>>> hypothesize the one thing that prima facie has the redness
>>>>> property to begin with, i.e. red light? After all, there are
>>>>> photoreceptors in the deep brain.
>>>>
>>>> Any physical property like redness, greenness, +5 volts, holes in
>>>> a punch card... can represent (convey) an abstract 1. There must
>>>> be something physical representing that 1, but, again, you can't
>>>> know what that is unless you have a transducing dictionary telling
>>>> you which is which.
>>>
>>> You may need something physical to represent the abstract 1, but
>>> that abstract 1 in turn represents some different physical thing.
>>
>> Only if you have a transducing dictionary that enables such, or you
>> think of it in that particular way. Other than that, it's just a set
>> of physical facts, which can be interpreted as something else, that
>> is all.
>
> A transducing dictionary is not enough. Something has to read the
> dictionary, and all meaning is relative. If your wife is lost in the
> jungle, then her cries for help would mean something very different
> to you than they would to a hungry tiger. In communication, it takes
> meaning to understand meaning. The reason you can understand abstract
> information is because you yourself are abstract information.
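
A last sketch of the transducing-dictionary point: the same physical
fact decodes to different abstractions depending on which dictionary
the reader brings (the encodings are invented for illustration):

# The physical facts alone fix no interpretation; meaning depends on
# the dictionary doing the transducing.
physical_fact = "+5 volts"

positive_logic = {"+5 volts": 1, "0 volts": 0}
negative_logic = {"+5 volts": 0, "0 volts": 1}

print(positive_logic[physical_fact])  # reads the fact as an abstract 1
print(negative_logic[physical_fact])  # reads the same fact as a 0
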
>
> Stuart LaForge
-- 
Stathis Papaioannou