[ExI] Substitution argument was Re: Is Artificial Life Conscious?

Brent Allsop brent.allsop at gmail.com
Mon May 23 00:00:55 UTC 2022


First off, thank you Stuart, and everyone else who has indulged me so
extensively in conversations like this.
I know you guys are surely way smarter than I am on almost everything,
and it is very interesting to see how intelligent people think about this
kind of stuff.




On Sat, May 21, 2022 at 3:27 AM Stuart LaForge via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Quoting Brent Allsop:
>
>
> > No, only the popular consensus functionalists, led by Chalmers with
> > his derivative and mistaken "substitution argument" work, end up
> > thinking it is a hard problem, leading the whole world astray.
> > The hard problem would be solved by now, if it weren't for all that.
> > If you understand why the substitution argument is a mistaken
> > sleight of hand, that so-called "hard problem" goes away.  All the
> > stuff like "what is it like to be a bat," how to bridge the
> > explanatory gap, and the rest simply falls away, once you know the
> > colorness quality of something.
>
> I would probably be lumped in with the functionalists, since I think
> intelligence is a literal mathematical fitness function on tensors
> being optimized by gradient descent on their partial derivatives
> against environmental parameters. In the brain, these tensors
> represent the relative weights and biases of the neurons in the
> neural network. I am toying with calling these tensor functions SELFs,
> for scalable epistemic learning functions.
>
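For concreteness, here is a minimal sketch of the kind of optimization
being described: gradient descent nudging a weight tensor and a bias to
reduce a squared-error loss against environmental data. The linear model,
the loss, and every name below are illustrative assumptions, not anything
from the post.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # "environmental parameters"
    y = X @ np.array([1.5, -2.0, 0.5]) + 1.0   # the environment's actual response

    w = np.zeros(3)   # weight tensor (here just a vector)
    b = 0.0           # bias
    lr = 0.1
    for _ in range(500):
        err = X @ w + b - y              # prediction error against the environment
        w -= lr * (X.T @ err) / len(y)   # partial derivatives of the mean squared loss
        b -= lr * err.mean()
    # w converges toward [1.5, -2.0, 0.5] and b toward 1.0
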
> That being said, I have issues with the substitution argument. For one
> thing, the larger a network gets, the more information lies between
> nodes relative to information within nodes. That is to say that
> relationships between components increase in importance relative to
> the components themselves. In my theory, this is the essence of
> emergence.
>
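A quick back-of-the-envelope illustration of that scaling claim, under
the simplifying assumption that "information between nodes" grows with
the number of potential pairwise connections:

    # nodes grow linearly; potential pairwise links grow quadratically
    for n in (10, 100, 1000, 10000):
        pairs = n * (n - 1) // 2
        print(f"{n:>6} nodes  {pairs:>9} potential links  ratio {pairs / n:.1f}")
    # the links-per-node ratio grows without bound, so relational structure
    # comes to dominate whatever is stored within the nodes themselves
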
> It might intuitively aid the understanding of my argument to examine a
> higher order network. The substitution argument suggests that a small
> part of my brain could be replaced by a functionally identical
> artificial part, and I would not be able to tell the difference. The
> problem with this argument is that the function of any neuron or
> neural circuit of the brain is not determined solely by the properties
> of the neuron or neural circuit, but by its holistic relationship with
> all the other neurons it is connected to. So not only could an
> artificial neuron not be an "indistinguishable substitute" for the
> native neuron, but even another identical biological neuron would not
> be a sufficient replacement unless it was somehow grown or developed
> in the context of a brain identical to yours.
>
> It might be more intuitively obvious to consider your family, than a
> brain. If you were instantaneously replaced with a clone of yourself,
> even if that clone had been trained in your memories up until let's
> say last month, your family would notice some pretty jarring
> differences between you and your clone. Those problems could
> eventually go away as your family adapted to your clone, and your
> clone adapted to your family, but the actual replacement itself would
> be obvious to your family when it occurred.
>
> Similarly, an artificial replacement neuron/neural circuit (or even a
> biological one) would have to undergo "on the job training" to
> sufficiently substitute for the component it was replacing. And if the
> circuit was extensive enough, you and the people around you would
> notice a difference.
>

I read this before I read Stathis's response to it, and I fully expected
him to jump in and reply exactly as he did.  I agree with what Stathis
says in response; he says it far better than I could have.



> > And I don't really know much about the problem of universals.  I
> > just know that we live in a world full of LOTS of colorful things,
> > yet all we know are the colors things seem to be.  Nobody yet knows
> > the true intrinsic colorness quality of anything.  The emerging
> > consensus Representational Qualia Theory, and the supporters of all
> > its sub camps, are predicting that once we discover which of all our
> > descriptions of stuff in the brain is a description of redness, this
> > will falsify all but THE ONE camp finally demonstrated to be true.
> > All the supporters of all the falsified camps will then be seen
> > jumping to this one yet-to-be-falsified camp.  We are tracking all
> > this in real time, and already seeing significant progress.  In
> > other words, there will be irrefutable consensus proof that the
> > 'hard problem' has finally been resolved.  I predict this will
> > happen within 10 years.  Anyone care to make a bet that THE ONE
> > camp will have over 90% "Mind Expert consensus", with more than
> > 1000 experts in total participating, within 10 years?
>
> Consensus is simply consensus; it is not proof. The majority, and even
> the totality, have been wrong about a great many things over the
> long span of history.
>

Oh, of course.  I just enjoy knowing and tracking as much of this as
possible, and I believe that which you measure improves.


>
> >>
> >>> First, we must recognize that redness is not an intrinsic quality of
> >>> the strawberry; it is a quality of our knowledge of the strawberry in
> >>> our brain.  This must be true, since we can invert our knowledge by
> >>> simply inverting any transducing system anywhere in the perception
> >>> process.  If we have knowledge of a strawberry that has a redness
> >>> quality, and if we objectively observed this redness in someone
> >>> else's brain and fully described that redness, would that tell us the
> >>> quality we are describing?  No, for the same reason you can't
> >>> communicate to a blind person what redness is like.
> >>
> >> Why not? If redness is not intrinsic to the strawberry but is instead
> >> a quality of our knowledge of the strawberry, then why can't we
> >> explain to a blind person what redness is like? Blind people have
> >> knowledge of strawberries and plenty of glutamate in their brains.
> >> Just tell them that redness is what strawberries are like, and they
> >> will understand you just fine.
> >
> > Wait, what?  No, you can't.  Sure, maybe if they were once sighted,
> > saw a strawberry with their eyes (i.e. directly experienced redness
> > knowledge), then became blind.  They will be able to kind of
> > remember what that redness was like, but they will no longer be able
> > to experience it.
>
> But how does the experience of redness in the sighted change the
> glutamate (or whatever representational "stuff" you hypothesize)
> versus the glutamate of the blind? Surely you can see my point that
> redness must be learned, and the brains of the color-learned are
> chemically indistinguishable from the brains of the blind. And if
> there were any representational "stuff", then it would lie in the
> difference between the brains of the sighted and the blind. I would
> posit that any such difference would lie in the neural wiring and
> synaptic weights which would be chemically indistinguishable but
> structurally and functionally distinct.
>

I don't think of it this way.  One thing is for sure: there is a 3D model
of visual reality somewhere in your brain.  It is this model that has the
colorness qualities, and it must be this model that your brain does all
the intelligent thinking and recognition with, post-perception, not the
useless raw data coming from the senses.  It is the perception system
that has all the "neural wiring", and the networks that do the
recognition on the rendered models.  But I would seriously doubt there is
any "neural wiring or synaptic weights" in the actual representations
with colorness qualities themselves.

Teslas weren't able to be very intelligent until they also started
rendering 3D model (including time) representations, then having all the
intelligence work on these (abstract) rendered 3D models.  Our perception
system renders this 3D model, with non-abstract colorness qualities, into
your consciousness.  Sure, you learn to use glutamate (or something) that
has the redness quality to represent a point on this model that has a
redness quality.  Your consciousness doesn't change the glutamate; your
brain just uses glutamate when it wants to render a 3D voxel of knowledge
with a redness quality into consciousness.

Let me say it this way: YOU cannot experience redness with your eyes
closed.  Your memory of recalled redness (which is not glutamate) is far
fainter than what you experience when looking at something red.  And if
not for your memory of glutamate, YOU would not be able to find out what
redness was like with your eyes closed, under normal operation.


> >>
> >>> The entirety of our objective knowledge tells us nothing
> >>> of the intrinsic qualities of any of that stuff we are describing.
> >>
> >> Ok, but you just said that redness was not an intrinsic quality of
> >> strawberries but of our knowledge of them, so our objective knowledge
> >> of them should be sufficient to describe redness.
> >
> > Sure, it is sufficient, but until you know which sufficient
> > description is a description of redness, and which is a description
> > of greenness, we won't know which is which.
>
> We don't need to know anything as long as we are constantly learning.
> If you woke up tomorrow and everything that was red looked green to
> you, at first you would be confused, but after a week you would
> adapt and be functionally equivalent to now. You might eventually
> even forget there ever was a switch.
>

Right, but this doesn't change the quality your brain uses to paint your
conscious knowledge of red things.  And I predict that once we know what
it is in the brain that represents that redness quality, it will be
objectively observable.  In other words, any of the changes you are
describing (i.e. changing a brain to use greenness, instead of redness,
to represent red things) will be objectively observable.  We'll be able
to objectively observe when your greenness has changed to redness, and
also whether I use your greenness to represent red, and so on.



> >>
> >> So if this "stuff" is glutamate, glycine, or whatever, and it exists
> >> in the brains of blind people, then why can't it represent redness (or
> >> greenness) information to them also?
> >
> > People may be able to dream redness.  Or they may take some
> > psychedelics that enable them to experience redness, or surgeons
> > may stimulate a part of the brain while doing brain surgery,
> > producing a redness experience.  Those rare cases are possible, but
> > that isn't yet normal.  Once they discover which of all our
> > descriptions of stuff in the brain is a description of redness,
> > someone like Neuralink will be producing that redness quality in
> > blind people's brains all the time, with artificial eyes, and so
> > on.  But to date, normal blind people can't experience the redness
> > quality.
>
> Sure they can, even if it is just through a frequency of sound output
> by an Orcam MyEye. You learn what redness is by some manner of
> perception, and how you perceive it does not matter. Synesthetes might
> even be able to taste or smell redness.
>

Yes, of course. As I was saying, in special cases like synesthesia, drug
inducement, the correct Neuralink stimulation, and such, where the brain
is induced into rendering glutamate knowledge into your consciousness, you
will experience redness.  But, again, you won't be able to experience
redness when your eyes are closed, without something additional like this.


> >>
> >>>
> >>>
> >>>>> This is true whether that stuff is some kind of "material",
> >>>>> "electromagnetic field", "spiritual", or "functional" stuff; it
> >>>>> remains a fact that your knowledge, composed of that, has a
> >>>>> redness quality.
> >>>>
> >>>> It seems you are quite open-minded when it comes to what qualifies
> >>>> as "stuff". If so, then why does your 3-robot-scenario single out
> >>>> information as not being stuff? If you wish to insist that something
> >>>> physical in the brain has the redness quality and conveys knowledge
> >>>> of redness, then why glutamate? Why not instead hypothesize the one
> >>>> thing that prima facie has the redness property to begin with, i.e.
> >>>> red light? After all, there are photoreceptors in the deep brain.
> >>>>
> >>>
> >>> Any physical property like redness, greenness, +5 volts, holes in
> >>> a punch card... can represent (convey) an abstract 1.  There must be
> >>> something physical representing that one, but, again, you can't know
> >>> what that is unless you have a transducing dictionary telling you
> >>> which is which.
> >>
> >> You may need something physical to represent the abstract 1, but that
> >> abstract 1 in turn represents some different physical thing.
> >
> > Only if you have a transducing dictionary that enables such, or you
> > think of it in that particular way.  Other than that, it's just a
> > set of physical facts, which can be interpreted as something else,
> > that is all.
>
> A transducing dictionary is not enough. Something has to read the
> dictionary, and all meaning is relative. If your wife is lost in the
> jungle, then her cries for help would mean something very different to
> you than they would to a hungry tiger. In communication, it takes
> meaning to understand meaning.  The reason you can understand abstract
> information is because you yourself are abstract information.
>

All true, yes.  But you're getting away from the fact that a Tesla
represents red knowledge with an abstract word like "red", and a
dictionary is required to know what that means, while you represent red
knowledge with something that has a redness quality.  That quality is
your definition of the word "red".  My prediction is that it is something
like glutamate that has the redness quality.  When we describe glutamate
reacting in a synapse, we are describing what you can directly apprehend
as redness conscious knowledge.
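
To make that contrast concrete, here is a toy sketch: two systems
represent the same referent with different physical tokens, and a
transducing dictionary is needed to interpret one in terms of the other.
Every name below is an illustrative assumption, not a claim about how
Teslas or brains actually encode anything.

    # each system's token for the same external referent
    tesla_repr = {"strawberry": "red"}        # abstract word; meaningless without a dictionary
    brain_repr = {"strawberry": "glutamate"}  # hypothesized token with an intrinsic quality

    # transducing dictionary: maps one system's tokens onto the other's
    transduce = {"red": "glutamate"}

    token = tesla_repr["strawberry"]
    assert transduce[token] == brain_repr["strawberry"]  # 'red' grounds out in 'glutamate'
    # without the dictionary, 'red' is just an uninterpreted string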