[ExI] Fwd: Fwd: Is Artificial Life Conscious?

Jason Resch jasonresch at gmail.com
Mon Apr 25 23:51:49 UTC 2022


On Mon, Apr 25, 2022 at 5:23 PM Brent Allsop <brent.allsop at gmail.com> wrote:

>
> Hi Jason,
> On Mon, Apr 25, 2022 at 2:18 PM Jason Resch <jasonresch at gmail.com> wrote:
>
>> Hi Brent,
>>
>> I appreciate your quick response and for getting to the heart of the
>> issue. My replies are in-line below:
>>
> Likewise.
>
> On Mon, Apr 25, 2022 at 1:43 PM Brent Allsop <brent.allsop at gmail.com>
>> wrote:
>>
>>>
>>> Hi Jason,
>>> Yes, Stathis and I have gone over these same arguments, in a gazillion
>>> different ways, for years, still unable to convince the other.  I agree
>>> with most everything you say, but it is all entirely missing the point.
>>>
>>> I think you get to the core of the issue with your:
>>> "First, I would like you to deeply consider for a moment the question
>>> 'What is matter?'"
>>>
>>> I am curious what your intuition says on this? Do you think that there
>> are intrinsic properties of matter (beyond its third-person observable
>> behavior) which is somehow necessary for consciousness or quale such as red?
>>
>
> There seems to be a physical strawberry out there, which is red.  Our
> intuition about the quality of that physical thing is right, just for the
> wrong stuff.  It is your physical knowledge of the strawberry that has the
> redness quality.
>

What do you think is the embodiment of this physical knowledge? I don't
think it is in the strawberry, for you can have a dream about a red
strawberry without there being any strawberry. Or a TV can project photons
to your retina without there being a strawberry, or a neurosurgeon can
apply electrical stimulation to parts of your brain to make you see a red
strawberry without there being one. So at what point and where does this
physical knowledge come into play? Are you considering that it exists in
chemicals within the brain itself?


>
>
>>  The issue is with one of these assumptions:
>>
>>> "1. Given the Church-Turing Thesis, any finitely describable process
>>> can be perfectly replicated by an appropriately programmed Turing Machine
>>> "
>>>
>>> The issue is that any description of redness (our claim that something is
>>> redness) tells you nothing of the nature of redness, without a dictionary
>>> pointing to an example of redness.
>>>
>>
>> Yes, this is the "symbol grounding problem". All communication, of
>> anything (even so-called objective properties like mass, distance, time
>> durations, etc.), requires ostensive (pointing-to) definitions. Since no two
>> minds can ever share a common reference frame and point out the same
>> quale, ostensive definitions of these qualia, and hence meaningful
>> communication concerning them, are impossible (since there can never be a
>> verifiable common foundation).
>>
>
> It's only a "problem" for functionalists.  For Materialists it is just a
> physical fact that something in the brain has that quality, waiting for us
> to discover it.
> Say we discover it is glutamate, and that no matter how hard a
> functionalist tries, they can't reproduce a redness experience, without
> glutamate, as Materialism predicts.
>
> What does that say about your non-falsifiable proof?
>

A functionally equivalent computation of a brain's neural network (when the
simulating computer does not contain glutamate) will nonetheless result in
that person reporting that they see red. Since their functions were
replicated, all outwardly observable behaviors will, by definition, be
identical. So how could it ever be discovered that the redness quality is no
longer there? Even if it were absent, and they could consciously notice
this (though even this, I think, gets into dubiously inconsistent
territory), they could exhibit no outward signs that they were
experiencing things any differently. You need to be an epiphenomenalist to
even accept the plausibility of this scenario (where one does not see red,
but reports that they do and that nothing has changed).


>
>
>>
>>
>>> This is true for the same reason you can't communicate to a blind person
>>> what redness is like, no matter how many words you use.
>>>
>>> Stathis always makes this same claim:
>>>
>>> "It is true that functionalism cannot be falsified. But not being
>>> falsifiable is a property of every true theory."
>>>
>>> no matter how many times I point out that if that is true, no matter
>>> what you say redness is, it can't be that, either, because you can use the
>>> same zombie or neural substitution argument and claim it can't be that
>>> either.
>>>
>>
>> I don't follow this point, could you elaborate?
>>
>>
>>> All you prove is that qualia aren't possible.
>>>
>>
>> I do not follow how this conclusion was reached.
>>
>
> Yea, I possibly just skipped past a few complex years of discussion with
> Stathis.  Basically, no matter what you say redness is (even if it results
> from some function), you can "prove" with the neural substitution argument
> <https://canonizer.com/topic/79-Neural-Substitn-Argument/1-Agreement>
> that it can't be that, either.
>

I am still not seeing how you are arriving at this conclusion. A neural
substitution preserves the abstract functional relations and properties, so
substituting one type of neuron for another doesn't change the function
that is implemented.
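To make this notion of substitution concrete, here is a toy sketch (the class names are hypothetical and nothing here models real neurons): two components with entirely different internals compute the same function, so swapping one for the other leaves all observable behavior unchanged.

```python
# Toy illustration of neural substitution: two "neuron" implementations
# with different internals but identical input-output behavior.
# All names here are hypothetical; this is a sketch, not a brain model.
import math

class BiologicalNeuron:
    """Stand-in for the original component: a logistic activation."""
    def fire(self, inputs, weights):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1.0 / (1.0 + math.exp(-total))

class SiliconNeuron:
    """Functional replacement: different code path, same function."""
    def fire(self, inputs, weights):
        total = 0.0
        for i, w in zip(inputs, weights):
            total += i * w
        # Algebraically equivalent form of the logistic function
        return math.exp(total) / (math.exp(total) + 1.0)

inputs, weights = [0.5, -1.2, 0.3], [0.8, 0.4, -0.6]
a = BiologicalNeuron().fire(inputs, weights)
b = SiliconNeuron().fire(inputs, weights)
assert abs(a - b) < 1e-12  # the substitution is behaviorally undetectable
```

The point of the sketch is only that function is defined by the input-output mapping, not by what implements it; that is the premise the substitution argument turns on.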


> Your zombie arguments seem the same, to me.  It doesn't prove that redness
> must be "functional"; it proves there can be no redness of any kind.  Let me
> know how your zombie argument doesn't have the same problem.
>

A functionalist would say redness exists as a property inherent to
particular ways of processing information, as implemented by certain
algorithms or functions. How does this prove there can be no redness of any
kind? I think there is something one, or both, of us may be missing here,
as we seem to be talking past each other on this point without any
communication or understanding occurring.


>
> Again, it's all about the assumptions you make.  Everyone assumes the
> simulation will succeed.  Materialists simply predict it will fail, and
> that whatever it is that has the redness quality, when you get to the first
> pixel of redness, nothing but glutamate will enable you to produce an
> experience with a redness quality.  The substitution will fail.
>

How do you envision this failure manifesting? Do you agree with Searle when
he says:

“as the silicon is progressively implanted into your dwindling brain, you
find that the area of your conscious experience is shrinking, but that this
shows no effect on your external behavior. You find, to your total
amazement, that you are indeed losing control of your external behavior.
You find, for example, that when the doctors test your vision, you hear
them say, "We are holding up a red object in front of you; please tell us
what you see." You want to cry out, "I can't see anything. I'm going
totally blind." But you hear your voice saying in a way that is completely
out of your control, "I see a red object in front of me."”


Or something like that?



>
>>>   And since we know, absolutely, it is a physical fact that I can
>>> experience redness,
>>
>> What does "physical" add to the above sentence? To me it seems redundant
>> and only adds to the confusion (as we still haven't settled what is meant
>> by physics or matter).
>>
>
> Yea 'physical' is probably redundant.
>
>>
>>
>>> this just proves your assumptions (about the nature of matter) are
>>> incorrect.
>>>
>>
>> I don't see why you think the assumption of functionalism leads to a
>> denial of qualia/consciousness.
>>
> Again, it is the neural substitution argument which makes people think
> things like redness are functional.  But the neural substitution argument
> proves that nothing, including any function, can have a redness quality.
> And my understanding is the main reason people think they are forced to
> accept functionalism (despite all the 'hard' problems that go along with
> it) is because of neural substitution and zombie arguments.
>

Did you see an error in my six-step proof that if zombies are impossible,
functional equivalence must preserve consciousness? If so, it wasn't clear
to me which step or assumption you believe contains the error.

What do you see as the "hard problems" of functionalism?



>
>>
>>>   To say nothing about all the other so-called 'hard problems' that
>>> emerge with that set of assumptions.
>>>
>>> We can abstractly describe and predict how matter "whatever it is" will
>>> behave.  But when it comes to intrinsic colorness qualities or qualia, like
>>> redness and greenness, you've got to point to some physical example of
>>> something that has that redness quality.  And without that, there is no
>>> possible way to define the word "redness", let alone experience redness.
>>>
>>
>> A shared physical realm is necessary to ostensively define properties
>> like mass, distance, and time durations. Two beings, kept apart in two
>> different universes but allowed to communicate bit strings back and forth
>> could never reach any agreement on how long a "meter" is.
>>
>> This is the situation we are in with qualia. Two minds are in a sense,
>> like two partially isolated simulated universes, with an inability to ever
>> share the meaning of what they mean when they refer to their red
>> experiences, short of an Avatar-like neural link to temporarily bridge
>> their two independent and isolated mental realities.
>>
>
> I thought we already went over this.  Brain hemispheres and conjoined
> twins
> <https://www.cbc.ca/cbcdocspov/features/the-hogan-twins-share-a-brain-and-see-out-of-each-others-eyes>
> prove what you think cannot be done can be done.
> If a brain hemisphere isn't an island, why would a brain be so
> constrained?  It's kind of like saying we will never fly, while watching
> birds fly.  It is only a matter of time before we can do all of the
> following engineering in an artificial way.
>
> 1. Weak form of effing the ineffable.
> 2. Stronger form of effing the ineffable.
> 3. Strongest form of effing the ineffable.
>
> For a more detailed description of these, see this quora answer
> <https://www.quora.com/How-can-we-prove-consciousness-in-the-universe/answer/Brent-Allsop-1>
> .
>

I am not sure what you think I am saying cannot be done. I believe minds
can be merged with sufficient technology. My only point is that after a
separation, one can no longer be certain that their memories of qualia
while joined have not somehow changed after the split. For what it's worth,
we can't be certain the red we experience today wasn't experienced as green
yesterday. Although I am of the opinion that for such a change to occur
would require an in-principle third-person-detectable reorganization of the
processing done by someone's brain between those two days.


>
> Take the 16th color of the knowledge of that shrimp, which no human has
> ever experienced, which you mentioned.
> How are you going to reproduce that in your brain, so you can both know
> what it is like
>

It seems mammalian brains are sufficiently flexible to learn to interpret
and process new sensory input after a few weeks of time to adjust to the
new signal, as found in this experiment:
https://www.sciencedaily.com/releases/2009/09/090916133521.htm


> and then can use it to represent an additional wavelength of sensed light?
> You just need to take whatever it is, and computationally bind it into
> your consciousness.  Nothing hard about that.
> Claiming that could be duplicated simply by programming some function
> called "16th colorness quality" doesn't even pass the laugh test, does it?
>

When one is talking about functions that, within our own brains,
involve billions of neurons and hundreds of trillions of synapses, it is
hard to say what the outcome may be. I doubt that such a function
implementing color vision could be implemented simply. This was one of my
motivations for writing the artificial life software: to try to ascertain
the bare minimum computations/processes necessary for the barest
levels of sentience. Though these bots have just 16 artificial neurons,
they learn to:

   - Spin, searching for food
   - Stop to eat food and pass over poison
   - Follow food on the move
   - Travel at high speed in a straight line
   - Slow down when food is encountered
   - Travel with antenna to the side, stopping to eat
   - Flinch antenna on contact with poison
   - Turn slightly to sweep a larger area
   - Turn and spread out antenna while eating
   - Try to speed through poison
   - Stay still, hiding from poison
   - Move out of the way when poison is near
   - Flinch, wiggle, and run on contact with poison

You can observe all these behaviors evolve in real time over just 20 some
minutes here:
https://www.youtube.com/watch?v=InBsqlWQTts&list=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX&index=1
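The shape of that evolutionary setup can be sketched in miniature. This is a generic neuroevolution loop under invented parameters, not the actual bot software from the video: each bot is a small weight vector, a toy fitness function rewards turning toward food, and each generation keeps the fitter half of the population and mutates it.

```python
# Minimal neuroevolution sketch: a population of "bots", each controlled
# by a tiny weight vector, is mutated and selected by a toy foraging score.
# Purely illustrative; this is not the software shown in the linked video,
# and every parameter here is invented for the example.
import random

random.seed(0)

POP_SIZE = 30
GENERATIONS = 40
N_WEIGHTS = 3  # tiny controller: two sensor weights plus a bias

# Fixed sensor samples so fitness is deterministic across calls.
SAMPLES = [(random.random(), random.random()) for _ in range(50)]

def fitness(weights):
    # Toy task: the bot senses food as (left_signal, right_signal) and
    # outputs a turn value; it scores well by turning toward the
    # stronger signal.
    score = 0.0
    for left, right in SAMPLES:
        turn = weights[0] * left + weights[1] * right + weights[2]
        score += turn if right > left else -turn
    return score

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [[w + random.gauss(0, 0.1) for w in parent]
                for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print("best foraging score:", round(fitness(best), 2))
```

Even a loop this small exhibits the core dynamic: undirected mutation plus selection is enough for food-seeking behavior to emerge, with no behavior explicitly programmed in.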


>
> We may not know what matter is, but we know, absolutely, that something
> has a redness quality.
>

Indeed. I agree. I was just reading this article today which makes a
similar point: https://archive.ph/RVY0F (
https://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html
)


> We just don't yet know what.  That's the only problem.
>
>
I like to think of it like trying to figure out how a word processor
works when we only have two views of it: what we see on the screen, with
the buttons, cursor, and menus, and the computer hardware itself, with
its billions of transistors tossing electrons back and forth. From either
vantage point, it is an utter and incomprehensible mystery how one view has
anything to do with the other view. It's only in the intermediate view, the
software modules, the libraries, the functional description, the high-level
programming language specifying everything, that our minds have any hope of
understanding things.

Today we see the conscious experience, and we see the neurons under a
microscope. What we're missing is the intermediate, high-level descriptions
of the types of processing done by larger collections of neurons, brain
regions, etc. I think these steps are necessary if we are to make any
headway into understanding what red is. I tend to doubt low-level
neurotransmitters have any important role, for though red seems simple, we
have no reason to assume it must be simple. No more than we should assume
the spell-checker is simple because it is represented by a single button in
the word processor UI.

“Red the colour of blood
the symbol of life
Red the colour of danger
the symbol of death


Red the colour of roses
the symbol of beauty
Red the colour of lovers
the symbol of unity


Red the colour of tomato
the symbol of good health
Red the colour of hot fire
the symbol of burning desire”
-- Oluseyi Oluseun


If you saw "red" the way I see "blue", would the above poem still make as
much sense? If not, this suggests the experience of red is not a "raw
perception" but contains many deep associations and connections to other
aspects of our mind. Even pain, which feels so singular, can be
decomposed via brain surgery. Take this example:

Paul Brand, a surgeon and author on the subject of pain, recounted the case
of a woman who had suffered with a severe and chronic pain for more than a
decade. She agreed to a surgery that would separate the neural pathways
between her frontal lobes and the rest of her brain. By all accounts the
surgery was a success. Brand visited the woman a year later and inquired
about her pain. She said, “Oh, yes, it’s still there. I just don't worry
about it anymore.” While smiling she added, “In fact, it's still agonizing.
But I don't mind.”


Jason