[ExI] Is Artificial Life Conscious?

Adrian Tymes atymes at gmail.com
Mon May 9 23:08:29 UTC 2022


You could have said it much better.  I said that your argument doesn't make
logical sense.

Your response, in fact, did say it better.  However, you are asserting as
true things that Stathis is asserting as false.

On Mon, May 9, 2022 at 3:58 PM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Exactly, I couldn't have said it better myself.
> Those aren't reasons why redness isn't substrate dependent.
> If you ask the system "How do you do your sorting?", one system must be
> able to say "bubble sort" and the other must be able to say "quick sort",
> just the same as if you asked "What is redness like for you?": one would
> say your redness, the other would say your greenness.
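>
> (A minimal sketch of this point in Python; the Sorter class and its method
> names are invented for illustration, not anyone's actual model: two systems
> with identical input-output behaviour can still give different answers when
> asked how they do it.)
>
>     def bubble_sort(xs):
>         # Repeatedly swap adjacent out-of-order pairs.
>         xs = list(xs)
>         for i in range(len(xs)):
>             for j in range(len(xs) - 1 - i):
>                 if xs[j] > xs[j + 1]:
>                     xs[j], xs[j + 1] = xs[j + 1], xs[j]
>         return xs
>
>     def quick_sort(xs):
>         # Partition around a pivot, then sort each side.
>         if len(xs) <= 1:
>             return list(xs)
>         pivot, rest = xs[0], xs[1:]
>         return (quick_sort([x for x in rest if x < pivot]) + [pivot]
>                 + quick_sort([x for x in rest if x >= pivot]))
>
>     class Sorter:
>         # Same external sorting behaviour, different internal method.
>         def __init__(self, fn, name):
>             self.fn, self.name = fn, name
>         def sort(self, xs):
>             return self.fn(xs)
>         def how_do_you_sort(self):
>             return self.name
>
>     a = Sorter(bubble_sort, "bubble sort")
>     b = Sorter(quick_sort, "quick sort")
>     assert a.sort([3, 1, 2]) == b.sort([3, 1, 2])         # identical behaviour
>     print(a.how_do_you_sort(), "/", b.how_do_you_sort())  # different reports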
>
>
>
> On Mon, May 9, 2022 at 4:51 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> Those are not reasons why redness can't supervene.
>>
>> On Mon, May 9, 2022 at 3:42 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> OK, let me explain in more detail.
>>> Redness can't supervene on a particular function, because you can
>>> substitute that function (say bubble sort) with some other function
>>> (quick sort) that behaves the same.
>>> Nor can redness supervene on a function by your own argument: if you
>>> "replace a part (or function) of the brain with a black box that affects
>>> the rest of the brain in the same way as the original, the subject must
>>> behave the same".
>>> So redness can't supervene on any function.
>>>
>>> On Mon, May 9, 2022 at 4:03 PM Stathis Papaioannou via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 10 May 2022 at 07:55, Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>> Hi Stathis,
>>>>>
>>>>> OK, let me try saying it this way.
>>>>> You use the neural substitution argument to "prove" redness cannot be
>>>>> substrate dependent.
>>>>> Then you conclude that redness "supervenes" on some function.
>>>>> The problem is, you can prove that redness can't "supervene" on any
>>>>> function, via the same neural substitution proof.
>>>>>
>>>>
>>>> It supervenes on any substrate that preserves the redness behaviour. In
>>>> other words, if you replace a part of the brain with a black box that
>>>> affects the rest of the brain in the same way as the original, the subject
>>>> must behave the same and must have the same qualia. It doesn’t matter
>>>> what’s in the black box.
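>>>>
>>>> (A minimal sketch of the black-box claim, in Python; the function names
>>>> are invented for illustration: any replacement that preserves the part's
>>>> input-output profile leaves the rest of the system's behaviour unchanged.)
>>>>
>>>>     def original_part(signal):
>>>>         # The original component: some fixed input-output mapping.
>>>>         return signal * 2
>>>>
>>>>     def black_box(signal):
>>>>         # A different implementation with the same input-output mapping.
>>>>         return signal + signal
>>>>
>>>>     def rest_of_brain(part, stimulus):
>>>>         # The rest of the system only ever sees the part's outputs.
>>>>         return "says red" if part(stimulus) > 10 else "says green"
>>>>
>>>>     for s in range(20):
>>>>         assert rest_of_brain(original_part, s) == rest_of_brain(black_box, s)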
>>>>
>>>>
>>>>
>>>>> On Thu, May 5, 2022 at 6:02 PM Stathis Papaioannou via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, 6 May 2022 at 07:47, Brent Allsop via extropy-chat <
>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>
>>>>>>>
>>>>>>> Hi Stathis,
>>>>>>> On Thu, May 5, 2022 at 1:00 PM Stathis Papaioannou via extropy-chat <
>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>
>>>>>>>> On Fri, 6 May 2022 at 02:36, Brent Allsop via extropy-chat <
>>>>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>>>>
>>>>>>>>> On Wed, May 4, 2022 at 6:49 PM Stathis Papaioannou via
>>>>>>>>> extropy-chat <extropy-chat at lists.extropy.org> wrote:
>>>>>>>>>
>>>>>>>>>> I think colourness qualities are what the human behaviour
>>>>>>>>>> associated with distinguishing between colours, describing them, and reacting
>>>>>>>>>> to them emotionally looks like from inside the system. If you make a
>>>>>>>>>> physical change to the system and perfectly reproduce this behaviour, you
>>>>>>>>>> will also necessarily perfectly reproduce the colourness qualities.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> An abstract description of the behavior of redness can perfectly
>>>>>>>>> capture 100% of the behavior, modeled one to one, isomorphically.
>>>>>>>>> Are you saying that since you abstractly reproduce 100% of the
>>>>>>>>> behavior, you have duplicated the quale?
>>>>>>>>>
>>>>>>>>
>>>>>>>> Here is where I end up misquoting you because I don’t understand
>>>>>>>> what exactly you mean by “abstract description”.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> [image: 3_robots_tiny.png]
>>>>>>>
>>>>>>> This is the best possible illustration of what I mean by abstract vs
>>>>>>> intrinsic physical qualities.
>>>>>>> The first two represent knowledge with two different intrinsic
>>>>>>> physical qualities, redness and greenness.
>>>>>>> "Red" is just an abstract word, composed of strings of ones and
>>>>>>> zeros.  You can't know what it means, by design, without a dictionary.
>>>>>>> The redness intrinsic quality your brain uses to represent knowledge
>>>>>>> of 'red' things is your definition for the abstract word 'red'.
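>>>>>>>
>>>>>>> (A toy sketch of the dictionary point, in Python; the RGB values are
>>>>>>> stand-ins, not a claim about brains: the bit string for "red" carries
>>>>>>> no colour by itself, and only a mapping gives it a referent.)
>>>>>>>
>>>>>>>     word = "red"
>>>>>>>     bits = " ".join(f"{b:08b}" for b in word.encode("ascii"))
>>>>>>>     print(bits)  # 01110010 01100101 01100100 -- just ones and zeros
>>>>>>>
>>>>>>>     # Nothing in those bits is red; a dictionary supplies the referent.
>>>>>>>     dictionary = {"red": (255, 0, 0), "green": (0, 255, 0)}
>>>>>>>     print(dictionary[word])  # the concrete value 'red' stands for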
>>>>>>>
>>>>>>>
>>>>>>>> But the specific example I have used is that if you perfectly
>>>>>>>> reproduce the physical effect of glutamate on the rest of the brain using a
>>>>>>>> different substrate, and glutamate is involved in redness qualia, then you
>>>>>>>> necessarily also reproduce the redness qualia. This is because if it were
>>>>>>>> not so, it would be possible to grossly change the qualia without the
>>>>>>>> subject noticing any change, which is absurd.
>>>>>>>>
>>>>>>>
>>>>>>> I think the confusion comes in the different ways we think about
>>>>>>> this:
>>>>>>>
>>>>>>> "the physical effect of glutamate on the rest of the brain using a
>>>>>>> different substrate"
>>>>>>>
>>>>>>> Everything in your model seems to be based on this kind of "cause
>>>>>>> and effect" or "interpretations of interpretations".  I think of things in
>>>>>>> a different way.
>>>>>>> I would imagine you would say that the causal properties of
>>>>>>> glutamate or redness would result in someone saying: "That is red."
>>>>>>> However, to me, the redness quality, alone, isn't the cause of
>>>>>>> someone saying: "That is Red", as someone could lie and say: "That is
>>>>>>> Green", proving the redness isn't the only cause of what the person is
>>>>>>> saying.
>>>>>>>
>>>>>>
>>>>>> The causal properties of the glutamate are basically the properties
>>>>>> that cause motion in other parts of the system. Consider a glutamate
>>>>>> molecule as a part of a clockwork mechanism. If you remove the glutamate
>>>>>> molecule, you will disrupt the movement of the entire clockwork mechanism.
>>>>>> But if you replace it with a different molecule that has similar physical
>>>>>> properties, the rest of the clockwork mechanism will continue functioning
>>>>>> the same. Not all of the physical properties are relevant, and they only
>>>>>> have to be replicated to within a certain tolerance.
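>>>>>>
>>>>>> (A minimal sketch of the tolerance idea, in Python; the property names,
>>>>>> values, and tolerances are invented for illustration.)
>>>>>>
>>>>>>     RELEVANT = {"binding_affinity": 0.05, "charge": 0.01}  # property -> tolerance
>>>>>>
>>>>>>     glutamate  = {"binding_affinity": 1.00, "charge": -1.0, "mass": 147.0}
>>>>>>     substitute = {"binding_affinity": 1.02, "charge": -1.0, "mass": 312.0}
>>>>>>
>>>>>>     def acceptable(original, replacement):
>>>>>>         # Only the relevant properties must match, and only within
>>>>>>         # tolerance; irrelevant ones (mass here) may differ freely.
>>>>>>         return all(abs(original[p] - replacement[p]) <= tol
>>>>>>                    for p, tol in RELEVANT.items())
>>>>>>
>>>>>>     print(acceptable(glutamate, substitute))  # True: the clockwork keeps turning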
>>>>>>
>>>>>>> The computational system, and the way the knowledge is
>>>>>>> consciously represented, is different from simple cause and effect.
>>>>>>> The entire system is aware of all of the intrinsic qualities of each
>>>>>>> of the pixels on the surface of the strawberry (along with any reasoning
>>>>>>> for why it would lie or not).
>>>>>>> And it is this composite awareness that is the cause of
>>>>>>> the system choosing to say: "that is red", or choosing to lie in some way.
>>>>>>> It is the entire composite 'free will system' that is the initial
>>>>>>> cause of someone choosing to say something, not any single quality like the
>>>>>>> redness of a single pixel.
>>>>>>> For you, everything is just a chain of causes and effects, with no
>>>>>>> composite awareness and no composite free will system involved.
>>>>>>>
>>>>>>
>>>>>> I am proposing that the awareness of the system be completely
>>>>>> ignored, and only the relevant physical properties be replicated. If this
>>>>>> is done, then whether you like it or not, the awareness of the system will
>>>>>> also be replicated. It’s impossible to do one without the other.
>>>>>>
>>>>>>> If I recall correctly, you admit that the quality of your conscious
>>>>>>> knowledge is dependent on the particular quality of your redness, so qualia
>>>>>>> can be thought of as a substrate, on which the quality of your
>>>>>>> consciousness is dependent, right?  If you are only focusing on a different
>>>>>>> substrate being able to produce the same 'redness behavior', then all you
>>>>>>> are doing is making two contradictory assumptions.  If you take that
>>>>>>> assumption, then you can prove that nothing, not even a redness function,
>>>>>>> can have redness, for the same reason.  There must be something that is
>>>>>>> redness, and the system must be able to know when redness changes to
>>>>>>> anything else.  All you are saying is that nothing can do that.
>>>>>>>
>>>>>>
>>>>>> I am saying that redness is not a substrate, but it supervenes on a
>>>>>> certain type of behaviour, regardless of the substrate of its
>>>>>> implementation. This allows the system to know when the redness changes to
>>>>>> something else, since the behaviour on which the redness supervenes would
>>>>>> change to a different behaviour on which different colour qualia supervene.
>>>>>>
>>>>>>> That is why I constantly ask you what could be responsible for
>>>>>>> redness: whatever you say it is, I could use your same argument
>>>>>>> and say it can't be that, either.
>>>>>>> If you could describe to me what redness could be, this would
>>>>>>> falsify my camp, and I would jump to the functionalist camp.  But that is
>>>>>>> impossible, because all your so-called proof claims is that nothing
>>>>>>> can be redness.
>>>>>>>
>>>>>>> If it isn't glutamate that has the redness quality, what can it be?
>>>>>>> Nothing you say will work, because of your so-called proof: when
>>>>>>> you have contradictory assumptions, you can prove all claims to be both true
>>>>>>> and false, which has no utility.
>>>>>>>
>>>>>>
>>>>>> Glutamate doesn’t have the redness quality, but glutamate or
>>>>>> something that functions like glutamate is necessary to produce the redness
>>>>>> quality. We know this because it is what we observe: certain brain
>>>>>> structures are needed in order to have certain experiences. We know that it
>>>>>> can’t be substrate specific because then we could grossly change the qualia
>>>>>> without the subject noticing, which is absurd: it would mean there is no
>>>>>> difference between having and not having qualia.
>>>>>>
>>>>>>> Until you can provide some falsifiable example of what could be
>>>>>>> responsible for your redness quality, further conversation seems to be a
>>>>>>> waste.  My assertion is that, given your assumptions, NOTHING will
>>>>>>> work; until you falsify that, with at least one example possibility,
>>>>>>> why go on with this contradictory assumption, under which qualitative
>>>>>>> consciousness, based on substrates like redness and greenness, simply isn't
>>>>>>> possible?
>>>>>>>
>>>>>> --
>>>>>> Stathis Papaioannou
>>>> --
>>>> Stathis Papaioannou
[Attachment: 3_robots_tiny.png (image/png, 26214 bytes): http://lists.extropy.org/pipermail/extropy-chat/attachments/20220509/99fdc9dc/attachment.png]

