[ExI] People often think their chatbot is alive

Adrian Tymes atymes at gmail.com
Sun Jul 17 04:51:11 UTC 2022


On Sat, Jul 16, 2022 at 8:14 PM Brent Allsop <brent.allsop at gmail.com> wrote:

> On Sat, Jul 16, 2022 at 3:08 PM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> On Sat, Jul 16, 2022 at 1:34 PM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> OK, let me ask you this.  Are you interested in finding out the
>>> colorness qualities of anything in physics?
>>>
>>
>> Physics, or biophysics?  If colorness is a quality of perception, then it
>> isn't just about physics divorced from the biology of the observer.
>>
>
> Are you saying redness is not the final result of perception of red things?
>

I would normally have trouble believing that such a non sequitur was made
in good faith.  In this case I'm willing to believe it was.

The literal answer is: no, that is not what I was saying there.  (Though it
can be a true statement.  If the perception of red things causes redness
which then causes some action to be taken, then by definition, redness is
not the final result since some action was taken as a consequence of the
redness.)


> And that your perception system doesn't render that surface of the 3D
> strawberry, into your consciousness, with whatever it is that has a redness
> quality?
>

This is closer, but still not what I was saying.  But note that even you
call it "your perception system".  This is by definition more complex than
simple physics.

>>> And of course, again, once we discover which of all our descriptions of
>>> physics in the brain is a description of redness, (it will falsify all the
>>> crap in the gap theories like substance dualism and functionalism) and
>>> result in a clear scientific consensus about not only what
>>> consciousness is, but an understanding of what consciousness is like.
>>> Along with that will be a near unanimous consensus that abstract systems,
>>> like the one on the right in the image, would not be considered to be
>>> conscious by anyone, with any reasonable intelligence.
>>>
>>
>> That you are driving toward that conclusion tells me that you are
>> probably incorrect.  It seems quite possible that such an abstract system
>> could be conscious in every meaningful way.
>>
>
> Those are falsifiable claims.
>

Actually, they might not be.  They are predictions of a future state.  So
long as those states have yet to come to pass, and have not been rendered
impossible, the claims are not falsifiable.  If objective measurement of
consciousness remains as impossible as it is today, then it seems unlikely
that the conditions for falsifying either claim will ever come to pass.


> And this is how they will be falsified, I predict.
>
> 10 years from now.  (5, if we got 10,000 signatories on the RQT
> <https://canonizer.com/topic/88-Theories-of-Consciousness/6-Representational-Qualia>
> camp this year, or one of its sub camps, even the crap in the gap ones like
> functionalism or substance dualism, which separate redness from physical
> reality), then someone like Elon Musk finally gets the message (because of
> all the signatures)
>

Many petitions like this have been attempted.  Even among those that got at
least 10,000 signatures, how many resulted in significant action (as
opposed to letters and speeches that did little more than acknowledge the
petition) that would not have happened without the petition?  My data is
anecdotal, but the examples I am aware of have a 0% success rate.


> and finally realizes how to observe the brain in a non-qualia-blind way,
> and, using the tools of Neuralink to hack the brain, demonstrates that
> NOBODY can experience redness without glutamate. (or pick your most likely
> whatever could be redness).
>

From what I know of neural architecture, this conclusion appears to be
false.  Memory architecture must be internally consistent, but there is
nothing requiring it to be the same from person to person.

Even if it were true, it would only be true of human brains, or perhaps of
biological ones.  It would say nothing about the ability of AI running on
silicon and metal to potentially experience redness.


>  If not, how could your claims be falsified?
>

My claim, to be clear, is not that consciousness is
non-substrate-dependent, but that the evidence is consistent with
consciousness being non-substrate-dependent.

Consciousness is, from our current tool set, like God: people can claim
that various things are or are not conscious, and this is not a falsifiable
claim.  (Again I bring up the historical example: classes of people were
once considered non-conscious in order to justify their enslavement, yet
most people these days consider those same classes of people to be
conscious.)  It is possible that this will always be so.

What is in evidence, however, is how people react when they are treated as
conscious versus when they are treated as non-conscious.  Those treated as
conscious are less likely to be hostile (as they are more likely to see
others as willing to negotiate and to converse with them in good faith),
more likely to take care of themselves, and more likely to contribute
positively to society.  They also act as if they are conscious more often,
simply by being presented with the assumption that they are.