[ExI] Chalmers

John Clark johnkclark at gmail.com
Fri Dec 20 13:52:51 UTC 2019


On Thu, Dec 19, 2019 at 6:00 PM William Flynn Wallace via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> John, why do you require certainty?
>

I don't! My fundamental axiom is that intelligent behavior implies
consciousness. Like all axioms I can't prove it, but I nevertheless need
to believe it because I could not function if I really thought I was the
only conscious entity in the universe.

> Why couldn't you just imagine a study where a stimulus is presented to
> someone and they give you a verbal description of it which aligns with
> yours? This means that both of you are conscious.
>

I think that is almost certainly true, and that's good enough. But if you
are using that method to detect consciousness, you are doing it by
observing behavior, in particular the intelligent type. So if you want to
understand consciousness better, your best bet is to figure out a way to
make an AI smarter; you'll never reach total certainty, but you'll get as
close to understanding how consciousness works as it's possible to get.


> > Well, I guess that means the AI is conscious.
>

Yes, probably.

> A lot of people won't be happy with that.
>

A lot of people are already unhappy with AI because they just don't like
the idea that something that is not wet and squishy could be conscious,
not even if it behaves super-intelligently. But I've got to say, as a
practical matter it's not important whether humans believe AIs are
conscious; from our point of view it's far more important that AIs
believe humans are conscious, because if an ultra-smart AI doesn't,
people are going to be very VERY *VERY* unhappy.

 John K Clark