[ExI] Possible seat of consciousness found

John Clark johnkclark at gmail.com
Sat Feb 15 17:38:29 UTC 2020


On Sat, Feb 15, 2020 at 11:30 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

*I first realized it was important to distinguish between reality and
> knowledge of reality in an AI class as an undergraduate.  We were trying to
> program a vision system for a robot that could manipulate blocks on a
> table. They were instructing us how to build 3D models of the blocks from
> 2D camera data.  I was thinking this had to be the wrong way to do things,
> since I was naively thinking we didn't need to do all that extra work since
we were just "directly aware of the blocks on the table."*
>

So the "extra work" proved not to be extra at all, not if the robot was to
behave in the way you wanted it to.

*> As Representational Qualia theory defines: "Consciousness as
> computationally bound elemental subjective qualities like redness and
> greenness."*
>

I still don't know what "computationally bound" means, but I do know that
red and green do not have "elemental subjective qualities" any more than
bigness or smallness do.

> *John, this predicts there are two types of seeing.  The kind that is
> done by robots, our subconscious, and blindsight, where there is no
> conscious computational binding, and the conscious kind where there is
> binding.  *
>

If these two types of seeing produce the same behavior, then science can
never distinguish between them, and Darwinian evolution could never have
created the type that produces not only intelligence but consciousness
too; and yet here I am, a conscious being. On the other hand, if the two
types of seeing produce different behaviors, then any intelligent activity
that would convince you that one of your fellow humans was conscious, and
not sleeping or under anesthesia or dead, should, if you're being logical,
also convince you that a robot behaving in the same intelligent way was
just as conscious as the human.

And as I keep emphasising, as a practical matter it's not really important
whether humans think an AI is conscious, but it is important that the AI
think humans are conscious. If the AI thinks we're conscious like it is,
it might feel some empathy toward us, but if it thinks we're just
primitive meat machines then the human race is in deep trouble.

John K Clark
