[ExI] The symbol grounding problem in strong AI

Emlyn emlynoregan at gmail.com
Tue Dec 15 04:04:39 UTC 2009


2009/12/14 BillK <pharos at gmail.com>:
> If a strong AI has human sense equivalents, like vision, hearing,
> taste, touch, etc. plus symbol manipulation, all to such a level that
> it can operate successfully in the world, then you have a processor
> which could pass for human.
>
> You can then try asking it if it is conscious and see what answer you get......
>
>
> BillK

This is the real answer to the "consciousness" problem, imo. You will
know whether an AI is conscious because you'll just ask it if it is, and
you'll be able to observe its behaviour to see if it is influenced by
its own sense of consciousness. The problem of whether it is telling
the truth is identical to the problem of whether people lie about this
now: you can't know, and it doesn't matter.

Most likely, an AI which is not an emulation of evolved biology will
experience something entirely unlike what we experience. It should be
pretty damned interesting, and illuminating for humanity, to interact
with such alien critters!

-- 
Emlyn

http://emlyntech.wordpress.com - coding related
http://point7.wordpress.com - ranting
http://emlynoregan.com - main site


