[ExI] The symbol grounding problem in strong AI

Eugen Leitl eugen at leitl.org
Tue Dec 15 09:45:49 UTC 2009


On Tue, Dec 15, 2009 at 02:34:39PM +1030, Emlyn wrote:

> This is the real answer to the "consciousness" problem, imo. You will
> know if AI is conscious because you'll just ask it if it is, and

I don't know what "conscious" even means, but you'll know AI has achieved
full human equivalence across the board once everyone is out of a job.

> you'll be able to observe its behaviour and see if it is influenced by
> its own sense of consciousness. The problem of whether it is telling
> the truth is identical to the problem of whether people lie about this
> now; you can't know and it doesn't matter.
> 
> Most likely, an AI which is not an emulation of evolved biology will

I would not be holding my breath for that one.

> experience something entirely unlike what we experience. It should be
> pretty damned interesting, and illuminating for humanity, to interact
> with such alien critters!

The first generations are human demand-driven, and hence perfectly
boring. The other kind is incomprehensible and/or lethal, so I'm
not sure I'd want to talk to them unless I were one of them. And
then you wouldn't want to talk to me.

-- 
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
