[ExI] The symbol grounding problem in strong AI

Stathis Papaioannou stathisp at gmail.com
Mon Jan 11 01:17:57 UTC 2010


2010/1/11 Gordon Swobe <gts_2000 at yahoo.com>:

>> No-one claims that the brain is a digital computer, but it
>> can be simulated by a digital computer.
>
> If you think simulations of brains on digital computers will have everything real brains have then you must think natural brains work like digital computers. But they don't.

Gordon, it's sensible to doubt that a digital computer simulating a
brain will have the consciousness that the brain has, since it isn't
an atom-for-atom copy of the brain. What I have done is assume that it
won't, and see where that assumption leads. Suppose the part of your
brain responsible for some anatomically localised aspect of
consciousness is replaced with a simulation of it: by hypothesis the
simulation reproduces that part's input-output behaviour exactly, but
lacks consciousness. It follows that this aspect of your consciousness
could be selectively removed without your behaviour changing and
without you noticing (using the rest of your brain) that there has
been any change. This seems absurd, since at the very least you would
expect to notice if you suddenly lost your vision or your ability to
understand language. So I am forced to conclude that the initial
premise, that the brain simulation is unconscious, was wrong. Only two
other premises in this argument could be disputed instead: that brain
activity is computable, and that consciousness is the result of brain
activity.


-- 
Stathis Papaioannou
