[ExI] Digital Consciousness
Stathis Papaioannou
stathisp at gmail.com
Sat May 4 11:54:03 UTC 2013
On Sat, May 4, 2013 at 2:37 PM, Gordon <gts_2000 at yahoo.com> wrote:
> Stathis Papaioannou <stathisp at gmail.com> wrote:
>
>> But it can be shown that if it is possible to replicate the behaviour
>> of the brain then it is also possible (in fact, it follows necessarily)
>> to replicate the consciousness.
>
> If you think consciousness follows necessarily from brain-like behavior then
> I suppose you must think some computers are already at least semi-conscious.
> I was joking the other day about how I would like sometimes to shoot my
> stand-alone chess computer, as it seems there is a cunning person inside it
> and he sometimes makes me angry. It certainly *behaves* as if it is
> conscious of me, of itself, and the game. Do you think it is actually dimly
> aware of its own existence?
>
> If not, at what point in the development of conscious-like behavior do we
> decide suddenly to grant that an AI has real consciousness? How is it not
> arbitrary?
You have the same problem with biological systems. Do you think a dog
is conscious? A cockroach? A bacterium? A water molecule?
>>And who assigns the meaning to our own physically based brains?
>
> I don't understand your question, but this is an important point that I'm
> trying to make here. Who assigns the meaning of what? The brain? As a word,
> I think "brain" has meaning and that we assign it that meaning. Does that
> answer your question?
>
> As for my point, syntax and computational states are not actually intrinsic
> to the physics of the brain. It would seem that people who follow the
> computational theory of mind are merely assigning computational states to
> the physics in a manner not unlike how we assign meanings to words.
The point I was trying to make is that you seem to have a problem
deriving semantics from syntax, but that is no less a problem for a
brain than a computer.
--
Stathis Papaioannou