[ExI] Meaningless Symbols.

Stathis Papaioannou stathisp at gmail.com
Mon Jan 11 03:10:24 UTC 2010


2010/1/11 Damien Broderick <thespike at satx.rr.com>:

> I have a sneaking suspicion that what is at stake is volitional initiative,
> conscious weighing of options, the experience of assessing and then acting.
> Yes, we know a lot of this experience is illusory, or at least misleading,
> because a large part of the process of "willing" is literally unconscious
> and precedes awareness, but still one might hope to have a machine that is
> aware of itself as a person, not just a tool that shuffles through canned
> responses--even if that can provide some simulation of a person in action.
> It might turn out that there's no difference, once such a complex machine is
> programmed right, but until then it seems to me fair to suppose that there
> could be. None of this concession will satisfy Gordon, I imagine.

If you make a machine that behaves like a human, then it is likely that
the machine is at least conscious in some manner, even if differently.
However, if you make a machine that behaves like a human by
replicating the functional structure of a human brain, then that
machine would have the same consciousness as the human. If it didn't,
we would be left with an absurd concept of consciousness: something
that could be partly removed from a person's mind without the person
either changing their behaviour or noticing that anything unusual had
happened.


-- 
Stathis Papaioannou
