[ExI] How not to make a thought experiment
Stathis Papaioannou
stathisp at gmail.com
Sat Feb 20 05:25:16 UTC 2010
2010/2/20 Will Steinberg <steinberg.will at gmail.com>:
> What has consistently boggled my mind is the fact that THE CRA, EVEN IN
> THEORY, WILL NOT EVEN MIMIC CONSCIOUSNESS.
> Upon closer examination, the idea of a "rulebook for responses" is bogus and
> impossible. Consciousness necessitates NON-BIJECTIVE, CHANGING RESPONSES,
> ABSOLUTELY NOT NOT NOT SQUIGGLE FOR SQUAGGLE, or any of this, if I may pull
> a Clark, BULLSHIT.
> The analogy is wrong because it assumes an untruth and hides it behind the
> neat idea of this rulebook. Please stop using it to back up anything. All
> it proves is that a super-ELIZA is not conscious. We already know this:
> the rulebook machine is not conscious, and humans are not rulebook
> machines, so there is no comparison.
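
The quoted point about changing responses can be made concrete with a
minimal Python sketch (the toy replies here are invented purely for
illustration): a fixed squiggle-for-squaggle table always returns the
same output for the same input, while a responder with state does not.

    # Fixed "squiggle for squaggle" rulebook: one input, one output, forever.
    RULEBOOK = {
        "hello": "hi there",
        "how are you?": "fine, thanks",
    }

    def rulebook_reply(utterance: str) -> str:
        return RULEBOOK.get(utterance, "I don't understand")

    # A stateful responder: the same input can yield different outputs,
    # because the reply depends on the whole history, not just the last line.
    class StatefulResponder:
        def __init__(self) -> None:
            self.history: list[str] = []

        def reply(self, utterance: str) -> str:
            self.history.append(utterance)
            if self.history.count(utterance) > 1:
                return "you already said that"
            return f"tell me more about '{utterance}'"

    responder = StatefulResponder()
    print(rulebook_reply("hello"), rulebook_reply("hello"))  # identical replies
    print(responder.reply("hello"))  # "tell me more about 'hello'"
    print(responder.reply("hello"))  # "you already said that"

Note, though, that the second machine is still only a table plus a
counter; statefulness alone is a necessary condition here, not a
sufficient one.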
I don't see the Chinese Room as necessarily equivalent to a Giant
Look-Up Table (GLUT). The man in the room could instead be following
the steps of a program that speaks Chinese, functioning as an
extremely slow digital computer. Having said that, if a GLUT is good
enough to behave intelligently, I don't see why it should not also be
conscious.
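
To make the two readings concrete, here is a minimal sketch (the toy
exchanges and function names are invented for illustration). A GLUT
keys one fixed reply on the entire conversation history so far, so it
needs an entry per possible history; a program computes the reply,
with the man in the room playing the part of the CPU.

    # Reading 1: a GLUT keyed on the whole conversation history. Finite,
    # but astronomically large for any realistic bound on conversation length.
    GLUT: dict[tuple[str, ...], str] = {
        ("ni hao",): "ni hao",
        ("ni hao", "ni hao ma?"): "hen hao, xiexie",
    }

    def glut_reply(history: tuple[str, ...]) -> str:
        return GLUT.get(history, "...")

    # Reading 2: the reply is computed step by step rather than looked up;
    # the man executing the rules by hand is this function's (very slow) CPU.
    def program_reply(history: tuple[str, ...]) -> str:
        return "echo: " + history[-1]  # toy stand-in for real computation

    assert glut_reply(("ni hao",)) == "ni hao"
    assert program_reply(("ni hao",)) == "echo: ni hao"

From the outside the two can be given identical input-output
behaviour; the disagreement is only over whether the difference in
internal process matters for consciousness.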
--
Stathis Papaioannou