[ExI] How not to make a thought experiment
lacertilian at gmail.com
Sat Feb 20 17:33:26 UTC 2010
Will Steinberg <steinberg.will at gmail.com>:
> Given Swobe's previous quote on squiggling and squoggling, it seems that this GLUT is exactly what he considers the CRA to be.
I haven't been able to figure that out, myself. It's entirely likely
that he considers the specifics of the implementation irrelevant, and
so has pointedly refused to give such questions any thought.
I would not be surprised to learn that Gordon has never written a
computer program in his life.
Stathis Papaioannou <stathisp at gmail.com>:
> I don't see the CRA as necessarily equivalent to a Giant Look-Up Table
> (GLUT). It could instead run a program that speaks Chinese,
> functioning as an extremely slow digital computer. Having said that,
> if a GLUT is good enough to behave intelligently I don't see why it
> should not also be conscious.
The thing is, it would have to be a self-modifying GLUT. That's a
fundamentally different sort of thing, and there is nothing in the CRA
to indicate that the man is editing his rulebooks as he goes.
A static GLUT cannot learn, obviously, so it can't be intelligent in
the same way that humans are. Maybe in some other, less interesting
way. You have to go shockingly far down the phylogenetic tree before
learning disappears entirely; it's a pretty basic trait of earthly life.
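The distinction being drawn here can be sketched in code. The names and the toy "learning" rule below are hypothetical illustrations, not anything from Searle's argument: a static lookup table only replays pre-authored entries, while a self-modifying one edits its own rulebook as it goes, so its future replies depend on its history.

```python
# Hypothetical sketch: static vs. self-modifying lookup tables.

# A static GLUT: a fixed mapping from input to output. Nothing said
# to it ever changes what it will say later.
STATIC_GLUT = {"ni hao": "ni hao", "zai jian": "zai jian"}

def static_reply(utterance):
    # Unknown inputs fall through to a shrug; the table never grows.
    return STATIC_GLUT.get(utterance, "...")

class SelfModifyingGLUT:
    """A table whose entries can be rewritten by its own inputs."""

    def __init__(self, table):
        self.table = dict(table)

    def reply(self, utterance):
        # A toy teaching rule (invented for illustration): inputs of the
        # form "when X say Y" edit the rulebook itself, so the table's
        # later behavior depends on the conversation so far.
        if utterance.startswith("when "):
            trigger, sep, response = utterance[5:].partition(" say ")
            if sep:
                self.table[trigger] = response
                return "ok"
        return self.table.get(utterance, "...")
```

A static table gives the same answer to the same input forever; the self-modifying one can be taught a new entry mid-conversation, which is the "editing his rulebooks as he goes" that the CRA, as usually stated, does not include.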