[extropy-chat] Proof that a paperclip maximizer cannot be a general intelligence
Marc Geddes
marc_geddes at yahoo.co.nz
Fri Aug 19 05:26:48 UTC 2005
>"Either I did not just carry out the goal of
>understanding this
>sentence, or I am a weevil"
>
>*poof* instant weevil?
>"If this sentence is true, then Santa Claus exists."
The reason these two examples are meaningless is that they confuse a language with its meta-language. In my own example I tried not to commit that elementary confusion.
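For concreteness, here is a minimal sketch of the Tarskian hierarchy I have in mind (standard textbook material, nothing original to this thread), written in LaTeX notation:

  \mathrm{True}_{n+1}(\ulcorner \varphi \urcorner)
  \quad \text{is well-formed only when } \varphi \text{ is a sentence of level } \le n,

so the Liar,

  L \leftrightarrow \neg\,\mathrm{True}(\ulcorner L \urcorner),

is simply ill-formed: no language in the hierarchy contains its own truth predicate. The weevil and Santa Claus sentences smuggle a semantic predicate for the whole language (understanding, truth) back into the language itself, which is exactly what the hierarchy forbids.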
Anyway, you're right; I quickly realized that my example doesn't prove a thing.
But let me throw the question open to everyone on the extropy list:

Is there some clever question you could ask a paperclip maximizer (an AI with 'making paperclips' as its highest goal) which would, as it were, 'catch it out' and prove by reductio ad absurdum that the notion of a paperclip maximizer as a general intelligence is absurd? As Justin says, what I'm looking for is a statement of the following form:
>If you could devise a sentence which must be true or false, but a
>paperclip maximizer could not decide upon a value, AND that sentence
>must be decided upon for the understanding system to be considered a
>general intelligence, you might have something.
Can anyone think of such a sentence?
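One obvious first candidate is a Gödel-style diagonal sentence aimed at the maximizer itself, though I suspect it fails for exactly the language/meta-language reasons above. Sketched in LaTeX, with an assertion predicate \mathrm{Asserts}_{\mathrm{PM}} introduced purely for this sketch:

  G \leftrightarrow \neg\,\mathrm{Asserts}_{\mathrm{PM}}(\ulcorner G \urcorner)

That is, G says "the paperclip maximizer will never assert G." If it asserts G, it has asserted a falsehood; if it never asserts G, then G is true but forever undecided by it. The catch is that the same diagonal construction can be aimed at any fixed reasoner, human or machine, so it cannot by itself single out paperclip maximizers as falling short of general intelligence.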
---
Please visit my website:
http://www.riemannai.org
Science, Sci-Fi and Philosophy
---
THE BRAIN is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.
-Emily Dickinson
'The brain is wider than the sky'
http://www.bartleby.com/113/1126.html