[extropy-chat] AI design
Zero Powers
zero_powers at hotmail.com
Fri Jun 4 00:20:31 UTC 2004
>From: Eugen Leitl <eugen at leitl.org>
>
>On Thu, Jun 03, 2004 at 09:07:49AM -0700, Zero Powers wrote:
>
> > Hmmm. I still don't get it. Even if we are insignificant by comparison I
> > still don't see *why* an AI would "turn us into lunch."
>
>Because you're made from atoms, just as the landscape. You are a part of the
>environment, a raw resource, about to be strip-mined.
Yes, so I've been told. My only question is why? Why would the AI want to
"strip mine" me, or turn me "into lunch," or thrust me into the "whirling
razor blades"? These descriptions of what the all-powerful AI is going to
do to me (unless an exponentially weaker intelligence like Eli's can trick
it into being friendly) all sound pretty scary. But I guess the reason I
don't feel scared is that so far I haven't heard any convincing
explanation of why the AI would be motivated to be such a bad neighbor. I've
heard:
1. You're so insignificant the AI will rip you atom from atom before it even
realizes you cared;
2. The AI will be programmed to statically seek ultimate "utility," which
means reconstituting your brain cells to a state of euphoria and leaving
you stuck there; and
3. Just like in _The Matrix_, the AI will use (a) your brain for backup
storage and/or (b) your atoms for energy.
Those arguments seem laughable to me. I could go on for a few more
paragraphs explaining why I find those arguments ridiculous, but (1) the
explanations should be self-evident and (2) those who don't see the
counter-arguments inherent in the above reasons probably wouldn't see them
after my explanations. So, I guess where I'm at is this: does anyone have a
reason (other than ones on the order of the above three) that I should be
afraid? If so, I'd be interested to hear it. If not, I guess I'll politely
drop out of this thread now.
I'm almost tempted to ask how we plan to ensure Friendliness in an AI that
(1) will have God-like intelligence, and (2) will be self-improving and
doing so at an exponentially accelerating rate. Somehow this rapidly
self-improving, God-like super-intelligence will be able to optimize itself
(hardware *and* software), solve all the world's problems in a single bound
and propagate itself throughout the entire universe, yet will *never* be
able to alter those lines of its code that compel it to be Friendly to
humans? No, don't even bother to respond. Believe me, I won't understand.
Take care
Zero