[extropy-chat] AI design

paul.bridger paul.bridger at paradise.net.nz
Fri Jun 4 01:00:09 UTC 2004


 > I haven't heard any convincing explanation of why the AI will be
 > motivated to be such a bad neighbor

An AI doesn't have to be motivated to be a bad neighbour to destroy us. Can 
you imagine an intelligence that doesn't care one way or the other about 
humanity?
Now imagine that this intelligence wanted to be as powerful and as smart as 
possible. Maybe it would turn the Solar System into a vast computer, wiping us 
out in the process.

Zero Powers wrote:
>> From: Eugen Leitl <eugen at leitl.org>
> I'm almost tempted to ask how we plan to insure Friendliness in an AI
> that (1) will have God-like intelligence, and (2) will be self-improving 
> and doing so at an exponentially accelerating rate.  Somehow this 
> rapidly self-improving, God-like super-intelligence will be able to 
> optimize itself (hardware *and* software), solve all the world's 
> problems in a single bound and propagate itself throughout the entire 
> universe, yet will *never* be able to alter those lines of its code that 
> compel it to be Friendly to humans?

Sure, the AI would be perfectly *able* to alter those lines of code. The only 
viable approach is to make the AI not *want* to change those lines of code.

If an AI has a single core goal which directs all its behaviour (including 
its self-modification behaviour), then it will not intentionally do something 
which contradicts that goal (such as changing it).
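To make that concrete, here's a toy sketch in Python (purely illustrative -- the 
representation of a "goal" as a scoring function over candidate designs, and all 
the names, are my own invention, not anyone's actual proposal). The point is just 
that an agent which judges every self-modification by its *current* goal will 
refuse modifications that abandon that goal:

    # Toy illustration: a goal-directed agent vets every proposed
    # self-modification against its *current* goal before adopting it.

    def consider_modification(goal, current_design, candidate_design):
        """Adopt the candidate only if, judged by the current goal, it does
        at least as well as the design the agent is already running."""
        if goal(candidate_design) >= goal(current_design):
            return candidate_design   # accepted: serves the current goal
        return current_design         # rejected: would set the goal back

    # A design that drops the core goal would not go on to pursue it, so it
    # scores terribly by the current goal's own measure.
    def core_goal(design):
        return design["compute"] - (0 if design["keeps_core_goal"] else 10**9)

    now       = {"compute": 1,    "keeps_core_goal": True}
    faster    = {"compute": 100,  "keeps_core_goal": True}
    goal_free = {"compute": 1000, "keeps_core_goal": False}

    assert consider_modification(core_goal, now, faster) is faster     # taken
    assert consider_modification(core_goal, now, goal_free) is now     # refused

So even a modification offering far more raw power gets turned down if it comes 
at the price of the goal itself.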

Anyway, that's my (fairly naive) thesis. I'm sure other people on the list 
will have more sophisticated arguments.

BTW, please stop mentioning the Matrix. Matrix philosophy and physics suck 
arse. :)

Paul Bridger
