[extropy-chat] Eugen Leitl on AI design

Adrian Tymes wingcat at pacbell.net
Wed Jun 2 21:45:52 UTC 2004


--- Eliezer Yudkowsky <sentience at pobox.com> wrote:
>   If the paperclip maximizer is a superintelligence,
> it will correctly 
> expect to win.

Modulo initial resources.  Again, even a
superintelligence can't break the laws of physics,
and would need some time to gather strength.

> You must be joking.  A human brain beat a
> superintelligence?

Beat?  No.  Augment with thought patterns the
superintelligence has not previously used?  Possibly.

> the 
> paperclip maximizer would read out only that
> information which it needed 
> during Bradbury's disintegration,

With a large bias towards retaining information.
Anything that isn't proven useless can always be
discarded later...but one can't prove a negative, so
little would ever actually be proven useless.

> use the
> information to produce paperclips 
> I can't imagine how, and discard the information
> afterward.

Only if it intends to discard itself afterwards as
well, turning itself into paperclips.  Which it won't,
if it wants the paperclips to persist: there are all
kinds of natural hazards which, if left unchecked,
would diminish the number.  Production is production,
whether used to boost initial stock or replenish
depleted supply.

> For the 
> evolutionary biologists go to similarly great
> lengths to hammer out those 
> warm and fuzzy hopes with which people often
> approach natural selection.

And yet, if one knows where to look, one can find warm
and fuzzies inherent in the paths that natural
selection leads to.  It does require accepting that
not everything one cares about will necessarily be
preserved, or will matter, on any given path...but if
one examines why the things that are important to
oneself are important, one might find ways in which
they do (or can be made to) matter to others as well.
