[ExI] How could you ever support an AGI?

Lee Corbin lcorbin at rawbw.com
Thu Mar 6 04:27:46 UTC 2008


Jeff Davis writes

> In fact I conclude... that a human-created AI would likely see
> itself as the culmination of human intelligence and civilization:
> born of, upgraded from, modeled on, schooled in, and destined
> to pursue the furtherance of intelligence and civilization.

How can you "conclude" what an Artificial General Intelligence
(AGI) will think about humanity?  But the danger that Robert
Bradbury, who started this thread, sees is that once it reaches
human-level intelligence, it will quickly go beyond it and
become utterly unpredictable.  If it is a lot smarter than we
are, there is no telling what it might think.

   It could be singularly selfish.
   It could just go crazy and "tile the world with paperclips".
   It could be transcendentally idealistic and want to greatly
     further intelligence in the universe and, oh, wipe out the pesky
     insignificant bacteria (us) that it happened to evolve from.
   It could (with luck) be programmed (somehow) or evolved
     (somehow) to respect our laws, private property, and so on.
   As soon as it's able to change its own code, it will be literally
     unpredictable.

> [Alex wrote]
>> Expecting It to adhere to these moral codes would be
>> akin to you or me adhering to the moral codes of ants.
> 
> Too big a jump, at least for the first-generation AI.

But the "first generation" may not last very long at all.  For as
soon as anything is as bright as we are, constant hardware
and software improvements will put it vastly beyond us.

You can't know that there will not be a "fast takeoff".  And if
there is, it could all be over for us in an instant.

Lee



