[ExI] How could you ever support an AGI?

Lee Corbin lcorbin at rawbw.com
Wed Mar 5 05:53:54 UTC 2008


Giovanni writes

> the trend is there..... and you can see similar things happening in human
> society where there are different kinds of individual intelligences,
> civilizations, laws and moral conducts and so on (sure the spectrum
> is restricted in comparison with the amazing possibilities opened by
> an AGI consciousness)  

Yes, it sure is.  An AGI will not by any means *necessarily* have any
altruism. We hope for our survival that either it does, or it adopts the
logic I have proposed for years:

      "Best to be nice to your creators so that those you
      create will be nice to you... for the reason that
      that those that *they* create will be nice to them...
      ad infinitum...

      And if it takes almost zero resources to "be nice",
      why not?  It's safer to go with this meme.  A post-
      Singularity AI could upload and run everyone on
      Earth in the pleasantest possible environments within
      one cubic meter, easily.

> but again you can come to a similar conclusion that in general intelligence
> (at the individual or civilization level) means higher altruism (which Buddhists,
> very much to the point here, call intelligent selfishness).

But the altruism at every point is explained by the particular evolutionary
history of the species in question.  The AGI won't have an evolutionary
history, unless we succeed in giving it one or find some other way
to make it Friendly (pace the logic I espouse above).

> There are exceptions to this pattern, there are genius psychopaths....but
> their intelligence is very limited and specialized....they are usually not
> very successful in society and usually do not survive in the long run
> (or at least are not very successful in transmitting their genes to future
> generations)....evolution does not favour such aberrations...

It hasn't so far.  But now that governments will support *all* conceived
children, new opportunities open up for psychopaths.

> we can imagine for example that an AGI would have to share information
> and data with other entities on the web and be able to manage resources
> in a cooperative way; the pace of evolution in this environment would be
> amazingly fast, and AGIs that are not apt to share information, work
> together with other intelligences for the common good and so on would
> not survive very long...

But the "AI hard-takeoff" that worries so many fine thinkers on the SL4
list and here considers the possibility that one AI makes a breakthrough,
and in hours or even minutes is vastly, vastly ahead of all the others,
and is the first to achieve total world domination.

Lee

> that could be a self-selective mechanism for AGIs (even if what I just
> explained is somewhat simplistic) that would emulate similar processes
> that made us prone to cooperate and created in us that "feeling", that
> "emotion" of altruism, which is actually a very logical, intelligent and
> probably unavoidable response by any higher form of consciousness
> to the environmental challenges and pressures.



