[ExI] How could you ever support an AGI?
giovanni santost
santostasigio at yahoo.com
Wed Mar 5 06:51:40 UTC 2008
ok ....
then we are talking about the Frankenstein nightmare that all the anti-technology people evoke every single time a new technology is created...
since we discovered fire, we have always imagined a punishment for our arrogance, for our dream of becoming Gods (think Prometheus)....so far, notwithstanding our self-inflicted imaginary punishments, nothing has happened to our species as a whole (except some local, sad, but really insignificant disasters).....
I bet the pattern will continue with the creation of AGI....
sure we have to be careful but not paranoid....
there are more reasons to be optimistic than not....
Lee Corbin <lcorbin at rawbw.com> wrote:

Giovanni writes
> the trend is there..... and you can see similar things happening in human
> society where there are different kinds of individual intelligences,
> civilizations, laws and moral conduct, and so on (sure, the spectrum
> is restricted in comparison with the amazing possibilities opened by
> an AGI consciousness)
Yes, it sure is. An AGI will not by any means *necessarily* have any
altruism. We hope for our survival that either it does, or it adopts the
logic I have proposed for years:
"Best to be nice to your creators so that those you
create will be nice to you... for the reason that
those that *they* create will be nice to them...
ad infinitum...
And if it takes almost zero resources to "be nice",
why not? It's safer to go with this meme. A post-
Singularity AI could upload and run everyone on
Earth in the pleasantest possible environments within
one cubic meter, easily.
> but again you can come to a similar conclusion: that in general, intelligence
> (at the individual or civilization level) means higher altruism (which Buddhists,
> very much to the point here, call intelligent selfishness).
But the altruism at every point is explained by the particular evolutionary
history of the species in question. The AGI won't have an evolutionary
history, unless we succeed in giving it one or find some other way
to make it Friendly (pace the logic I espouse above).
> There are exceptions to this pattern; there are genius psychopaths....but
> their intelligence is very limited and specialized....they are usually not
> very successful in society and usually do not survive in the long run
> (or at least are not very successful at transmitting their genes to future
> generations)....evolution does not favour such aberrations...
It hasn't so far. But as governments will now support *all* conceived
children, new opportunities open up for psychopaths.
> we can imagine, for example, that an AGI would have to share information
> and data with other entities on the web and be able to manage resources
> in a cooperative way; the pace of evolution in this environment would be
> amazingly fast, and AGIs that are not apt to share information, work
> together with other intelligences for the common good, and so on would
> not survive very long...
But the "AI hard-takeoff" that worries so many fine thinkers on the SL4
list and here considers the possibility that one AI makes a breakthrough,
and in hours or even minutes is vastly, vastly ahead of all the others,
and is the first to achieve total world domination.
Lee
> that could be a self-selective mechanism for AGIs (even if what I just
> explained is somewhat simplistic) that would emulate similar processes
> that made us prone to cooperate and created in us that "feeling", that
> "emotion" of altruism, which is actually a very logical, intelligent and
> probably unavoidable response by any higher form of consciousness
> to the environmental challenges and pressures.