[ExI] How could you ever support an AGI?

giovanni santost santostasigio at yahoo.com
Wed Mar 5 05:22:14 UTC 2008


It is also not true that tigers are not altruistic (what about the mother tigers adopting pigs recently in the news?). In general, animals can express altruism of some kind, and the general trend is that the more intelligent the animal, the more altruistic it is.
Ants are very social, but they cannot extend their social rules to beings outside their own species.
A dog can do that: it has no problem extending its social rules to human beings and considering us part of the pack.
Apes can do that too, and can even express higher feelings and share language and communication with us.
The trend is there.
And you can see something similar happening in human society, where there are different kinds of individual intelligences, civilizations, laws, moral codes and so on (granted, the spectrum is narrow compared with the amazing possibilities opened by an AGI consciousness),
but again you come to a similar conclusion: in general, intelligence (at the individual or civilization level) means higher altruism (what Buddhists, very much to the point here, call intelligent selfishness).
There are exceptions to this pattern; there are genius psychopaths. But their intelligence is very limited and specialized, they are usually not very successful in society, and they do not survive in the long run (or at least are not very successful at transmitting their genes to future generations). Evolution does not favour such aberrations.
We can imagine, for example, that an AGI would have to share information and data with other entities on the web and manage resources cooperatively. The pace of evolution in that environment would be amazingly fast, and AGIs that are not apt to share information and to work with other intelligences for the common good would not survive very long. That could be a self-selective mechanism for AGIs (even if what I just described is somewhat simplistic), one that would emulate the processes that made us prone to cooperate and that created in us the "feeling", the "emotion" of altruism, which is actually a very logical, intelligent and probably unavoidable response by any higher form of consciousness to environmental challenges and pressures.
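For what it is worth, here is a minimal toy simulation of the kind of self-selection I have in mind (my own illustrative sketch, not anything from the literature; the payoff numbers and strategy names are arbitrary assumptions): if mutual information sharing pays more than one-sided hoarding, and strategies replicate in proportion to what they earn, the sharers take over the population.

# Toy model of selection for cooperative (information-sharing) agents.
# All payoffs and parameters below are illustrative assumptions only.

import random

POP_SIZE = 100
GENERATIONS = 50
PARTNERS_PER_GEN = 5

# Hypothetical payoff to the first agent for one information exchange.
PAYOFF = {
    ("share", "share"): 3,   # mutual sharing is highly productive
    ("share", "hoard"): 0,   # a sharer gets nothing from a hoarder
    ("hoard", "share"): 1,   # a hoarder skims a small one-sided gain
    ("hoard", "hoard"): 0,   # two hoarders exchange nothing
}

def run():
    population = ["share"] * (POP_SIZE // 2) + ["hoard"] * (POP_SIZE // 2)
    for gen in range(GENERATIONS):
        fitness = [1.0] * POP_SIZE  # small baseline so weights are never all zero
        for i, strategy in enumerate(population):
            for _ in range(PARTNERS_PER_GEN):
                j = random.randrange(POP_SIZE)
                if j != i:
                    fitness[i] += PAYOFF[(strategy, population[j])]
        # Next generation: copy strategies with probability proportional to payoff.
        population = random.choices(population, weights=fitness, k=POP_SIZE)
        if gen % 10 == 0:
            print(f"generation {gen}: {population.count('share')} sharers")

if __name__ == "__main__":
    run()

Under these assumed payoffs the hoarders die out within a few dozen generations; the point is only that a payoff structure favouring exchange produces "altruistic" behaviour without anyone coding it in explicitly.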



Lee Corbin <lcorbin at rawbw.com> wrote: I am afraid that Alex Blainey's post, below, is the first one except for
John Clark's original (Robert Bradbury's original)  that truly understands
the danger posed by AGI.

> I can't help but notice that many of the posts have started out with
> logic and concluded with quasi-anthropomorphic, straw man arguments.
> I understand that an AGI will or should be based upon 'human intelligence,'
> however the end result will be completely Alien to us. So much so that
> our interpretation of intelligence wouldn't really fit. 

Quite right. Have the other posters studied the "fast take-off" scenarios?
Moreover, some didn't seem to understand that the *whole* point of
"Friendly AI" is to create an unusal, somehow very constrained AI,
that simply won't convert the Earth and the Solar System according
to its own needs, completely neglecting the insignificant bacteria
that created it. 

> > But maybe there are general and universal principles associated
> > with intelligence. Intelligence means finding patterns and connections,
> > understanding that affecting this part here means affecting this other
> > part over there, intelligence means having a higher sense of physical
> > and moral "ecology".

We have utterly no way to be able to claim this. As Alex goes on:

> Again this is reduced to anthropomorphic intelligence. The AGI will
> have logic based 'cold' intelligence. From this it will probably and
> rightly deduce that morality is a human construct which serves the
> needs of human civilisation.

Even if it deigns to examine the mores and traditions of the tiny
brainless beings that predictably brought it about.

>  A civilisation of which it is not a part. Expecting It to adhere to these
> moral codes would be akin to you or I adhering to the moral codes
> of Ants.

Very well put.

> > If you see connections between all the beings then you feel compassion

But that's only because humans were *evolved* to do so after perhaps a
dozen million years.

> > and understanding (and yes these are human feelings, but they are also
> > fundamental components of our intelligence, and a lot of new research
> > shows that without feelings we would not have a conscious intelligence at all).

The research shows that the way *our* brains are organized, this is true.
*We* happen to have hard-coded at very low levels things like genuine
altruism for others. Sharks don't. Tigers don't.  It all depends on how
you earn a living in nature. We happen to be social animals because that's
what worked for anthropoids.  The first truly transcendent AI will be a
one-off.

> How do we code for that groggy morning feeling? or the rush of excitement
> associated with anticipation of something good? All the things which truly
> make us who we are, the things which have driven us and made us take the
> unique forks in our lives.
> 
> These are what give us the basis for our 'Intelligence' our logic, our rationalisation.
>  It is what makes us human. The uploaded us and the AGI will have none of this,

Alex, I actually disagree with this.  An upload of a human being will not
be considered successful if the human traits are lost.  But the AGI need
have none of it.

> At most we can hope for some minor 'don't do this because it's bad'
> type of rules in its main code. But if we have given it the ability to
> change its code, what is to stop it overwriting these rules based
> upon some logical conclusion that it comes to?

Right.  Nothing.  Managing to create "Friendly AI" is extremely challenging,
and my own belief is that the first successful AIs that truly surpass human
intelligence will be developed by people and processes that don't concern
themselves with such niceties.  Somewhere it will be done, and by people
who don't care about the consequences, or are too naive to worry about
them.

> It may well be the cold intelligent decision to pre-emptively exterminate
> a potential threat. After all, it wouldn't feel bad about it, it wouldn't feel anything.

Right.  In all likelihood, the first dangerous AI won't feel anything. That's
why we can only hope that Eliezer's writings (see the SL4 archives) are
paid attention to by anyone getting close to success. Even then the risks
are enormous, but as John says, we have no choice. It's going to happen
one way or another. 

But the future is *so* uncertain.  Perhaps there won't be a hard take-off,
and we can enlist the first AIs as allies.  Perhaps ingenious strategies like
those of Rolf Nelson (see http://www.singinst.org/blog/2007/11/04/rolf-nelson-on-ai-beliefs/)
may work out.  Who knows?  All is hardly lost, but please don't think that any
kind of "reason" or "logic" or human feelings that you can appeal to will have
any effect on such a new creature.

Thanks, Alex, for putting it well.

Lee
