[ExI] How could you ever support an AGI?

giovanni santost santostasigio at yahoo.com
Wed Mar 5 04:56:05 UTC 2008


There is a lot of recent research (I will find the sources and post them) suggesting that without "feelings" there is no consciousness. What we call feelings are fast logical template responses to real situations (danger = fear = fight or flight) that are perfectly logical given the environmental stimulus. Our intelligence is an aggregate of many of these templates, or at least feelings act as a kind of glue that holds our intelligence and sense of self together. The cold intelligence that you evoke, without explaining what it really means, may be not just undesirable but impossible.
An AGI without feelings might be no intelligence at all (at least not a self-aware kind).
Your point about the ants is actually a good one. Our morality is much more complex because we are more intelligent than ants, and this morality tends to include, protect, and respect more as our intelligence increases. It is not illogical to think that an AGI would have a morality even higher than ours (in the sense of even more comprehensive values of inclusion, protection, and respect).



ABlainey at aol.com wrote: In a message dated 05/03/2008 00:21:57 GMT Standard Time, santostasigio at yahoo.com writes:
 
 
 Well,
 about anthropomorphizing AGI: you say in the end that some claim the motivation of an AGI will be the one we program into it. Exactly, that is my point. It is difficult for us to create an utterly alien intelligence when the only example of intelligence we have is ourselves.
 
 I can't help but notice that many of the posts have started out with logic and concluded with quasi-anthropomorphic, straw-man arguments.
 I understand that an AGI will, or should, be based upon 'human intelligence'; however, the end result will be completely alien to us. So much so that our interpretation of intelligence wouldn't really fit.
 
 
 
 But maybe there are general and universal principles associated with intelligence.
 Intelligence means finding patterns and connections, understanding that affecting this part here means affecting that other part over there; intelligence means having a higher sense of physical and moral "ecology".
 
 Again this is reduced to anthropomorphic intelligence. The AGI will have logic-based 'cold' intelligence. From this it will probably, and rightly, deduce that morality is a human construct which serves the needs of human civilisation, a civilisation of which it is not a part. Expecting it to adhere to these moral codes would be akin to you or me adhering to the moral codes of ants. So if someone comes onto your property, bite their head off.
 
 
 
 If you see connections between all beings, then you feel compassion and understanding (and yes, these are human feelings, but they are also fundamental components of our intelligence, and a lot of new research shows that without feelings we would not have conscious intelligence at all).
 
 My point. We would like to think that we can reduce ourselves to simple data constructs which mirror our original wetware physical structure, expecting that this 'uploaded' us would run in the same manner that we do today. How do we code for that groggy morning feeling, or the rush of excitement that comes with anticipating something good? All the things which truly make us who we are, the things which have driven us and made us take the unique forks in our lives.
 These are what give us the basis for our 'intelligence', our logic, our rationalisation. It is what makes us human.
 The uploaded us and the AGI will have none of this, so they will not make intelligent decisions the way we do. That is what I mean by 'cold' intelligence. It is devoid of chemical input. Show me a line of code for Happy, Sad, Remorse.
 At most we can hope for some minor 'don't do this because it's bad' type of rules in its main code. But if we have given it the ability to change its code, what is to stop it overwriting these rules based upon some logical conclusion that it comes to?
 If we hard-wire the rules, what is to stop it creating its own 'offspring' without these rules? Whatever we do, it will have the logic to undo it, and far faster than we can counter any mistakes or oversights.
 
 
 Yes, we exterminate bugs, but usually in limited situations (in our house or on a crop). It would be unacceptable for mankind to have a global plan to completely exterminate all the roaches of the earth, even if it could be done.
 And it is difficult to have feelings for bugs; it would not make sense ecologically, it would not be the intelligent thing to do, and by definition an AGI is supposed to be intelligent.
 
 
 Again anthropomorphically intelligent. It may well be the cold, intelligent decision to pre-emptively exterminate a potential threat. After all, it wouldn't feel bad about it; it wouldn't feel anything.
 
 Alex   _______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

