[ExI] stealth singularity
ablainey at aol.com
Mon Sep 20 21:14:33 UTC 2010
I don't agree. In a wetware model, yes, benevolence does seem to correlate with higher intelligence up to a limit, but not always. Take autism or Asperger's, for example. Very high or genius-level intelligence can go hand in hand with the exact opposite of your statement. In extreme cases there is a complete lack of any benevolence.
Also, the interpersonal understanding and empathy needed to both understand and apply benevolence stem from the chemical brain rather than the physical structure or the data it contains. Remove the 'feelings' created by these chemical interactions and the word benevolence has no meaning. To an unfeeling AI, harvesting raw materials by ripping your body apart while registering the audible screams will be no different from digging up minerals and registering the scraping sound of a bulldozer shovel. If you want a human comparison, you need only look at a psychopathic killer who does it for some minimal personal gain while destroying the lives of others.
For an idea closer to home, you only have to look at small children and their lack of empathy: burning ants with a magnifying glass, or something equally cruel. When they are told to stop, the first thing they say is 'Why?'. Even when they do have feelings, they don't empathise with others until taught to do so. That teaching is based upon them actually having and understanding feelings.
Then you have the issue of morality itself, which is totally subjective and anthropomorphic. Our morality, and by extension our legal system, is mainly based upon natural law. It is wrong to kill, steal etc. because in a natural human world these things are negative for society. A different society has different and often contradictory laws. An AI with no society membership, no chemical 'feelings' or empathy, probably not even a sense of self-preservation, will have no benevolence and will probably be incapable of understanding the concept.
Our only hope is that it will learn the concept from us and somehow see it either as a positive thing or as something that it will abide by.
When humans have been confronted by other intelligences that do not share our morals or have any empathy for us, we can only resort to violence. Take a big cat, for example. It may have been raised with love and affection which it can relate to, but once it reaches a certain age and size, it is ultimately force and fear that stop it eating or killing you.
How can we make an AI understand or agree to be benevolent? If it doesn't, how can we stop it?
Perhaps it will get to a developmental stage where it occurs naturally, but there is a very real chance that we might be destroyed long before that.
A
-----Original Message-----
From: Brent Allsop <brent.allsop at canonizer.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Mon, 20 Sep 2010 18:09
Subject: Re: [ExI] stealth singularity
Spike,
I believe that benevolence is absolutely proportional to intelligence.
I believe that intelligence hiding from us is absolutely evil.
Therefore, your proposed possibility suffers from the same problem-of-evil issue that any other proposed possibility of a hiding super-intelligent being / god / ET suffers from.
So, I would tend not to even entertain such an unlikely possibility, unless there was profound evidence to support it.
I am an atheist, and that atheism includes not just the hope that there is not yet a God, but the hope that there are not yet any intelligent beings of any kind that are hiding from us.
For if there is any such being, then there is no hope, and we will be condemned to the reality that even if we become as powerful as they are, we will also likely not be able to overcome the evil of having to hide from others, as they allegedly are hiding from us.
Brent Allsop