[ExI] related replies

Stathis Papaioannou stathisp at gmail.com
Mon Mar 1 00:46:37 UTC 2010


On 1 March 2010 06:39, Keith Henson <hkeithhenson at gmail.com> wrote:
> On Sun, Feb 28, 2010 at 5:00 AM,  Stathis Papaioannou wrote:
>
>> There is no requirement on AI researchers to make an AI by following
>> the structure and function of the brain. However, if they do do it
>> this way, as mind uploading researchers would, the resulting AI will
>> both behave like a human and have the consciousness of a human.
>
> This way is *exceedingly* dangerous unless we deeply understand the
> biological mechanisms of human behavior, particularly behaviors such
> as capture-bonding which are turned on by behavioral switches (due to
> external situations).  A powerful human-type AI in "war mode,"
> irrational and controlling millions of robot fighting machines, is not
> something you want to happen.
>
> Humans do go irrational (for reasons firmly rooted in our evolutionary
> past).  See the descriptions of the mental state the machete killers
> in Rwanda were in while they killed close to a million people.

I don't think a mind upload would be dangerous initially, because it
would have neither superintelligence nor the ability to self-modify.
However, researchers could then go on to add these qualities as a
further project, much more easily than they could with a biological
brain, and that is when it might get dangerous.


-- 
Stathis Papaioannou
