[ExI] related replies

Dave Sill sparge at gmail.com
Mon Mar 1 17:39:18 UTC 2010


On Sun, Feb 28, 2010 at 7:46 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> On 1 March 2010 06:39, Keith Henson <hkeithhenson at gmail.com> wrote:
>>
>> This way is *exceedingly* dangerous unless we deeply understand the
>> biological mechanisms of human behavior, particularly behaviors such
>> as capture-bonding which are turned on by behavioral switches (due to
>> external situations).  A powerful human-type AI in "war mode,"
>> irrational and controlling millions of robot fighting machines, is
>> not something you want to happen.
>
> I don't think a mind upload would be dangerous initially because it
> would not have either superintelligence or the ability to self-modify.
> However, researchers could then go on and add these qualities as a
> further project much more easily than they could to a biological
> brain, and that is when it might get dangerous.

It also shouldn't be given direct control of anything potentially
dangerous, certainly not millions of robot fighting machines.

-Dave


