[ExI] related replies
Christopher Luebcke
cluebcke at yahoo.com
Mon Mar 1 18:43:02 UTC 2010
I think y'all are confusing "dangerous" with "fun". Honestly, if controlling millions of robot fighting machines with my lightning-fast, heavily augmented uploaded mind isn't part of the program, then what's the point?
________________________________
From: Dave Sill <sparge at gmail.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Mon, March 1, 2010 9:39:18 AM
Subject: Re: [ExI] related replies
On Sun, Feb 28, 2010 at 7:46 PM, Stathis Papaioannou <stathisp at gmail.com> wrote:
> On 1 March 2010 06:39, Keith Henson <hkeithhenson at gmail.com> wrote:
>>
>> This way is *exceedingly* dangerous unless we deeply understand the
>> biological mechanisms of human behavior, particularly behaviors such
>> as capture-bonding which are turned on by behavioral switches (due to
>> external situations). A powerful human-type AI in "war mode,"
>> irrational and controlling millions of robot fighting machines, is not
>> something you want to happen.
>
> I don't think a mind upload would be dangerous initially because it
> would not have either superintelligence or the ability to self-modify.
> However, researchers could then go on and add these qualities as a
> further project much more easily than they could to a biological
> brain, and that is when it might get dangerous.
It also shouldn't be given direct control of anything potentially
dangerous, certainly not millions of robot fighting machines.
-Dave
_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat