[ExI] Nine Misunderstandings About AI

Damien Broderick thespike at satx.rr.com
Tue Apr 8 19:55:17 UTC 2008

At 03:01 PM 4/8/2008 -0400, Richard wrote:
I don't know what you're saying in this par:

<8.  It is often assumed that there will be large numbers of robots, 
but they will all be controlled by different governments or 
corporations, and used as instruments of power. The main argument 
against this idea is that it would require an extremely unlikely 
combination of circumstances for this kind of situation to become 
established. The first artificial intelligence would have to be both 
smart and designed to be aggressive, but this combination would be 
almost impossible to pull off, even for a military organization. The 
long version of the argument against this idea is too long to 
summarize in one paragraph, but the bottom line is that even though 
this seems like a reasonable and plausible possibility for the 
future, it turns out to be deeply implausible when examined carefully. >

Perhaps you mean the idea that ONLY large entities, governmental and 
corporate, would have AIs/bots, as is the case these days with 
aircraft carriers and nuclear power stations. All others would be 
illegal. If so, what has this to do with the first AIs being 
*aggressive*? Designed for death-dealing?

Or are you arguing against a claim (perhaps akin to Asimov's 
positronic brains with structured-in Laws) that there'll be many 
robots but all necessarily of the same architecture--except now it 
would be aggressive?

Damien Broderick 
