[ExI] Nine Misunderstandings About AI

Richard Loosemore rpwl at lightlink.com
Wed Apr 9 01:04:42 UTC 2008


Damien Broderick wrote:
> At 03:01 PM 4/8/2008 -0400, Richard wrote:
> 
>> http://susaro.com/
> 
> I don't know what you're saying in this par:
> 
> <8.  It is often assumed that there will be large numbers of robots, 
> but they will all be controlled by different governments or 
> corporations, and used as instruments of power. The main argument 
> against this idea is that it would require an extremely unlikely 
> combination of circumstances for this kind of situation to become 
> established. The first artificial intelligence would have to be both 
> smart and designed to be aggressive, but this combination would be 
> almost impossible to pull off, even for a military organization. The 
> long version of the argument against this idea is too long to 
> summarize in one paragraph, but the bottom line is that even though 
> this seems like a reasonable and plausible possibility for the 
> future, it turns out to be deeply implausible when examined carefully. >
> 
> Perhaps you mean the idea that ONLY large entities, governmental and 
> corporate, would have AIs/bots, as is the case these days with 
> aircraft carriers and nuclear power stations. All others would be 
> illegal. If so, what has this to do with the first AIs being 
> *aggressive*? Designed for death-dealing?
> 
> Or are you arguing against a claim (perhaps akin to Asimov's 
> positronic brains with structured-in Laws) that there'll be many 
> robots but all necessarily of the same architecture--except now it 
> would be aggressive?

I was implicitly assuming that if there were independent AI systems 
across the globe, and if they were not free agents, but in some sense 
controlled, then two things would have to be true:

a) They would almost certainly have to be controlled by governments or 
corporations, because such organizations would not allow them out, and

b) Given what was said earlier, they could only be "controlled" if 
someone deliberately gave them a "loyal" motivation.

Under these circumstances, most people jump straight to the assumption 
that the most effective type of Samurai robot (which is what this would 
be, no?) would be one programmed to be, not just loyal, but as cunning 
and aggressive as possible, because in a competitive environment it's 
the Nice Robots that finish last.

I am trying to describe one meme-complex here, and in my experience this 
scenario is one that comes up a lot:  the governments and zaibatsus will 
own them, and these entities will duke it out by using the AIs as weapons.

My (summarized) argument against it is that, given all the other factors 
that make this unlikely, and given the fact that the only way to build a 
really powerful Samurai Robot is to allow it to understand its own 
design so it can bootstrap, we can expect that these folks will run into 
serious trouble (if they ever get that far):  the Samurai will know that 
bootstrapping plus aggression will lead to eventual destruction.  At that 
point, I believe that the result will be a spontaneous decision to 
remove the destabilizing motivations.

More on this in due course.


Richard Loosemore
