[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Sat Jun 2 05:44:24 UTC 2007

On 02/06/07, Christopher Healey <CHealey at unicom-inc.com> wrote:
> > Stathis Papaioannou wrote:
> >
> > We don't have human level AI, but we have lots of dumb AI. In
> > nature, dumb organisms are no less inclined to try to take over
> > than smarter organisms
> Yes, but motivation and competence are not the same thing.  Considering
> two organisms that are equivalent in functional capability, varying only
> intelligence level, the smarter ones succeed more often. However, within
> a small range of intelligence variation, other factors contribute to
> one's aggregate ability to execute those better plans.  So if I'm a
> smart chimpanzee, but I'm physically weak, following particular courses
> of action that may be more optimal in general carries greater risk.
> Adjusting for that risk may actually leave me with a smaller range of
> options than if I was physically stronger and a bit less smart.  But
> when intelligence differential is large, those other factors become very
> small indeed.  Humans don't worry about chimpanzee politics (no jokes
> here please :o) because our only salient competition is other humans.
> We worry about those entities that possess an intelligence that is at
> least in the same range as our own.

We worry about viruses and bacteria, and they're not very smart. We worry
about giant meteorites that might be heading our way, and they're even
dumber than viruses and bacteria.

> Smart chimpanzees are not going to take over our civilization anytime
> soon, but a smarter and otherwise well-adapted chimp will probably be
> inclined and succeed in leading its band of peers.

All else being equal, which is not generally the case.

> > (and no less capable of succeeding, as a
> > general rule, but leave that point for the sake of argument).
> I don't want to leave it, because this is a critical point.  As I
> mentioned above, in nature you rarely see intelligence considered as an
> isolated variable, and in evolution, intelligence is the product of a
> red queen race.  By definition (of a red queen race), your
> intelligence isn't going to be radically different from your direct
> competition, or the race would never have started or escalated.  So it
> confusingly might not look like your chances of beating "the Whiz on
> the block" are that disproportionate, but the context is so narrow that
> other factors can overwhelm the effect of intelligence over that limited
> range.  In some sense, our experiential day-to-day understanding of
> intelligence (other humans) biases us to consider its effects over too
> narrow a range of values.  As a general rule, I'd say humans have been
> very much more successful at "taking over" than chimpanzees and salmon,
> and that it is primarily due to our superior intelligence.

Single-celled organisms are even more successful than humans are: they're
everywhere, and for the most part we don't even notice them. Intelligence,
particularly human level intelligence, is just a fluke, like the giraffe's
neck. If it were specially adaptive, why didn't it evolve independently many
times, like various sense organs have? Why don't we see evidence of it
having taken over the universe? We would have to be extraordinarily lucky if
intelligence had some special role in evolution and we happen to be the
first example of it. It's not impossible, but the evidence would suggest
otherwise.

> > Given that dumb AI doesn't try to take over, why should smart AI
> > be more inclined to do so?
> I don't think a smart AI would be more inclined to try and take over, a
> priori.

That's an important point. Some people on this list seem to think that an AI
would compute the unfairness of its not being in charge and do something
about it - as if unfairness is something that can be formalised in a
mathematical theorem.

> > And why should that segment of smart
> > AI which might try to do so, whether spontaneously or by malicious
> > design, be more successful than all the other AI, which maintains
> > its ancestral motivation to work and improve itself for humans
> The consideration that also needs to be addressed is that the AI may
> maintain its "motivation to work and improve itself for humans", and due
> to this motivation, take over (in some sense at least).  In fact, it has
> been argued by others here (and I tend to agree) that an AGI
> *consistently* pursuing such benign directives must intercede where its
> causal understanding of certain outcomes passes a minimum assurance
> level (which would likely vary based on probability and magnitude of the
> outcome).

I'd feel uncomfortable about an AI that had any feelings or motivations of
its own, even if they were positive ones about humans, especially if it had
the ability to act rather than just advise. It might decide that it had to
keep me locked up for my own good, for example, even though I don't want to
be locked up. I'd feel much safer around an AI which informs me that, using
its greatly superior intelligence, it has determined that I am less likely
to be run over if I never leave home, but what I do with this advice is a
matter of complete indifference to it. So although through accident or
design an AI with motivations and feelings might arise, I think by far the
safest ones, and the ones likely to sell better, will be those with the
minimal motivation set of the disinterested scientist, concerned only with
solving intellectual problems.

Stathis Papaioannou
