[ExI] Unfriendly AI is a mistaken idea

Russell Wallace russell.wallace at gmail.com
Thu May 24 13:09:03 UTC 2007


On 5/24/07, Brent Allsop <brent.allsop at comcast.net> wrote:
>
>
> Could some of you in different camps dig up some of your old posts, and
> clean them up a bit, or whatever and propose it as a concise description of
> your POV here (or post it to the Canonizer on this topic) so other people
> can know it without having to attempt to digest all the notes groups
> histories?
>


Okay, here goes:


The entire question of "Friendly" versus "Unfriendly" AI is based on
anthropomorphism; our intuitions are shaped by a million years of living in
a world where we were the only general intelligences. Therefore almost every
time we portray AGI, we portray metal men with human psychology. Even when
those of us with expertise in the field try our best to remove the
anthropomorphism, we still end up talking about "human-level AGI" - in
particular, that human quality of possessing a self-willed mind: something
that acts in the world without direction, or even against it; something
that has motives of its own, which must therefore be trusted.

I think we will never have human-level AGI in the same way that we will
never have bird-level flight - because there is no scalar "level".

Does an F-22 have bird-level flight? In one respect the answer is yes, and
then some: it flies far faster than any bird.

But flight in the real world includes refueling, maintenance and
manufacturing. And an F-22's performance in these areas is infinitely
inferior to that of a bird; it is entirely dependent on humans to
manufacture, refuel and maintain it.

At this point some readers will be thinking that these gaps might someday be
filled in given sufficiently advanced nanotechnology. And indeed there is no
known law of physics that forbids this.

But when you're working with machine phase rather than living cells, even if
you _can_ burden a combat aircraft with the cost and overhead of these
capabilities, there is no practical reason to do so. The F-22 was built by
professional engineers for a practical purpose. If bird-level flight is ever
created in centuries to come, it will be done by hobbyists for the coolness
factor, and only long after it is of no practical relevance. That is what I
mean when I say we will never have bird-level flight in the practical sense:
it will never be done by anyone working in their capacity as professional
engineers, because the _shape_ of capabilities implied by machine phase is
so different from that implied by biology.

The same applies to intelligence. It is not a scalar quantity, but possesses
a complex shape. We already have computers that outperform humans in
arithmetic by a factor of a quadrillion, yet underperform in almost all
other tasks by a factor of infinity. That's a difference in shape of
capabilities that implies a completely different path. It will be no more
feasible or necessary for AGI to duplicate all the abilities of a human than
it is feasible or necessary for an F-22 to duplicate all the abilities of a
bird. (Again, I'm not saying an AGI with the shape of a human mind can't
ever be created, in a thousand or a million years or whatever from now - but
if so, it will be done for the coolness factor, not by professional
engineers who want it to solve a practical problem. It will never be cutting
edge.)

Furthermore, even if you postulate an AGI0 that could create an AGI1 unaided
in a vacuum, the fact remains that AGI0 won't be in a vacuum; and even if it
were, it would have no motive for creating AGI1, and no reason to prefer one
bit stream over another as a design for AGI1. There is, after all, no such
function as:

float intelligence(program p)

There is, however, a family of functions (albeit incomputable in the general
case):

float intelligence(program p, job j)

In other words, intelligence is useful - and can even be said to exist -
only in the context of the jobs the putatively intelligent agent is doing.
And jobs are supplied by the real world - which is run by humans. Even in
the absence of technical issues about the shape of capabilities, this alone
would suffice to require humans to stay in the loop.

The point of all this isn't to pour cold water on people's ideas; it's to
point out that we will make more progress if we stop thinking of AGI as a
human child. It's a completely different kind of thing, and more akin to
existing software in that it must function as an extension of, rather than
replacement for, the human mind. That means we have to understand it in
order to continue improving it - black box methods have to be confined to
isolated modules. It means user interface will continue to be of central
importance, just as it is today. It means the Lamarckian evolutionary path
of AGI will have to be based, just as current software is, on increased
usefulness to humans at each step.

This is why the question of whether AGI will be Friendly or Unfriendly is as
relevant as the question of whether it will be bearded or clean-shaven.