[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Thu May 24 15:22:44 UTC 2007


Stathis writes, among other very thoughtful and (so it seems to me)
correct analyses:

> On 24/05/07, Eugen Leitl <eugen at leitl.org> wrote:

> 1) can we construct such mentally defective artificial agents, before
>   we can build the other kind?

> I would have assumed that it is easier to build machines
> without emotions. I don't doubt that computers can have
> emotions, because the belief that the brain is a machine
> necessitates this. However, although I see much evidence
> of intelligence in computers even today, I don't see
> evidence of emotions.

I would suppose that emotions are as difficult to "stumble
upon" as intelligence or any other highly evolved purposeful
behavior. It would never happen as a mere by-product.

> This is a bit perplexing, because in the animal kingdom it
> doesn't take much intelligence to be able to experience an
> emotion as basic and crude as pain.

Well, there was (and is) the plant kingdom, millions of years
older than the animal kingdom, and plants show no sign of emotion.
Doesn't that go some way toward illustrating that emotions are
highly evolved?

> It should in theory be possible to write a program which does
> little more than experience pain when it is run,

I'm afraid so.

> perhaps in proportion to some input variable so that the
> programmer can then torture his creation.

The mere thought of that is what has prompted Eliezer and others
to want their Friendly AI to have total control and total oversight
over everyone and everything else, just to thwart such a horrible
possibility. (I don't agree with them on this, considering such
derangement to be too rare to bother with, but I understand the
sentiment!)
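
For concreteness, such a program would indeed be trivial to write.
Here is a minimal Python sketch (the file and variable names are my
own invention, and of course whether labeling a number "pain" makes
anything actually hurt is precisely the question in dispute):

    import sys

    # A toy "pain" program in the spirit of Stathis's example: it does
    # little more than set an internal register labeled "pain", in
    # proportion to an input variable the programmer controls.

    def run(intensity):
        pain = intensity              # the supposed experience, as a number
        print("pain level:", pain)

    if __name__ == "__main__":
        # The "programmer" chooses the intensity, e.g.: python pain.py 9.5
        run(float(sys.argv[1]))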

> Maybe such programs are already being accidentally implemented
> as subroutines in larger programs, and we just don't know it.

For the reason I gave above, it sounds impossible. It might be as
improbable as if we accidentally crafted a presently existing
computer program with the linguistic capability of writing as well
as Shakespeare did.

Lee


> 2) is a population of such agents a stable system?

Probably more stable than a population of machines which already know
what they want and are busily scheming and self-modifying to get it.
However, as you have argued before, there is always the possibility
that some individual in a population of tame AIs will spontaneously
turn rogue, and then lord it over all the other AIs and humans. On the
other hand, a rogue AI will not necessarily have any competitive
advantage in terms of intelligence or power compared to its tame
siblings.

> 3) is it a good idea?

I think the safest way to proceed is to create AIs with the motivation
of the disinterested scientist, interested only in solving
intellectual problems (a motivation not present in the example of the
schizophrenic). This would even be preferable to designing them to
love humans; many of the greatest monsters of history thought they
were doing the best thing for humanity.

-- 
Stathis Papaioannou
