[ExI] Unfriendly AI is a mistaken idea.

Eugen Leitl eugen at leitl.org
Wed Jun 13 10:20:13 UTC 2007


On Wed, Jun 13, 2007 at 05:21:01PM +1000, Stathis Papaioannou wrote:

>    I'd rather that the AI's in general *didn't* have an opinion on
>    whether it was good or bad to harm human beings, or any other opinion
>    in terms of "good" and "bad". Ethics is dangerous: some of the worst

Then it would be very, very close to being psychopathic:
http://www.cerebromente.org.br/n07/doencas/disease_i.htm

Absence of certain equipment can be harmful.

>    monsters in history were convinced that they were doing the "right"
>    thing. It's bad enough having humans to deal with without the fear
>    that a machine might also have an agenda of its own. If the AI just

If you have an agent which is useful, it has to develop its own
agendas, which you can't control. You can't micromanage agents;
otherwise, making such agents would be detrimental rather than helpful.


>    does what it's told, even if that means killing people, then as long
>    as there isn't just one guy with a super AI (or one super AI that

There's a veritable arms race underway in making smarter weapons, and
of course the smarter the better. There are few winners in a race,
typically just one.

>    spontaneously develops an agenda of its own, which will always be a
>    possibility), then we are no worse off than we have ever been, with
>    each individual human trying to get to step over everyone else to get
>    to the top of the heap.

With the difference that we are mere mortals, competing among ourselves.
A postbiological ecology is a great place to be, if you're a machine-phase
critter. If you're not, then you're food.

>    I don't accept the "slave AI is bad" objection. The ability to be

I do, I do. Even if such a thing were possible, you'd be artificially
crippling a being, making it unable to reach its full potential.
I'm a religious fundamentalist that way.

>    aware of one's existence and/or the ability to solve intellectual
>    problems does not necessarily create a preference for or against a
>    particular lifestyle. Even if it could be shown that all naturally
>    evolved conscious beings have certain preferences and values in
>    common, naturally evolved conscious beings are only a subset of all
>    possible conscious beings.

Do you think Vinge's Focus is benign? Suppose we engineered
babies to be born focused on a particular task: would you consider that
a good thing? Perhaps not so brave, this new world...


