[ExI] Unfriendly AI is a mistaken idea

Jef Allbright jef at jefallbright.net
Wed May 23 14:38:31 UTC 2007


On 5/23/07, Richard Loosemore <rpwl at lightlink.com> wrote:

> The reason this position is in between yours and the SIAI one is that I
> do believe that the SIAI insistence on provability is an obstruction:
> they will be trying to do this forever, without succeeding, and if the
> rest of us sit around waiting for them to realize that they can never
> succeed, we will waste a ridiculous amount of time.

I think the insistence on provable friendliness was an artifact of a
younger Eliezer's thinking, and that he has very probably since
realized that friendliness in the strong mathematical sense is
impossible, while friendliness in an effective sense is "merely" very
difficult.

It's interesting that the red herring remains; it can serve various
useful purposes while real work proceeds.

The other distraction, in my opinion, is the continued use of the
heavily loaded term "Friendliness" when the problem is actually much
more about sustainable systems of cooperation between highly
asymmetric agents (intentional systems).

The implications of such research apply much more broadly than to the
ostensible threat of "totalitarian AI" and for that reason I strongly
support development of this thinking.

FWIW,

- Jef
