[ExI] Unfriendly AI is a mistaken idea

Jef Allbright jef at jefallbright.net
Wed May 23 16:24:04 UTC 2007


On 5/23/07, Eugen Leitl <eugen at leitl.org> wrote:
> On Wed, May 23, 2007 at 07:38:31AM -0700, Jef Allbright wrote:
>
> > The other distraction, in my opinion, is the continued use of the
> > heavily loaded term "Friendliness" when the problem is actually much
> > more about sustainable systems of cooperation between highly
> > asymmetric agents (intentional systems.)
>
> I have no problems with language. I have only problems with
> the idea that you can make it work for iterated interactions of
> asymmetric agents.
>
> (I don't think this is feasible, but of course everyone is welcome
> to bloody their own nose on it).

I think the language is important to the extent that it obscures the
problem, and I know it obscures the problem when I observe people
affiliated with SIAI describing the problem in terms of the "nice"
characteristics of people we know as friendly.

I think that sustainable systems of cooperation between highly
asymmetric agents are practical within bounds, and have obvious
application to politics where the government is seen as acting on its
own behalf, to scenarios of guerrilla warfare against a
technologically more advanced opponent, and to scenarios where an
advanced singleton AI is potentially in competition against an
alliance of cooperating but somewhat less capable agents.
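[The point about iterated interaction between asymmetric agents is the standard "shadow of the future" argument from game theory. A toy sketch (my own illustration, with made-up payoff numbers; not part of the original thread): even when the stronger agent profits more from one-shot exploitation, sustained cooperation can dominate once interaction repeats and the weaker agent conditions its play on past behavior.]

```python
# Toy model of iterated interaction between two asymmetric agents.
# Hypothetical payoffs: the "strong" agent gains more from exploiting
# the "weak" one in a single round, but mutual cooperation pays better
# over many rounds. Numbers are illustrative only.

# (strong_payoff, weak_payoff) indexed by (strong_move, weak_move),
# where 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),   # sustained cooperation
    ('C', 'D'): (0, 4),
    ('D', 'C'): (6, 0),   # one-shot exploitation pays the strong agent best
    ('D', 'D'): (1, 1),   # mutual defection is poor for both
}

def play(strong_strategy, rounds):
    """Weak agent plays tit-for-tat; strong agent plays the given strategy.

    Returns the strong agent's cumulative payoff.
    """
    weak_move = 'C'          # tit-for-tat opens with cooperation
    strong_total = 0
    for _ in range(rounds):
        strong_move = strong_strategy(weak_move)
        s, _w = PAYOFFS[(strong_move, weak_move)]
        strong_total += s
        weak_move = strong_move  # tit-for-tat: copy the last move seen
    return strong_total

always_defect = lambda _seen: 'D'
always_cooperate = lambda _seen: 'C'

if __name__ == '__main__':
    rounds = 100
    exploit = play(always_defect, rounds)       # 6 once, then 1 per round
    cooperate = play(always_cooperate, rounds)  # 3 every round
    print(cooperate > exploit)  # → True: cooperation wins when play repeats
```

[The asymmetry shows up in the payoffs, not the rules: the strong agent's defection payoff (6) exceeds anything the weak agent can earn, yet retaliation by the weaker party still makes exploitation self-defeating over enough rounds; the bound Jef mentions is visible here as the number of rounds below which defection still pays.]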

Where you and I seem to differ is with regard to a practical ceiling
on the development of intelligence in an AI that has become starved
for novel interaction with its environment.

- Jef


