[ExI] Unfriendly AI is a mistaken idea

Lee Corbin lcorbin at rawbw.com
Sun May 27 22:35:13 UTC 2007


Brent writes


> Could we say that one of the points of contention seems to do with
> the motivation of future AI?  Russell, you believe that for the foreseeable
> future tools will just do what we program them to, and they will not
> be motivated, right?  Whereas apparently Samantha and I believe
> tools will be increasingly motivated, and thereby share this difference
> in our beliefs with you?

This is just like the discussion that John Clark and I are having. I doubt
you'll get anyone to subscribe to the notion that AIs "will just do what
we program them to".  It's trickier than that.  I would bet that the
hard AI camp (to which I belong, and which is pretty common) would endorse
this statement:  "It is not possible to forecast what an AI will do, but there
are some behaviors that are much, much more probable than others,
given even a scant knowledge of the AI's history."  For a crude example,
an AI that was a descendant of many war-waging AIs would be less
likely to become a pacifist than one that was not.

Lee

P.S.  Ah yes, Brent, it seems to me that there may be so many different
POVs that they might as well be continuous.  But I hope I am wrong,
and wish you success.  Actually, it looks better than I thought it would.
