[ExI] Unfriendly AI is a mistaken idea.

Lee Corbin lcorbin at rawbw.com
Tue May 29 04:53:37 UTC 2007


Stathis writes

> It would be the same if we had been born with the top level goal,
> "love and obey your master". No matter how well we understood
> it, how smart we became, we would be no more likely to try to
> overthrow it than we would be likely to overthrow our will to survive.

Right. And this is what we must *aim* for when working with AIs.
This OF COURSE does not mean that there are no risks, though some
would like to mischaracterize our position as claiming that it does.

In fact, I am hopeful that if there is a hard AI takeoff, then whichever
human agency gets the ball rolling will at least have the sense to try
to safeguard its own wellbeing, even if not mine.  But all I can say
to those working on AI is "Please hurry; others working on it may
not be as nice as you are."

> This does not mean that the top level goal could never be overthrown,
> because people do go mad and kill themselves, but it wouldn't be
> as a result *of* increased intelligence and understanding. 

Yes, at least it would not be a *direct* result.  Clearly, increased
capabilities do make for increased opportunities.

Lee


