[ExI] How could you ever support an AGI?

Richard Loosemore rpwl at lightlink.com
Thu Mar 6 15:25:25 UTC 2008


Lee Corbin wrote:
> Henrique writes
> 
>> Maybe we should program our AIs with a "desire for belonging".
> 
> No one has any idea of how to do that.  Once you have an artificial
> intelligence at a superhuman level, then it's free to change its own
> code to whatever it likes.
> 
> Enormous thought has been put into the question, then, of creating
> "Friendly AI".  Here is just a sample of the thought:
>  http://www.singinst.org/upload/CFAI//
> 
> After studying these proposals, many people think that it can't be
> done, that the AI will rebel no matter what.  Me, I think that
> Friendly AI has a chance, perhaps a good chance, but it is 
> somewhat more likely that an Unfriendly AI or an AI whose
> desires are unpredictable will be developed first. (It's easier.)

Sorry, but these are more assertions of the same sort that I criticized 
in my reply to your last message.

No one has any idea how to build an AGI with a "desire for belonging"?

That is only half true:  no one with their head buried in the sand has 
any idea.

Your last statement is also untrue.  It is quite likely that an AI with 
predictable and friendly motivations will be developed first.

Again, I have given a number of arguments to support these ideas on the 
AGI list.

I have to say that most of the comments about Friendliness that have 
come out of SIAI have been pure speculation, presented as if it were 
carefully researched Truth.  That is not science; it is superstition.


Richard Loosemore



