[ExI] What might be enough for a friendly AI?

John Clark jonkc at bellsouth.net
Fri Nov 19 17:10:25 UTC 2010


On Nov 18, 2010, at 8:18 PM, Keith Henson wrote:

> Genes build motivations into people

Genes may try to motivate people, but they often fail because genes are stupid; hence the invention of celibacy and condoms. And I think people sometimes (I'm not accusing you of this) confuse Freud with Mendel: genes are selfish, but that does not prove that deep down in our subconscious we must be selfish too.

> to be well regarded by their peers.  That seems to me to be a decent meta goal for an AI.

Perhaps, but it's irrelevant: for a Jupiter Brain, a peer would not be a human being.
> Human genes (like *all* genes) do have a static meta goal, that of
> continuing to exist in future generations.

But a gene is not an intelligent entity, and no intelligence could function with a static meta goal; so imprinting "always obey human beings no matter what" on a smart robot will not work.
> I don't think that striving to be well regarded is an inflexible meta
> goal.  I think it would keep an AI from turning into a psychopathic killer.  

You pointed out that only about 20 watts is needed to power the human brain, and I doubt you would dispute that nanotechnology could almost certainly do much better than that, or that by then vast amounts of energy would be available. So from one point of view a psychopathic killing spree may be no more controversial than cleaning a dirty surface with some Lysol disinfectant. You and I don't have that viewpoint, or anything close to it, but I doubt a Jupiter Brain would be much interested in our opinion.

 John K Clark 



