[ExI] What might be enough for a friendly AI?

John Clark jonkc at bellsouth.net
Thu Nov 18 18:30:56 UTC 2010


On Nov 16, 2010, at 11:53 PM, Keith Henson wrote:

> we have the ability to "look in the back of the book" given that humans exhibit intelligence.

Yes, and that fact is of enormous importance: we don't need to understand how an intelligent machine works to build one. That really shouldn't be surprising; Evolution's understanding of how intelligent machines work was even poorer than ours, but it managed to build one nevertheless, although it must be admitted it took a long time. It's great that we have the teacher's edition of the textbook that contains all the answers; that should save us loads of time.
>  
> (Sometimes I wonder.)  I don't think the problem is as difficult at the hardware level as
> people have been thinking.  

I too have had that suspicion; look at ravens: they seem at least as intelligent as chimps, but their brains are tiny.

> Eventually--if we can do even as well as nature did--a human level AI should run on 20 watts.

Nanotechnology should be able to do dramatically better than that, as it is not limited to the materials and manufacturing processes that life uses. And given the colossal amount of energy a Jupiter Brain would have at its disposal, it would have a godlike intellect, unless positive feedback doomed it to an eternity of drug-induced happy navel gazing.
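For scale, a back-of-envelope sketch in Python (the solar-luminosity figure is a standard value I'm supplying, and whether any structure could tap a star's whole output is an assumption made purely to show the order of magnitude):

SOLAR_LUMINOSITY_W = 3.8e26  # total radiated power of the Sun, watts (standard value, my addition)
BRAIN_POWER_W = 20.0         # Keith's biological power budget per mind

# Number of 20-watt human-equivalent minds a full stellar output could run
mind_equivalents = SOLAR_LUMINOSITY_W / BRAIN_POWER_W
print(f"~{mind_equivalents:.0e} twenty-watt minds")  # ~2e25

Even before nanotechnology beats the 20-watt figure, the energy side alone is some twenty-five orders of magnitude of headroom.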
> 
> As far as the aspect of making AIs friendly, that may not be so hard either.

When people talk about friendly AI they're not really talking about a friend, they're talking about a slave, and the idea that you can permanently enslave something astronomically smarter than yourself is nuts.

> That seems to me to be a decent meta goal for an AI.

Human beings have no absolute static meta-goal, not even the goal of self-preservation, and there are excellent reasons to think no intelligent entity could. Turing proved that in general there is no way to know if you are in an infinite loop or not, and an inflexible meta-goal would be an infinite loop magnet. Real minds don't have that problem: when they work on a problem or a task for a long time and make no progress, they just say fuck it and move on to another problem that might not keep them in a rut. So there is no way Asimov's 3 laws would work in the real world.
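A minimal sketch of both halves of that point in Python (every name here is hypothetical, for illustration only):

import time

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing's diagonal argument shows no such total function can exist."""
    raise NotImplementedError("provably impossible in general")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about a program
    # fed its own source. halts(diagonal, diagonal) is then wrong
    # either way, so the oracle cannot exist.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately

def work_on(problem, budget_seconds=60.0):
    """What a real mind does instead: bound the effort and move on.
    `problem.step()` and `problem.solved` are assumed names."""
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        problem.step()
        if problem.solved:
            return True
    return False  # say fuck it and go try a different problem

An entity hard-wired to one inflexible top goal can never take the work_on exit, which is exactly the infinite-loop magnet above.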

 John K Clark


