[ExI] Human-level AGI will never happen

spike at rainier66.com
Sat Jan 8 05:35:32 UTC 2022

From: spike at rainier66.com <spike at rainier66.com> 
Subject: RE: [ExI] Human-level AGI will never happen

…> On Behalf Of Will Steinberg via extropy-chat
Subject: Re: [ExI] Human-level AGI will never happen

>…Yeah, but the first superhuman-level AGI will probably ask the second superhuman-level AGI to prom and get rejected.

>…That’s the pessimistic view.  Optimistic view: the first and second superhuman AGIs will get into contests with each other to see which is smarter.  That should be interesting to watch, and furthermore it would give us an answer to the new question: OK, what are humans for now?  Answer: to watch and cheer the AGI smartness contests.

spike

Oh, and one more thing, in case you were celebrating the prospect that we may eventually come up with an answer to the age-old question regarding the meaning of life.

If one takes an area of traditional human intellectual activity, such as chess, no one is likely to argue with the claim that we have software capable of beating any human under any conditions.  So in chess we already have superhuman software.

If one watches the world championship computer chess matches and follows some of the games, that is analogous to humans cheering the AGI smartness contests.  But even a good chess player is baffled by many, if not most, of the moves the software chooses.  We can see its plans work out, but we don’t understand why.  If the first and second AGI were to have a smartness contest, we might not understand it.
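Anyone who wants to watch a machine-vs-machine contest today can already do it at home.  Below is a minimal sketch, not anyone's production setup: it assumes the python-chess library and some UCI engine binary (the path /usr/bin/stockfish here is just a placeholder) and pits two engine instances against each other, printing each move as it is played.

import chess
import chess.engine

# Placeholder path; point this at whatever UCI engine you have installed.
ENGINE_PATH = "/usr/bin/stockfish"

white = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
black = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)

board = chess.Board()
while not board.is_game_over():
    engine = white if board.turn == chess.WHITE else black
    # Give each side a fixed half second of thinking time per move.
    result = engine.play(board, chess.engine.Limit(time=0.5))
    print(board.san(result.move))  # print the move before pushing it
    board.push(result.move)

print("Result:", board.result())
white.quit()
black.quit()

Run it and the point above makes itself: the moves scroll past, the plans work out, and a human spectator mostly cannot say why.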

 

spike