[ExI] Searle's review of Bostrom's book

Tim Tyler tim at tt1.org
Sun Sep 21 10:32:20 UTC 2014


On 20/09/2014 08:18, Bill Hibbard wrote:

> http://www.ssec.wisc.edu/~billh/g/searle_comment.pdf

Searle is completely confused (news at eleven).  However, Bill's
commentary says:

"Computers that can run the entire world economy and provide constant
  companionship to all humans will pose great danger to humans."

This is speculation. Computers powerful enough to run the entire
world economy would have the *potential* to pose great danger
to humans.

However, humans are already at risk. Unless we develop
superintelligent machines, sooner or later we'll all be obliterated
by one existential threat or another.

Superintelligent machines seem likely to *reduce*
the chance of that happening.  IMO, the sooner we develop
them, the safer we will be.

What are the alternatives? A big war that keeps us stuck
in the stone age? A luddite totalitarian government that
criminalizes research and doesn't perform any itself?
Are these alternatives *really* any safer?  Or would it
be fair to say that they, too, "pose great danger to humans"?
-- 
__________
  |im |yler http://timtyler.org/ tim at tt1lock.org Remove lock to reply.



