[extropy-chat] Eugen Leitl on Singularity strategy
Michael Anissimov
michael at acceleratingfuture.com
Wed Jun 2 15:41:29 UTC 2004
Eugen Leitl wrote:
>If you want to stick to security metaphors, fighting a worm with a
>counterworm is a classical-textbook Bad Idea. A better approach would be to
>build a worm-proof environment.
>
Doesn't this entail massive global restrictions on human intelligence
enhancement, computing power, brain-computer interfaces, cognitive
science, etc.? Doesn't it sacrifice faster-than-human intelligence of
any sort as well? The only way to reliably prevent smartness-based
autofeedback is to enforce tremendous constraints upon humanity.
--
Michael Anissimov http://www.singinst.org/
Advocacy Director, Singularity Institute for Artificial Intelligence
--
Subscribe to our free eBulletin for research and community news:
http://www.singinst.org/news/subscribe.html