Was that the "I love you" virus?

<DIV>Gina "Nanogirl" Miller<BR>Nanotechnology Industries<BR><A
href="http://www.nanoindustries.com">http://www.nanoindustries.com</A><BR>Personal:
<A
href="http://www.nanogirl.com/index2.html">http://www.nanogirl.com/index2.html</A><BR>Foresight
Senior Associate <A
href="http://www.foresight.org">http://www.foresight.org</A><BR>Nanotechnology
Advisor Extropy Institute <A
href="http://www.extropy.org">http://www.extropy.org</A><BR>My New Project:
Microscope Jewelry<BR><A
href="http://www.nanogirl.com/crafts/microjewelry.htm">http://www.nanogirl.com/crafts/microjewelry.htm</A><BR>Email:
<A
href="mailto:nanogirl@halcyon.com">nanogirl@halcyon.com</A><BR>"Nanotechnology:
Solutions for the future."<BR></DIV>
----- Original Message -----
From: Keith Henson <hkhenson@rogers.com>
To: ExI chat list <extropy-chat@lists.extropy.org>
Sent: Friday, December 03, 2004 8:11 PM
Subject: Re: [extropy-chat] The emergence of AI

At 08:31 PM 03/12/04 -0500, Eliezer wrote:
>Hal Finney wrote:

snip

>>Are you serious? 30 seconds, once the AI reaches human level? What on
>>earth could yet another human-level contributor to the team accomplish
>>in that time?
>
>"Human level" is an idiosyncratic flavor. I am talking about an AI that
>is passing through "roughly humanish breadth of generally applicable
>intelligence if not to humanish things". At this time I expect the AI to
>be smack dab in the middle of the hard takeoff, and already writing code
>at AI timescales.

I have a strong emotional inclination to support Hal's view rather than
Eliezer's apocalyptic view.

However . . . .

About two years ago there was a virus (I forget the name) that infected
something like 75,000 vulnerable Microsoft database machines connected to
the Internet. It made a mess of the net for a few days until the machines
were taken offline and patched.

What is important about this episode, and what may make it an instructive
real-world example for AI timing, is that the number of infected machines
was later found, from the records, to have had a doubling time of 8.5 plus
or minus one second. People just can't react to a threat that comes "out
of the blue" this fast.
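
A quick back-of-the-envelope sketch of what that doubling time implies,
assuming pure exponential growth from a single infected host and taking the
75,000-host and 8.5-second figures cited above as given (the real spread
would eventually slow as bandwidth saturates and the vulnerable pool runs
out):

import math

# Sketch only: starting from one infected host, how long does pure
# exponential growth take to reach roughly 75,000 hosts if the infected
# population doubles every 8.5 seconds?
DOUBLING_TIME_S = 8.5   # doubling time cited above (assumed exact here)
TARGET_HOSTS = 75_000   # approximate number of vulnerable machines hit

doublings = math.log2(TARGET_HOSTS)       # about 16.2 doublings
elapsed_s = doublings * DOUBLING_TIME_S   # about 138 seconds

print(f"{doublings:.1f} doublings -> ~{elapsed_s:.0f} s "
      f"(~{elapsed_s / 60:.1f} minutes)")

Under those assumptions, that is a bit over two minutes from the first
infected host to essentially the whole vulnerable population.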

If an AI were smart enough to write a virus to take over machines, and its
intelligence were proportional to "cortical area" (processor cycles), and
it "wanted" to get smart fast . . . .

:-(

Keith Henson

_______________________________________________
extropy-chat mailing list
extropy-chat@lists.extropy.org
http://lists.extropy.org/mailman/listinfo/extropy-chat