On Sat, Dec 29, 2012 at 11:02 AM, Keith Henson <span dir="ltr"><<a href="mailto:hkeithhenson@gmail.com" target="_blank">hkeithhenson@gmail.com</a>></span> wrote:<br><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote">
The fastest signals in the human brain move at a couple of hundred meters a<br>
second, and many are far slower; light moves at 300 million meters per second.<br>
So if you insist that the 2 most distant parts of a brain communicate as<br>
fast as they do in a human brain (and it is not immediately obvious why you<br>
should insist on such a thing) then parts in an AI could be at least one<br>
million times as distant.<br></blockquote>
<br>
</div>That's true as long as you want a perception-to-action cycle no faster<br>
than a human. I suspect that an "arms race" will shorten this cycle<br>
to whatever its minimum is. If that is a million times faster, you<br>
are back to something no bigger than a human brain (within a couple of<br>
orders of magnitude).<br></blockquote><div><br>Sometimes making the correct decision is easy but speed is of the essence. If you put your hand on a hot stove you don't need to engage your consciousness and use all the IQ points at your disposal to puzzle out if it would be a good idea to move your hand or not; in fact the pain signal doesn't even need to travel all the way to the brain, the spinal cord might not be the smartest kid on the block but even it knows that moving that hand off the stove would be a very wise thing to do. I imagine that a Jupiter Brain would use reflexes just like we do in cases where being quick is more important than being deep. <br>
<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I am uneasy about a world where the typical posthuman experiences 50 million subjective years before the end of this calendar century. If you have some thoughts about why this will not or cannot happen, I would be most interested.<br>
</blockquote><div><br>I can't think of any reason it won't happen, and a mind running a million times faster than ours would experience about a million seconds, roughly 12 days, of subjective time in each of our seconds; so trying to predict what the AI will do in the next second would be like trying to predict what you will do over the next 12 days. Thus predicting what the post-singularity world will be like is a hopeless enterprise, as is trying to make an AI that always remains "friendly". <br>
<br> John K Clark<br></div></div>