<div class="gmail_quote">On 11 February 2012 07:43, Keith Henson <span dir="ltr"><<a href="mailto:hkeithhenson@gmail.com">hkeithhenson@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> Maybe this isn't important. People went around the earth when it took
> years.

We may have already had this discussion before, but I think that if we take contemporary IT as a good-enough metaphor for intelligence in general, "hierarchical structure" is the answer to both "latency" and "scarce bandwidth".

I am not persuaded that there is any hard limit on acceptable latency, given that any computational node in any event communicates very quickly with its neighbouring nodes, no matter how "distant" it may be from an arbitrary "centre"; so the rationale for connecting a third node is not so different from that for adding the last node in a row of 10^10 of them.

This simply means that "long-distance calls" are reduced as much as possible in favour of local computation and data caching.

Take the contemporary scenario: at one extreme, the internal workings of the registers of a single processing unit; then the processor with its internal cache(s); then a (possibly multiprocessor) board with its RAM; then (virtual?) clusters thereof; and then perhaps a configuration such as Folding@home, where latency may already be measured in weeks, far higher than anything that would exist in an ideal, optimised star-sized sphere of computronium.
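
A minimal sketch in Python of why the hierarchy pays off, with purely hypothetical latency figures (LOCAL_LATENCY, LONG_LATENCY, and the group size are illustrative assumptions, not measurements): each group of nodes reduces its data locally, and only one summary per group makes the "long-distance call".

    # Compare a flat, star-shaped aggregation (every node calls the distant
    # "centre") with a hierarchical one (local reduction first, then one
    # long-distance call per group). All latency figures are assumptions.

    LOCAL_LATENCY = 1e-6   # seconds per neighbour-to-neighbour hop (assumed)
    LONG_LATENCY = 1e-1    # seconds per call to a distant centre (assumed)

    def flat_cost(n_nodes: int) -> float:
        """Every node makes its own long-distance call to the centre."""
        return n_nodes * LONG_LATENCY

    def hierarchical_cost(n_nodes: int, group_size: int) -> float:
        """Nodes reduce within local groups; only one summary per group
        travels the long distance."""
        n_groups = (n_nodes + group_size - 1) // group_size
        return n_nodes * LOCAL_LATENCY + n_groups * LONG_LATENCY

    if __name__ == "__main__":
        n = 10**6
        print(f"flat:         {flat_cost(n):>10.1f} s")           # 100000.0 s
        print(f"hierarchical: {hierarchical_cost(n, 1000):>10.1f} s")  # 101.0 s

With these made-up numbers, grouping a million nodes into local clusters of a thousand cuts the total communication cost by roughly three orders of magnitude, which is the whole point of keeping computation local and calls to the "centre" rare.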

But even in organic brains I suspect that most computation already takes place at a "local" level, with neurons firing neighbouring neurons within a limited area rather than involving the entire system, since the latter would pointlessly degrade the overall performance of the network.

-- 
Stefano Vaj