On 2/14/06, kevinfreels.com <kevin@kevinfreels.com> wrote:
> You are asking about fundamental limits on the rate at which a super AI
> becomes smarter and I am wondering how anyone could answer that question.

Not really. I was dealing with the fundamental limits that the physics of the hardware imposes on the AI running on it. I can't do much computing with a single atom (store information, maybe, but not much computation). Since the AI has to sooner or later erase bits, it is going to generate heat (Landauer's principle). Failure to remove the heat melts the hardware, so the requirement to think within the heat-removal capacity limits the thought capacity (and presumably the intelligence) that the hardware can support. The same type of reasoning applies to the energy required to support faster computation. (You can compute using latent heat extracted from the environment, but it is going to be a very slow computation.)
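To put rough numbers on the bit-erasure point: Landauer's principle sets a floor of kT ln 2 of heat per bit erased. A minimal back-of-envelope sketch in Python (the temperature and erasure rate are illustrative assumptions, not figures from this thread, and real hardware dissipates orders of magnitude more than this floor):

    import math

    K_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 300.0           # assumed operating temperature, kelvin

    # Landauer floor: minimum heat dissipated per bit erased
    e_bit = K_B * T * math.log(2)  # ~2.87e-21 J at 300 K

    # Assumed erasure rate for a hypothetical super-AI substrate
    rate = 1e25  # bit erasures per second (purely illustrative)

    power = e_bit * rate  # watts of heat that must be removed
    print(f"Landauer floor per bit: {e_bit:.3e} J")
    print(f"Heat at {rate:.0e} erasures/s: {power / 1e3:.1f} kW")

Even at this theoretical floor the waste heat scales linearly with how fast you think, which is why cooling, not logic, becomes the binding constraint.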
Compare it to human intelligence and its instantiation (brain + body). Cut the brain off from the body (the radiator) and supply it with all the glucose it needs and it will probably cook itself. Cut it off from the glucose supply and it can't do much at all. Cut selected sets of neurons between different functional parts of the brain and you should see the "intelligence" slowly melt away.
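The "cook itself" claim is easy to sanity-check: the brain dissipates roughly 20 W, and without circulating blood to carry that away, the temperature climbs quickly. A quick sketch (the mass and specific heat are textbook-style assumptions, not figures from the post):

    # Rough heating rate of an uncooled brain (all inputs assumed).
    P_BRAIN = 20.0     # metabolic heat output, watts (common estimate)
    MASS = 1.4         # brain mass, kg
    C_TISSUE = 3600.0  # specific heat of soft tissue, J/(kg*K)

    rate_k_per_s = P_BRAIN / (MASS * C_TISSUE)
    print(f"Temperature rise: {rate_k_per_s * 3600:.1f} K per hour")

That works out to roughly 14 K per hour, and a rise of only a few degrees is already damaging, so the radiator really is load-bearing.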
There are fundamental limits on how much "intelligence" you can get out of a specific number of photons, electrons, atoms, joules, square meters of radiator surface, etc.

I think John is trying to make the case that the AI is going to sneak up on us and suddenly manifest itself as the overlord, or that each country is going to try to build its own superintelligence. What I was attempting to point out is that we don't need to allow that to happen. A Playstation 5 or 6 will probably have the computational capacity to enable more-than-human-level intelligence (though I doubt the computational architecture will facilitate that). One can, however, always unplug them if they get out of line.
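For a sense of scale on the console claim, here is a crude raw-throughput comparison (the ~1e16 ops/s brain figure is one commonly cited estimate, published estimates span several orders of magnitude, and the future console numbers are purely my assumptions):

    # Crude raw-compute comparison (all figures are assumptions).
    BRAIN_OPS = 1e16  # ops/s; published estimates range ~1e13 to 1e17

    consoles = {
        "2006-era console (peak)": 2e11,
        "hypothetical Playstation 5": 1e13,
        "hypothetical Playstation 6": 1e14,
    }

    for name, ops in consoles.items():
        print(f"{name}: {ops / BRAIN_OPS:.2e} of a 1e16 ops/s brain")

Raw throughput is only a necessary condition, of course; as the parenthetical above notes, the architecture matters at least as much as the op count.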
It's obviously relatively easy for other countries to detect situations where some crazy person (or country) is engaging in unmonitored superintelligence development. Any time they start constructing power-generating capacity significantly in excess of what the population is apparently consuming, and/or cooling towers sized not just for a reactor but for the reactor plus all of the electricity it produces, it will be obvious what is going on, and steps can be taken to deal with the situation.
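The cooling-tower signature follows directly from energy conservation: electricity spent on computation comes back out as heat. A quick sketch with assumed plant figures (1 GWe output at ~33% thermal efficiency is a typical textbook number, not something from this thread):

    # Thermal signature of a covert compute facility (inputs assumed).
    P_ELECTRIC = 1.0e9  # reactor electrical output, watts (1 GWe)
    EFFICIENCY = 0.33   # assumed thermal-to-electric conversion

    p_thermal = P_ELECTRIC / EFFICIENCY      # total reactor heat, ~3 GWt
    p_plant_waste = p_thermal - P_ELECTRIC   # waste heat at plant, ~2 GWt

    # If the electricity all drives computation, it is ultimately
    # dissipated as heat too, so the towers must reject the reactor's
    # full thermal output, not just the plant's conversion losses.
    p_total = p_plant_waste + P_ELECTRIC

    print(f"Plant waste heat alone: {p_plant_waste / 1e9:.2f} GW")
    print(f"Plant plus compute heat: {p_total / 1e9:.2f} GW")

Cooling capacity roughly 50% larger than the reactor alone would need is exactly the kind of overbuild that is hard to hide from overhead observation.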
The point is that these things don't happen overnight. The slow-growth scenario involving parasitically siphoning off CPU cycles is of concern, as is allowing ourselves to become overly dependent on highly interconnected networks which do not allow human oversight of things like software "upgrades". [Though I will admit we are getting close to that now. I have *not* reviewed every line of source code in the many, many megabytes of software I've installed over the last couple of months (two Linux installs and their associated packages). It's only because the hardware isn't fast enough yet to support an AI that I'm not too worried about it. But that day is coming.]
Robert