[extropy-chat] Two draft papers: AI and existential risk; heuristics and biases
Robin Hanson
rhanson at gmu.edu
Mon Jun 12 20:18:06 UTC 2006
At 02:29 PM 6/11/2006, Mark Walker wrote:
> > the issue I get stuck on: the idea that a single, relatively isolated AI
> > system could suddenly go from negligibly to overwhelmingly powerful.
>
>I haven't read the paper you mention here, but I have thought a little about
>the problem. It seems to me that there are two possibilities that might
>allow for a rapid increase in power. One is if such a computer, once
>created, is able to break through some congenital limitations on our
>thought and knowledge. ...
Sure, this is a logical possibility. But that is far from sufficient to
make it the main scenario one considers.
>The second thought has to do with what counts as a 'single AI'. Think how
>enormously difficult it is to bring 100,000 humans to work on a single task,
>as was perhaps the case with the Manhattan Project. An AI that could create
>human equivalent expert subsystems by deploying the computing power
>necessary to emulate 100,000 humans might be able to work on a single
>problem much more efficiently because of lower communication costs,
>political costs (getting people on board with the idea) and energy costs.
>Now it may be objected that such an AI would constitute an economy unto
>itself because in effect it has modeled a bunch of different experts working
>on a single problem like advanced nano. Perhaps, but then this may be the
>heart of the worry: it could create its own more efficient economy.
There may be things to worry about in this scenario, but they are very
different things from those in the scenario Eliezer focuses on.
Robin Hanson rhanson at gmu.edu http://hanson.gmu.edu
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326 FAX: 703-993-2323