[extropy-chat] The emergence of AI

Robin Hanson rhanson at gmu.edu
Sat Dec 4 20:15:00 UTC 2004


Hal Finney wrote:
>I don't see how it can happen so quickly.  I envision a team with several
>key members and an AI, where the AI gradually begins making a useful
>contribution of its own.  Eventually it becomes so capable that it is
>doing more than the rest of the team, and from that point its competence
>could, conceivably, grow exponentially.  But I don't see any reason why
>this process would go as fast as you describe.

On 12/3/2004, Eliezer Yudkowsky responded:
>Because the AI is nothing remotely like the other team members.  ...   You 
>can't expect that the AI will sit down and try to rewrite a module.  The 
>more relevant capabilities, among those the AI possesses at a given time, 
>are those which operate on a timescale permitting them to be applied to 
>the entire AI at once.  Slower capabilities would be used to rewrite 
>global ones, or forked and distributed onto more hardware, or used to 
>rewrite themselves to a speed where they become globally applicable.  The 
>AI is not like a human.  If you visualize a set of modules yielding 
>capabilities that turn back and rewrite the modules, and play with the 
>possibilities, you will not get any slow curves out of it.  You will get 
>sharp breakthroughs ... a novice must imagine by analogy to humans with 
>strange mental modifications, rather than rethinking the nature of mind 
>(recursively self-improving optimization processes) from scratch.  The 
>interaction between the AI rewriting itself and the programmers poking and 
>prodding the AI from outside will not resemble adding a human to the 
>team.  ... the advantages of AI, the ability to run thousands of different 
>cognitive threads on distributed hardware, and above all the recursive 
>nature of self-improvement (which, this point has to keep on being 
>hammered home, is absolutely unlike anything in human experience).  A lot 
>of this is in section III of LOGI.  Consider rereading 
>it.  http://singinst.org/LOGI/.

We humans are familiar with many forms of recursive self-improvement of 
ourselves.  The richer we get, the more abilities we have to get 
richer.  The more we learn, the more and faster we can learn.  Each new 
general insight we gain can be applied across a wide range of problems we 
face, and all these general insights help us find new general insights 
faster.  Also, computer researchers use faster computers to help them 
design faster computers, and compilers can be set to compile 
compilers.  These recursive processes mainly produce at best steady 
exponential improvement at familiar slow rates.
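The disagreement here is quantitative, and a toy model (my own illustration, 
not anything from LOGI) makes it concrete: if a system's capability feeds back 
*linearly* into its own growth rate, you get the familiar steady exponential; 
only a *superlinear* feedback produces the runaway "sharp breakthrough" curve. 
The function names and constants below are arbitrary choices for the sketch.

```python
# Toy model (illustrative only): capability c improves itself each step.
# Linear feedback     dc = k*c      -> steady exponential growth.
# Superlinear feedback dc = k*c*c   -> the growth rate itself accelerates.

def simulate(feedback, steps=30, c0=1.0):
    """Euler-step a self-improvement process; `feedback` maps the current
    capability to its per-step improvement."""
    c = c0
    history = [c]
    for _ in range(steps):
        c = c + feedback(c)
        history.append(c)
    return history

linear = simulate(lambda c: 0.05 * c)          # familiar exponential regime
superlinear = simulate(lambda c: 0.05 * c * c) # runaway "hard takeoff" regime

# The linear case multiplies capability by a fixed 1.05 each step; the
# superlinear case's effective growth factor keeps rising without bound.
print(linear[-1], superlinear[-1])
```

Both curves look similar at first; the dispute is over which regime describes 
a self-modifying AI, and our experience with compilers, faster computers, and 
growing economies has so far looked like the linear case.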

Artificial intelligence researchers have long searched for general 
principles to allow them to improve their programs.  They keep 
rediscovering the same few insights, and so they spend most of their time 
looking for more domain specific insights to help them improve more 
specific kinds of performance.  This is even embodied in the slogan that 
knowledge is the key - the main difference between smart and dumb systems 
is how many things they know.  The more you know, the faster you can 
learn, but mostly what you learn are specific things.

You seem to be saying that all our familiar experience as recursive 
businessmen, intellects, computer researchers, and AI programmers is 
misleading - that there remains a large pool of big general improvements, 
and that there is a very different sort of path a dumb AI could be placed 
on to find those improvements at a rapidly increasing pace.  Others don't see that 
path, but you do.  Virtually no established experts in related fields 
(i.e., economic growth, artificial intelligence, ...) see this path, or 
even recognize you as presenting an interesting different view they 
disagree with, even though you have for years explained it all in hundreds 
of pages of impenetrable prose, building very little on anyone else's 
closely related research, filled with terminology you invent.

Do you have any idea how arrogant that sounds?  Any idea how much it makes 
you look like a crank?  Are there no demonstration projects you could build as a 
proof of concept of your insights?  Wouldn't it be worth it to take the 
time to convince at least one or two people who are recognized established 
experts in the fields in which you claim to have new insight, so they could 
integrate you into the larger intellectual conversation?



Robin Hanson  rhanson at gmu.edu  http://hanson.gmu.edu
Assistant Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-4444
703-993-2326  FAX: 703-993-2323 



