[ExI] Self improvement

Anders Sandberg anders at aleph.se
Thu Apr 21 08:42:16 UTC 2011


It is easy to see an example of a self-improving piece of software. 
Take an optimizing compiler's source code, compile it with optimization 
using a version of the compiler that was itself built without 
optimization, and voila! You have self-improvement. Except that it 
stops there.
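To make the "stops there" point concrete, here is a toy sketch in 
Python (the optimize function and the embedded source string are made 
up purely for illustration, nothing like a real compiler): a 
source-to-source "optimizer" applied to its own source improves it 
once and then just sits at a fixed point.

import re

def optimize(source):
    # Constant-fold the trivial pattern "x * 1" down to just "x".
    return re.sub(r"(\w+) \* 1\b", r"\1", source)

# Pretend this string is the optimizer's own (slightly wasteful) source.
OPTIMIZER_SOURCE = """
def optimize(source):
    return fold_constants(source)

STEP = 1 * 1  # deliberately unoptimized constant expression
"""

stage2 = optimize(OPTIMIZER_SOURCE)  # the optimizer "improves" itself
stage3 = optimize(stage2)            # ...but gains nothing more
assert "1 * 1" not in stage2
assert stage2 == stage3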

It is also easy to make a piece of software that can self-improve 
according to any measurable metric: just generate random variations, 
measure how well they do, and select the best. This works, but tends 
to be completely impractical.
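As a minimal sketch of that brute-force recipe (the toy scoring 
function and all the names are mine, just for illustration): mutate a 
candidate at random and keep the mutant only if it scores better on 
the metric. The improvement is monotonic, but the search is blind, 
which is why it scales so badly.

import random

TARGET = [0.7, -1.2, 3.4, 0.0]  # stand-in for whatever the metric rewards

def score(candidate):
    # Toy metric: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, step=0.1):
    # Random variation: jiggle one coordinate a little.
    i = random.randrange(len(candidate))
    variant = list(candidate)
    variant[i] += random.gauss(0.0, step)
    return variant

def self_improve(candidate, iterations=10000):
    best, best_score = candidate, score(candidate)
    for _ in range(iterations):
        variant = mutate(best)
        s = score(variant)
        if s > best_score:  # selection: keep only strict improvements
            best, best_score = variant, s
    return best, best_score

print(self_improve([0.0, 0.0, 0.0, 0.0]))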

In the literature we have examples such as AIXI(tl) (potentially 
unlimited, but in practice far too slow) and Gödel machines 
(definitely self-improving and implementable, but likely too slow to 
matter, and possibly limited by what they can prove).

So the real question ought to be: can we produce self-improving 
software that improves *fast*, along an *important* metric, and in an 
*unlimited* way? Getting just two of these will not matter much (with 
the possible exception of fast but limited improvement along an 
important metric). We need a proper theory for this! So far the 
results in theoretical AI have put some constraints on the problem 
(e.g. see Shane Legg's "Machine Superintelligence"), but none that 
seem to matter for the rather pertinent question of whether rapid 
takeoffs are possible.

-- 
Anders Sandberg,
Future of Humanity Institute 
James Martin 21st Century School 
Philosophy Faculty 
Oxford University 



