[ExI] Self improvement
Eugen Leitl
eugen at leitl.org
Thu Apr 21 10:03:42 UTC 2011
On Thu, Apr 21, 2011 at 09:42:16AM +0100, Anders Sandberg wrote:
> It is easy to see an example of self-improving piece of software. Take
> an optimizing compiler's source code, compile it with optimization using
> a version that was not compiled in an optimized way, and voila! You have
> self-improvement. Except that it stops there.
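For concreteness, here is that fixed point as a minimal runnable
sketch, with a toy constant-folding pass standing in for a real
optimizing compiler (everything here is illustrative):

import inspect
import re

def optimize(src):
    """Toy 'optimizing compiler' pass: fold constant integer sums."""
    limit = 10 + 20  # a foldable constant, so the pass can improve itself
    return re.sub(r"\b(\d+)\s*\+\s*(\d+)\b",
                  lambda m: str(int(m.group(1)) + int(m.group(2))),
                  src)

stage1 = inspect.getsource(optimize)  # compiler built without optimization
stage2 = optimize(stage1)             # compiler source, now optimized
stage3 = optimize(stage2)             # applying it again changes nothing
print(stage2 != stage1)               # True: it did improve itself once
print(stage2 == stage3)               # True: and that is where it stops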
Ah, but it doesn't actually enhance your program, apart from making it
run faster, which is a trivial modification. Applied to itself, it
doesn't extend the language. It won't rewrite itself or your code
to make use of new hardware features, e.g. parallelism, detected
at runtime (yes, there's some OpenCL JIT which kinda, sorta
does it, but not really).
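To illustrate the difference: the runtime dispatch below is something
the programmer has to write in by hand; no compiler pass will
synthesize it for you (a minimal sketch, the workload is a stand-in):

import multiprocessing as mp
import os

def work(x):
    return x * x  # stand-in workload

def run(xs):
    cores = os.cpu_count() or 1          # detect the hardware at runtime
    if cores > 1:
        with mp.Pool(cores) as pool:     # parallel path
            return pool.map(work, xs)
    return [work(x) for x in xs]         # serial fallback

if __name__ == "__main__":
    print(run(list(range(8))))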
> It is also easy to make a piece of software that can self-improve by any
> measurable metric: just generate random variations, measure how well
> they do, and select the best. This works, but tends to be completely
> impractical.
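For reference, that loop in full, hill-climbing a toy metric (all
numbers invented for illustration):

import random

def fitness(xs):
    # measurable metric: negative squared distance from the all-ones vector
    return -sum((x - 1.0) ** 2 for x in xs)

def mutate(xs, scale=0.1):
    # generate a random variation
    return [x + random.gauss(0.0, scale) for x in xs]

best = [0.0] * 10
for _ in range(10000):
    candidate = mutate(best)
    if fitness(candidate) > fitness(best):  # keep only measurable improvements
        best = candidate
print(fitness(best))  # improves monotonically, and impractically slowly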
>
> In the literature we have examples such as AIXI(tl) (potentially
> unlimited, but in practice too slow) and Gödel machines (definitely
> self-improving, implementable, but likely too slow to matter and
> possibly limited by what it can prove).
>
> So the real question ought to be: can we produce self-improving software
> that improves *fast*, along an *important* metric and in an *unlimited*
> way? Getting just two of these will not matter much (with the possible
> exception of fast but limited improvement along an important metric). We
> need a proper theory for this!
I am willing to bet good money that there is no such theory. You can
use it as a diagnostic: whenever the system starts doing something
interesting, your analytical approaches start breaking down.
Consider an oscillator. A system of coupled oscillators. A system
of coupled oscillators using positive-feedback loops. A system
of coupled oscillators using negative-feedback loops. A system
of coupled oscillators using positive-feedback *and* negative-feedback
loops. Oops.
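A minimal numerical sketch of that last case (parameters invented):
two coupled oscillators with a feedback term whose sign flips with
amplitude. Trivial to simulate, but the closed-form analysis you had
for the single linear oscillator is gone:

# Two coupled oscillators, forward-Euler integration. The mu*(1 - x*x)*v
# term is positive feedback at small amplitude and negative feedback at
# large amplitude (a van der Pol-style term); k couples the two.
dt, k, mu = 0.001, 0.5, 1.0
x1, v1, x2, v2 = 1.0, 0.0, -0.5, 0.0
for _ in range(100000):
    a1 = -x1 + mu * (1.0 - x1 * x1) * v1 + k * (x2 - x1)
    a2 = -x2 + mu * (1.0 - x2 * x2) * v2 + k * (x1 - x2)
    v1, v2 = v1 + a1 * dt, v2 + a2 * dt
    x1, x2 = x1 + v1 * dt, x2 + v2 * dt
print(x1, x2)  # easy to integrate, hopeless to solve in closed form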
Consider formal proofs: an infinitely powerful tool, unfortunately
of infinitesimal reach.
The interesting part is that people continue to be in awe of, and in
great expectation of, analytical methods just because these methods
have been extremely effective in some areas. Which is pretty
unreasonable.
As long as we continue to treat artificial intelligence as
a scientific domain instead of "merely" engineering, we won't
be making progress.
> So far the results in theoretical AI have put some constraints on it
> (e.g. see Shane Legg's "Machine
> Superintelligence"), but none that seems to matter for the rather
> pertinent question of whether rapid takeoffs are possible.
Of course rapid takeoffs are possible. I can draw you a blueprint
of hardware which will accelerate neural processes by a factor
of at least 10^6. The physical limit is somewhere around 10^9.
A human civilisation running at such a speedup will produce something
interesting within a subjective megayear (a single year of wall-clock
time at 10^6), even if its rate of matter manipulation is limited.
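The arithmetic behind those numbers, with the timescale assumptions
spelled out (ballpark figures, not measurements):

# Assumed ballpark timescales: biological neural dynamics at ~1 ms,
# solid-state switching at ~1 ns (and ~1 ps for the physical limit).
neural = 1e-3       # seconds per characteristic neural event, assumed
electronic = 1e-9   # seconds per equivalent electronic event, assumed
speedup = neural / electronic
print(speedup)                    # 1e6
subjective_years = 1e6            # a megayear of civilisation
wall_clock_years = subjective_years / speedup
print(wall_clock_years)           # 1.0: a subjective megayear per real year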
And of course you can compare really effective ultra-high-IQ
individuals with the merely average in arbitrary detail (virtual
systems are arbitrarily inspectable), figure out what the relevant
delta is, and instantiate more of these for a particular task.
All of this is engineering.
--
Eugen* Leitl leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE