[ExI] Self improvement

Richard Loosemore rpwl at lightlink.com
Thu Apr 21 15:58:39 UTC 2011


Anders Sandberg wrote:
> So the real question ought to be: can we produce self-improving software 
> that improves *fast*, along an *important* metric and in an *unlimited* 
> way? Getting just two of these will not matter much (with the possible 
> exception of fast but limited improvement along an important metric). We 
> need a proper theory for this! So far the results in theoretical AI have 
> put some constraints on it (e.g. see Shane Legg's "Machine 
> Superintelligence"), but none that seems to matter for the rather 
> pertinent question of whether rapid takeoffs are possible.

I disagree with the statement that "the results in theoretical AI have
put some constraints on it".  These theoretical results exploit the
absence of any objective measure of "intelligence" to reach
seemingly useful conclusions that in fact say nothing of importance.

To illustrate the point: suppose I were to define "intelligence" as
some kind of entropy measure on the knowledge base of an AI, so that
less entropy meant more intelligence.  I might then be able to use this
definition to import a boatload of mathematical results from
elsewhere and say an awful lot about AI systems.  And if, at the same
time, I could distract people from asking the awkward question of
whether my definition of intelligence actually corresponded to the real
thing, I might dazzle a lot of people with all the mathematics and make
them think I was doing real work.

But if that entropic measure was actually not intelligence at all, but 
just .... well, just a measure of a certain type of entropy, with some 
faint family resemblance, in some circumstances, to the thing we call 
"intelligence", then all my mathematical analysis would mean nothing.
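To make the strawman concrete: the kind of definition I have in mind could be as simple as taking the Shannon entropy of a probability distribution over a knowledge base's beliefs and declaring that lower entropy equals higher intelligence.  This is a hypothetical sketch of my own (the function name and the toy distributions are inventions for illustration, not anything from the theoretical AI literature), and the point is precisely that it is mathematically well-defined yet says nothing about real intelligence:

```python
import math

def knowledge_base_entropy(belief_probs):
    """Shannon entropy (in bits) of a belief distribution.

    Under the deliberately dubious definition sketched above, a lower
    value would count as "more intelligent" -- a claim the math itself
    does nothing to justify.
    """
    return -sum(p * math.log2(p) for p in belief_probs if p > 0)

# A knowledge base that concentrates probability on one belief...
confident = [0.97, 0.01, 0.01, 0.01]
# ...versus one that is maximally uncertain over the same four beliefs.
uncertain = [0.25, 0.25, 0.25, 0.25]

print(knowledge_base_entropy(confident))  # low entropy, "smart" by fiat
print(knowledge_base_entropy(uncertain))  # 2.0 bits, "dumb" by fiat
```

Everything after the `def` line is rigorous, and one could prove endless theorems about it; the only weak link, the identification of this number with "intelligence", is exactly the step the mathematics never examines.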

So it actually is, I claim, with AIXItl, Gödel machines, Shane Legg's 
papers, and much other "theoretical AI" work.

So I agree with you when you say, of that theoretical work, that "none 
[of these results] seems to matter for the rather pertinent question of 
whether rapid takeoffs are possible", except that I would have stopped 
the sentence at "none [of these results] seems to matter."



Richard Loosemore




