[ExI] Automated black-box-based system design of unsupervised hyperintelligent learning systems

Mike Dougherty msd001 at gmail.com
Tue Sep 20 17:46:48 UTC 2011


On Tue, Sep 20, 2011 at 11:23 AM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
> On Tue, Sep 20, 2011 at 8:17 AM, Mike Dougherty <msd001 at gmail.com> wrote:
>> I wasn't trying to suggest otherwise.  I am doubtful that an engineer
>> is going to be able to draw literal blueprints for building
>> intelligence that exceeds his or her own.
>
> Why? This is the point where your reasoning seems to go off the rails.

I'm trying to imagine how to express this.  My first (hopefully) clear
thought goes to the classic example of recursion, Factorial.  A
description of the function is relatively short and simple.  A
description of the process of evaluation is also short and simple.
This concept of "short and simple" is a measure of complexity.  I
admit I don't have a doctoral-level appreciation of complexity as a
rigorous mathematical concept - mostly drawing from colloquial
understanding that complex things are difficult to understand, and
this special usage of complexity ties the difficulty of understanding
something to the amount of information necessary to describe it
accurately.  So Factorial is not complex at all.  However, the
resources a typical desktop machine burns on this approach make it an
expensive way to find larger values of N!.  At some point the "stack"
runs out of space.  Let's assume we don't know about
trampolining or continuation-passing style or functional programming
for a moment.  Say our AI-builder is reasoning about how to implement
an AI-builder.  To implement the Factorial function from the only
working model available, it would have to examine its own stack during
the process.  This introspection capability is obviously implemented
outside the Factorial function itself.  So this introspection takes
some space in addition to the recursion stack.  I know it's a leap to
suggest that the introspection function may also be recursive, so I'll
state it even though I can't defend it.  So the builder needs to continue
observing this model.  The observation space competes with the
Factorial stack space.  So resource contention eventually prevents
Factorial( large N ) from completing and the observation space notes
an unwanted kind of halting.
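
To make that concrete, here is a rough sketch in Python (my choice of
language purely for illustration; the function name is invented for
the example) of the naive recursion plus an observer watching the very
stack it depends on:

    import inspect

    def observed_factorial(n):
        # The observer lives outside the factorial logic, but it still
        # walks (and therefore costs) the frames the recursion builds,
        # so observer and observed compete for the same stack/memory.
        depth = len(inspect.stack())
        if n <= 1:
            return 1, depth
        result, max_depth = observed_factorial(n - 1)
        return n * result, max(depth, max_depth)

    print(observed_factorial(5))   # (120, a small frame count)
    # observed_factorial(100000)   # RecursionError long before an answer
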
    We're clever enough to solve this problem.  Maybe we're even
clever enough to implement the AI-builder with stack overflow
protection or some heuristic to know how/when to use tail-recursion.
So now it's not killed by stack overflow, but when the introspection
module is turned on the recursion-protection heuristic itself, it
risks another kind of halting problem (how does it know how to get out
of the how-to-get-out code?).  We humans seem very capable of bogging
down on a problem and then jumping out of or over it, the way we leap
over an infinite sum to arrive at its limit.  I have yet to hear about
someone implementing
this ability in a program.  It's a big enough achievement that we
probably would have learned about someone clever "solving" the halting
problem if it had been done.
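
Incidentally, the trampolining I said we would pretend not to know
about looks roughly like this (again an illustrative Python sketch
with invented names, not a claim about how an AI-builder would have to
do it):

    def factorial_step(n, acc=1):
        # Tail-call shape: instead of recursing deeper, return a thunk
        # describing the next step, so no stack frames pile up.
        if n <= 1:
            return acc
        return lambda: factorial_step(n - 1, acc * n)

    def trampoline(step):
        # The "stack overflow protection": keep bouncing until the
        # result is no longer callable.  Note this is itself just
        # another loop whose termination we take on faith - the
        # how-to-get-out code has no observer checking how it gets out.
        while callable(step):
            step = step()
        return step

    print(trampoline(factorial_step(5)))   # 120
    # trampoline(factorial_step(100000)) now finishes without blowing
    # the stack; the cost moves to the sheer size of the number.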

  Ok, so you're still waiting for me to explain why the AI-builder
can't build better than its own capability?  I have to go back to
complexity.  Suppose you are sufficiently capable of modelling my
thought process that you create a 1:1 model of every detail of my
brain and its entire history (which is really just the physical state
remaining after all previous processing).  Maybe you use 10% of your
computational resources to run that model.  Let's say you use another 10%
of your resources to observe the functioning of the model, then from
inductive examples distill a general rule that predicts my output for
any input with 100% accuracy - but that general rule requires only 2%
of your resources to compute.  It's clear that running the expensive
model is a waste of resources, so instead you use that 2% shortcut to
predict all of my future output.
  Now suppose the UI that was used to inspect the working model of my
brain is pointed instead at the real-time workings of your own brain.
I'm sure there would be similarities at first.  At some point the view
of brain structure responsible for extracting meaning from the
interface would become a case of recursion - much like turning a video
camera on its own display.  What happens when the model viewer is
inspecting the model viewer infrastructure - does your awareness of
its function change its nature?  When a microphone is placed in front
of a speaker the resulting feedback usually destroys/overwhelms the
microphone, the speaker or the signal processor at some point.

Well, this is what I meant by the reference to Archimedes'
observation that even with a world-moving lever, he would still need
somewhere to stand in order to use it: introspection on recursive
introspection leaves you with no place to stand.

I think a team of developers standing "on the shoulders of giants"
does not suffer from this problem.  That's why I suggest that
growing/training/evolving a solution in an iterative way would be
viable where directly architecting one likely would not.

I think it will take said team a lot of work to produce the framework
that will allow software to emulate human reasoning in a way that
properly grounds symbols so it can reason about itself.  I think
the resulting machine may well be more intelligent than any individual
developer on that team but probably less than the sum of the whole
team.  Though the machine would be part of the team at that point.
Once all the original humans are replaced, I suspect that an
individual super-intelligent machine will not be any more able to
replace itself without a team of other machines than a single human
was able to build the first machine without help.

I will admit (again) that I may have confused the first few terms in
this induction process and that the whole thing becomes unstable and
falls apart.  This is further evidence that it is a task of Herculean
difficulty (if not impossible) to manage this concept without a team.

> Clearly, your point is well made... but it does not support your thesis.

Thanks for that.  I didn't know what part of what I wrote was failing
earlier in the thread.

> better each year, then some day, it follows we should be able to build
> a machine that performs just as well as a brain does. What is the
> counter argument to that? That the brain is just a conduit into a
> higher spiritual realm where the real thinking takes place? That the
> brain works off of quantum effects that we won't be able to understand
> for centuries? What?

No counter.  I think brains perform well enough, but fall down on some
key skills.

> times) through a learning experience. Intelligence comes from
> experience. You won't create an intelligent machine out of the box...
> you will only create a machine that is capable of becoming intelligent
> over time with the assimilation of information.

I think you have asserted here what I originally tried to say.

> Can't argue with that... but if what you're saying is that you can't
> build pre-configured intelligence, that is quite a different thing
> than I thought you were saying. I understood you to say that we will
> never achieve intelligent machinery equivalent to the brain's power,
> flexibility and intuitive majesty.

Pre-configured intelligence = ?  I grant that the genetic code that
produces a human brain is fairly terse yet capable of unfolding into an
amazing information-processing machine.  We may be able to produce as
elegant a machine; I won't. You won't. We might.



