[ExI] Automated black-box-based system design of unsupervised hyperintelligent learning systems
Kelly Anderson
kellycoinguy at gmail.com
Mon Sep 26 10:28:11 UTC 2011
On Tue, Sep 20, 2011 at 11:46 AM, Mike Dougherty <msd001 at gmail.com> wrote:
> On Tue, Sep 20, 2011 at 11:23 AM, Kelly Anderson <kellycoinguy at gmail.com> wrote:
>> On Tue, Sep 20, 2011 at 8:17 AM, Mike Dougherty <msd001 at gmail.com> wrote:
>>> I wasn't trying to suggest otherwise. I am doubtful that an engineer
>>> is going to be able to draw literal blueprints for building
>>> intelligence that exceeds his or her own.
>>
>> Why? This is the point where your reasoning seems to go off the rails.
>
> I'm trying to imagine how to express this. My first (hopefully) clear
> thought goes to the classic example of recursion, Factorial. A
> description of the function is relatively short and simple. A
> description of the process of evaluation is also short and simple.
> This notion of "short and simple" is a measure of complexity. I
> admit I don't have a doctoral-level appreciation of complexity as a
> rigorous mathematical concept - I'm mostly drawing on the colloquial
> understanding that complex things are difficult to understand, and
> this special usage of complexity ties the difficulty of understanding
> to the amount of information necessary to accurately describe
> something. So Factorial is not complex at all.
I think you grasp the point of complexity correctly.
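For concreteness, here is the textbook recursive version, a minimal
Python sketch of what "short and simple" means here (the name
factorial is just for illustration):

    def factorial(n):
        # Base case: 0! = 1
        if n == 0:
            return 1
        # Recursive case: n! = n * (n-1)!
        return n * factorial(n - 1)

The whole description fits in a few lines, which I take to be your
point about low descriptive complexity.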
> However, the resource
> usage of this approach on a typical desktop machine makes it an
> expensive method for finding larger values of N!. At some
> point the "stack" runs out of space. Let's assume for a moment that
> we don't know about trampolining or continuation-passing style or
> functional programming. Say our AI-builder is reasoning about how to
> implement an AI-builder. The only way to implement the Factorial
> function from the only working model available would be to examine
> its own stack during the process.
But this is not done without the assistance of computers... By the
argument you're making here, it would be impossible for us to create
an MRI machine. We're not implementing the AI brain by using our own
brains to emulate it; we're using a computer... so this doesn't quite
make sense to me yet. If you said, "our brains are not smart enough
to run an emulation of a brain," then of course I would say, "duh"...
but that's not what we're trying to do.
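On your narrower point about the stack, though, that part is easy to
reproduce. A quick sketch, assuming a stock CPython interpreter with
its default recursion limit (and a one-line version of the same
function):

    import sys

    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)

    print(sys.getrecursionlimit())  # typically 1000 by default
    print(factorial(10))            # 3628800, no trouble
    print(factorial(5000))          # raises RecursionError

So yes, naive recursion is an expensive way to compute large N!.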
> This introspection capability is obviously implemented
> outside the Factorial function itself. So this introspection takes
> some space in addition to the recursion stack. I know it's a leap to
> suggest that the introspection function may also be recursive, so I'll
> state it but I can't defend it. So the builder needs to continue
> observing this model. The observation space competes with the
> Factorial stack space. So resource contention eventually prevents
> Factorial( large N ) from completing and the observation space notes
> an unwanted kind of halting.
OK. I still don't see how it applies to AI, but I'm following.
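If it helps make the picture concrete, Python lets you do a crude
version of this kind of introspection, and the observer code sits on
the very stack it is measuring. A toy sketch (observed_factorial is
my own name for it):

    import inspect

    def observed_factorial(n):
        # Introspection step: walk the same stack we are consuming.
        depth = len(inspect.stack())
        print("n =", n, "live stack frames =", depth)
        if n == 0:
            return 1
        return n * observed_factorial(n - 1)

    observed_factorial(5)

Each call does extra work and holds extra memory beyond what plain
Factorial needs, which I take to be your point about the observer
competing with the thing it observes.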
> We're clever enough to solve this problem. Maybe we're even
> clever enough to implement the AI-builder with stack overflow
> protection or some heuristic to know how/when to use tail-recursion.
> So now it's not killed by stack overflow, but when the introspection
> module is turned to observe the recursion-protection heuristic, it
> risks another kind of halting problem (how does it know how to get out
> of the how-to-get-out code?). We seem very capable of bogging down on
> a problem, then jumping out or over something like an infinite sum to
> arrive at its limit. I have yet to hear about someone implementing
> this ability in a program. It's a big enough achievement that we
> probably would have learned about someone clever "solving" the halting
> problem if it had been done.
This isn't how AI is implemented as I understand it...
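As an aside, for Factorial itself the "protection" is trivial. Since
standard Python does not do tail-call elimination, you just rewrite
the recursion as a loop and the stack never grows; a sketch:

    def factorial_iter(n):
        # Same function, constant stack depth: an accumulator
        # replaces the chain of recursive calls.
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    factorial_iter(5000)  # completes fine, no recursion depth involved

The harder question you're pointing at, as I read it, is whether the
heuristic that decides when to do that kind of rewrite can be turned
on itself.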
> Ok, so you're still waiting for me to explain why the AI-builder
> can't build better than its own capability? I have to go back to
> complexity. Suppose you are sufficiently capable of modelling my
> thought process that you create a 1:1 model of every detail of my
> brain and its entire history (which is really just the physical state
> remaining after all previous processing). Maybe you use 10% of your
> computational resources to run that model of my brain. Let's say you
> use another 10%
> of your resources to observe the functioning of the model, then from
> inductive examples distill a general rule that predicts my output for
> any input with 100% accuracy - but that general rule requires only 2%
> of your resources to compute. It's clear that running the expensive
> model is a waste of resources, so instead you use that 2% shortcut to
> predict all of my future output.
> Now suppose the UI that was used to inspect the working model of my
> brain is pointed instead at the real-time workings of your own brain.
> I'm sure there would be similarities at first. At some point the view
> of brain structure responsible for extracting meaning from the
> interface would become a case of recursion - much like turning a video
> camera on its own display. What happens when the model viewer is
> inspecting the model viewer infrastructure - does your awareness of
> its function change its nature? When a microphone is place in front
> of a speaker the resulting feedback usually destroys/overwhelms the
> microphone, the speaker or the signal processor at some point.
All AI systems have feedback systems. So I kind of get where you are
coming from... but again, you seem to be saying that we have to use a
human brain to emulate a human brain, which isn't what we're doing.
> Well, this is what I meant by the reference to Archimedes'
> observation that even with a world-moving lever, he still needs
> somewhere to stand in order to use it: Introspection on recursive
> introspection leaves you with no place to stand.
>
> I think a team of developers standing "on the shoulders of giants"
> does not suffer from this problem. That's why I suggest that
> growing/training/evolving a solution in an iterative way would be
> viable where directly architecting one likely will not be.
We do have to create a "learning" system... You can't program
intelligence by brute force; you have to finesse a system that gets
better over time by looking at what it did before. That's what all
animals do with their brains, and I do think that is a core feature of
any successful AI.
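To pin down what I mean by "gets better over time by looking at what
it did before", here is a toy sketch, a bare-bones perceptron that
adjusts itself from its own errors. It is nothing like an AGI, just
the shape of the loop:

    # Toy online learner: a perceptron picking up the logical AND
    # function. Every weight update is driven by the gap between
    # what it predicted and what actually happened.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for epoch in range(20):
        for (x1, x2), target in examples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - predicted  # feedback from its own output
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            bias += rate * error

    print(w, bias)  # weights that now reproduce AND on all four inputs

Real learning systems are vastly more elaborate, but that feedback
loop is the part I think is essential.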
> I think it will take said team a lot of work to produce the framework
> that will allow software to emulate human reasoning in a way that
> properly grounds symbols in order to reason about itself. I think
> the resulting machine may well be more intelligent than any individual
> developer on that team but probably less than the sum of the whole
> team. Though the machine would be part of the team at that point.
> Once all the original humans are replaced, I suspect that an
> individual super-intelligent machine will not be any more able to
> replace itself without a team of other machines than a single human
> was able to build the first machine without help.
Using your logic, why not just build a second machine with a bigger
stack? Or a faster clock speed?
> I will admit (again) that I may have confused the first few terms in
> this induction process and that the whole thing becomes unstable and
> falls apart. This is further evidence that it is Herculean in
> difficulty (if not impossible) to manage this concept without a team.
>
>> Clearly, your point is well made... but it does not support your thesis.
>
> Thanks for that. I didn't know what part of what I wrote was failing
> earlier in the thread.
>
>> better each year, then some day, it follows we should be able to build
>> a machine that performs just as well as a brain does. What is the
>> counter argument to that? That the brain is just a conduit into a
>> higher spiritual realm where the real thinking takes place? That the
>> brain works off of quantum effects that we won't be able to understand
>> for centuries? What?
>
> No counter. I think brains perform well enough, but fall down on some
> key skills.
Clearly, our brains are limited. There are well-documented cases of
optical illusions, for example, and cases where economists can show
that we consistently make the wrong economic decisions under certain
circumstances. There are huge, obvious holes in our thinking
processes. That may imply that any intelligence we create would also
have these holes, but perhaps not if we are aware of some of them.
For example, there may be some instances where feedback from emotional
thought interferes with logical thought, and that could be avoided in
a designed system. Do we really need our AGI to have an adrenal
system??? I dunno. Seems kind of dangerous, but maybe it would be
helpful for military robots.
>> times) through a learning experience. Intelligence comes from
>> experience. You won't create an intelligent machine out of the box...
>> you will only create a machine that is capable of becoming intelligent
>> over time with the assimilation of information.
>
> I think you have asserted here what I originally tried to say.
Ah. So we agree that you can't hard-code intelligence. But do you
agree that we can build learning machines that have the capacity to
surpass our own intelligence once they have been sufficiently taught?
>> Can't argue with that... but if what you're saying is that you can't
>> build pre-configured intelligence, that is quite a different thing
>> than I thought you were saying. I understood you to say that we will
>> never achieve intelligent machinery equivalent to the brain's power,
>> flexibility and intuitive majesty.
>
> Pre-configured intelligence = ? I grant that the genetics to
> produce a human brain is a fairly terse code capable of unfolding an
> amazing information processing machine. We may be able to produce as
> elegant a machine; I won't. You won't. We might.
The structure of the brain is much simpler than the information stored
in a brain that has already learned. The structure of your neural
pathways is far more complex than could be expressed within the DNA.
You are born knowing relatively little, but some.
-Kelly