[ExI] Automated black-box-based system design of unsupervised hyperintelligent learning systems
Mike Dougherty
msd001 at gmail.com
Mon Sep 19 22:22:54 UTC 2011
On Mon, Sep 19, 2011 at 11:58 AM, spike <spike66 at att.net> wrote:
>>... On Behalf Of Mike Dougherty
>>...You can't build something smarter than yourself...
>
> Indeed? It depends on how one defines the term smart. Those who wrote
> chess software reached a point where their own creations could beat the
> pants off of them. I myself wrote a Sudoku solver which can solve a couple
> standard 3x3x3 puzzles per second. That software can solve not only 3x3x3s
> but higher order puzzles up through 14x14x14 sudokus. I sure couldn't do
> that in a reasonable amount of time.
>
> Software isn't smarter than us at everything, but it is smarter than us at
> an ever-growing collection of specific tasks.
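(Aside: a generic order-n backtracking solver of the kind spike
describes fits in a page of Python. Here's a minimal sketch of my own
- the names solve/ok and the 0-for-empty convention are my choices,
not his code - where n=3 gives the familiar 9x9 grid:

def solve(grid, n):
    # Solve an n^2 x n^2 Sudoku in place; 0 marks an empty cell.
    size = n * n
    for r in range(size):
        for c in range(size):
            if grid[r][c] == 0:
                for v in range(1, size + 1):
                    if ok(grid, n, r, c, v):
                        grid[r][c] = v
                        if solve(grid, n):
                            return True
                        grid[r][c] = 0
                return False  # dead end: backtrack
    return True  # no empty cells left: solved

def ok(grid, n, r, c, v):
    # True if v can legally be placed at (r, c).
    size = n * n
    if v in grid[r]:                               # row
        return False
    if any(grid[i][c] == v for i in range(size)):  # column
        return False
    br, bc = r - r % n, c - c % n                  # n x n box
    return all(grid[br + i][bc + j] != v
               for i in range(n) for j in range(n))

Nothing in there is "smart" in the sense I mean below; it's brute
search with pruning.)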
Everything depends on how we define terms, of course. In this case I
was talking about the general complexity of a "built" or "engineered"
intelligence. Of course I can write a program to do maths faster than
I can do them myself, but that doesn't make the program smart at all -
say I have introspected some of my own smarts, then encapsulated that
algorithm in a recipe (thanks dan_ust for using the term first). I/we
can do this because the program's complexity is low enough that we
have spare capacity left over from which to reason about its nature.
This reminds me of the Archimedes quote, "If you give me a lever and
a place to stand, I can move the world." I think the concept of the
lever is straightforward enough, but the place to stand in order to
use it is the key to Archimedes' quandary. I believe this more
succinctly demonstrates the problem of building an intelligence
greater than the builder's own.
I specifically cited evolution as an example of achieving growth
beyond the builder's intelligence. I agree with Dan that evolution is
a fairly haphazard approach to growth, but it does seem to pay
dividends in the long term. GAs are slow (see the sketch after this
paragraph). Perhaps that's a feature rather than a flaw. Perhaps
there is some random jumble of intelligence blocks that will
crystallize like Penrose tiles and suddenly produce a non-linear jump
in machine intelligence. We're still far short of knowing the tiling
rules well enough to predict how or when that might next happen.
(Maybe it'll involve a monolith with dimensions of increasing
squares? :)
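(To make "slow" concrete, here's the canonical GA loop as a
bare-bones sketch - a generic toy with a bit-counting fitness
function, not any particular system; every name and parameter is
illustrative:

import random

def evolve(pop_size=100, genome_len=64, generations=1000,
           mutation_rate=0.01):
    # Start from random bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    fitness = sum  # toy objective: maximize the number of 1-bits
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):         # point mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

Even this toy pays pop_size * generations = 100,000 fitness
evaluations to climb a trivially smooth landscape; real design spaces
are far more rugged, which is where the slowness - or the patience -
comes from.)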
David, re: parents' children being smarter - it's the clichéd
dichotomy of nature vs. nurture. Does a formal system train more or
less equivalent wetware to perform at a higher level, or is there
something inherent to a better brain? If you believe better brains
are based on genetics, then super-intelligence is a matter of
isolating the best genes - right? If training, then what is wrong
with the current training regimen, such that we fall short of our
potential? Are we short of it? I hope we are. I'm not sure whether
this topic has been done to death here; if it was, it was more than
three years ago, since I haven't seen it in that time.