[ExI] Automated black-box-based system design of unsupervised hyperintelligent learning systems
Anders Sandberg
anders at aleph.se
Wed Sep 21 10:37:16 UTC 2011
Amara D. Angelica wrote:
>
> Does anyone know of a system for automated black-box-based system
> design of unsupervised hyperintelligent learning systems?
>
What about
http://www.idsia.ch/~juergen/goedelmachine.html
http://www.hutter1.net/ai/aixigentle.htm
At least AIXI is hyperintelligent; I am not sure whether there is any
proof that the Gödel machine will get there. AIXItl has been implemented
for real, but is of course slow enough that we do not need to worry
about it triggering a singularity on this side of the heat death of the
universe. Which is good, since it is not hard to prove that it is a
fairly unfriendly AI for most utility functions.
> Now how long do you think it would take such a system to
> reverse-engineer human intelligence and function and pass a more
> sophisticated version of the Turing test based on interacting with
> humans in RT/RL (real time/real life), initially at say, fly level,
> then cat level, then working up to human genius level and passing it,
> outputting advanced versions of itself.
>
There are some theorems for AIXI and related Solomonoff learners showing
that they can learn to predict inputs amazingly well, amazingly quickly -
the total number of prediction errors over an infinite sequence is
bounded, and the bound is fairly low. The bad news is that pure AIXI is
not implementable in
this universe :-)
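The flavour of those error bounds can be seen in a toy sketch (my own illustration, not taken from the AIXI papers): a Bayes-mixture predictor over a tiny, hypothetical hypothesis class. The analogue of the Solomonoff result is that the total number of prediction errors is bounded by roughly -log2 of the prior weight on the true hypothesis, independent of sequence length.

```python
# Toy illustration of a Solomonoff-style mixture predictor. The
# hypothesis class here is tiny and hypothetical (three deterministic
# bit-pattern "programs"); real Solomonoff induction mixes over all
# computable programs, which is exactly why it is not implementable.

hypotheses = {
    "zeros": lambda t: 0,
    "ones":  lambda t: 1,
    "alt":   lambda t: t % 2,
}
weights = {name: 1.0 for name in hypotheses}  # uniform prior

def predict(t):
    # Weighted vote over the surviving hypotheses for bit number t.
    vote = sum(w * hypotheses[h](t) for h, w in weights.items())
    return 1 if 2 * vote > sum(weights.values()) else 0

errors = 0
for t in range(1000):
    truth = t % 2                      # the environment alternates 0,1,0,1,...
    if predict(t) != truth:
        errors += 1
    for h in hypotheses:               # posterior update: a deterministic
        if hypotheses[h](t) != truth:  # hypothesis that errs gets weight 0
            weights[h] = 0.0

# The total error count stays bounded (at most log2(3) ~ 1.6 here),
# no matter how long the sequence runs.
print("total errors:", errors)
```

Each mistake means a weighted majority of the mixture was wrong, so the total weight at least halves per mistake - which is where the logarithmic bound comes from.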
In practice I suspect the answer is "fairly slowly". You need to see
example outputs produced while the system visits many parts of its
internal state space, likely several times. That means that even an
optimal learner will likely need *a lot* of data for a human. I have
been thinking of turning this into a serious research project sometime,
analysing exactly what the theory of Markov chains and similar
structures tells us about the feasibility of inferring the cognitive
structure of intelligent creatures. My intuition is that it is not
feasible, but that needs to be turned into a rigorous proof.
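To get a feel for why having to visit the state space repeatedly is so costly, here is a crude sketch (my toy model, not a claim from the argument above): if observations sampled internal states uniformly at random and we needed to see each of N states at least once, the classic coupon-collector bound already gives ~N ln N observations.

```python
import math

# Coupon-collector sketch: expected number of observations needed to see
# every one of N internal states at least once, assuming (hypothetically)
# that each observation hits a uniformly random state. Real cognitive
# dynamics are far less uniform, which only makes matters worse.

def expected_observations(n_states: float) -> float:
    # E[T] = N * H_N ~ N (ln N + gamma), gamma = Euler-Mascheroni constant
    return n_states * (math.log(n_states) + 0.5772156649)

for n in (1e6, 1e9, 1e12):
    print(f"{n:.0e} states -> ~{expected_observations(n):.2e} observations")
```

And a uniform walk is the friendly case; a chain with rarely-visited states pushes the cover time far above N ln N.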
(Loose argument: there are 1e15 synapses in the brain, and we need ~36
bits per synapse to encode where they connect, plus a few bits of
synaptic strength etc. So the information needed to describe a brain is
on the order of 4e16 bits. If each observed bit of information lets us
narrow down the connectivity by half, then we would need just this many
bits to determine the brain - at 12.7 megabits per second we might be
able to do it in a century. This is already tricky, since we do not
generate that much external information per second (consider when we are
asleep). Worse, most of the activity and memories in our brains are
hidden or do not recur often, so we cannot refine our model as strongly
as assumed above. On the other hand, it might turn out that our regular
behavior is easy to turn into chatbots...)
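The arithmetic in the loose argument can be written out explicitly (the per-synapse figures are the rough guesses from the argument, not measured values):

```python
# Back-of-envelope from the loose argument above. All figures are the
# rough guesses used in the text, not measured values.

synapses = 1e15
address_bits = 36   # ~36 bits to say where a synapse connects
strength_bits = 4   # "plus a few bits" for synaptic strength etc.

brain_bits = synapses * (address_bits + strength_bits)  # ~4e16 bits

# If every observed bit halves the remaining uncertainty, we need
# brain_bits observations; spread over a century:
seconds_per_century = 100 * 365.25 * 24 * 3600
rate_mbps = brain_bits / seconds_per_century / 1e6

print(f"{brain_bits:.1e} bits; ~{rate_mbps:.1f} Mbit/s sustained for a century")
```

So the 12.7 Mbit/s figure is just the total description length divided by the seconds in a century, under the generous one-bit-halves-uncertainty assumption.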
>
> And is anyone developing something like this, or do I have to take off
> a few days and develop it myself? OK, a few centuries.... I haven't
> found anything on this except for Anders Sandberg's highly imaginative
> "Think Before Asking,"
> http://eclipsephase.com/downloads/ThinkBeforeAsking.pdf, which touches
> on some aspects of this.
>
Thanks for mentioning it. We (Stuart Armstrong and I) are presenting
the paper it was based on at a philosophy and AI conference in
Thessaloniki in two weeks.
And anybody who says you cannot make something smarter than yourself has
been wrong since Arthur Samuel's checkers program. :-)
--
Anders Sandberg,
Future of Humanity Institute
James Martin 21st Century School
Philosophy Faculty
Oxford University