[ExI] Strong AI Hypothesis: logically flawed?

Anders Sandberg anders at aleph.se
Tue Sep 30 23:28:21 UTC 2014


Ohad Asor <ohadasor at gmail.com>, 30/9/2014 11:22 AM:

On Tue, Sep 30, 2014 at 11:59 AM, Anders Sandberg <anders at aleph.se> wrote:

The problem is that if the outcome space is not well defined, the entire edifice built on the Kolmogorov axioms crashes. In most models and examples we use, the outcome space is well defined: ravens have colours. But what if I show you a raven whose colour was *fish*? (Or colourless green?)
Sure. That's why we speak about a great number of variables. What if the input and output of our learner were video and audio? I don't see any obstacle to implementing it.
Combinatorial explosion. When you discover spatial coherence and the existence of objects, a lot of things like video become learnable that would otherwise need exponentially large training sets. So the real issue is how to get the hierarchies and structures from the data; I assume you know about the work from the Josh Tenenbaum empire?
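
(A back-of-the-envelope sketch in Python of the explosion; the frame size, sprite counts and type counts are my own illustrative assumptions, not anything from this thread. The point is only how much the outcome space shrinks once frames are described by objects rather than raw pixels.)

import math

frame_pixels = 84 * 84   # a DQN-style downsampled greyscale frame (assumed)
grey_levels = 256        # 8-bit intensity per pixel

# Every distinct raw frame is a separate outcome.
raw_space = grey_levels ** frame_pixels
print("raw pixel space    ~ 10^%d" % int(math.log10(raw_space)))

# Suppose the learner instead discovers objects: at most 60 sprites on screen,
# each one of 8 types, each at one of 84*84 positions (hypothetical figures).
object_space = (8 * frame_pixels) ** 60
print("object-level space ~ 10^%d" % int(math.log10(object_space)))

Roughly 10^16992 raw frames against roughly 10^285 object configurations: both huge, but the object-level description is smaller by thousands of orders of magnitude, which is the difference between learnable and hopeless for a learner that has to see representative examples.
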
Apparently DeepMind's video-game-playing reinforcement learning agent somehow figured out object constancy for the aliens in the first wave of Space Invaders and could learn to play well, but got confused by the second wave since the aliens looked different - it didn't generalize from the first aliens, and had to re-learn the game completely for that stage.
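
(To make the failure mode concrete, here is a minimal hypothetical sketch - my own construction, not DeepMind's actual agent, which uses a convolutional network rather than a lookup table - of why a learner keyed on raw appearance has to start over when the sprites change.)

import random
from collections import defaultdict

ACTIONS = ["left", "right", "fire"]
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

# Q-values indexed by the literal frame appearance.
Q = defaultdict(float)  # (frame_bytes, action) -> estimated value

def choose_action(frame_bytes):
    """Epsilon-greedy over whatever the raw frame happens to look like."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(frame_bytes, a)])

def update(frame_bytes, action, reward, next_frame_bytes):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_frame_bytes, a)] for a in ACTIONS)
    Q[(frame_bytes, action)] += ALPHA * (reward + GAMMA * best_next
                                         - Q[(frame_bytes, action)])

# When wave 2 redraws the aliens, every frame_bytes key is new, so all the
# wave-1 values sit unused and learning starts over; a state built from
# recovered objects (alien positions and velocities) would have carried over.
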

We have the mathematical promises. We seem to have enough computational power. We have plenty of algorithms. Just two things need to be solved: how to train this brain (it's apparently more difficult than all the tasks mentioned), and who will convince investors to fund such a project? :)
DeepMind found investors :-)


Anders Sandberg, Future of Humanity Institute, Philosophy Faculty of Oxford University

