[ExI] Strong AI Hypothesis: logically flawed?

Ohad Asor ohadasor at gmail.com
Wed Oct 1 00:58:33 UTC 2014

On Wed, Oct 1, 2014 at 2:28 AM, Anders Sandberg <anders at aleph.se> wrote:

> Combinatorial explosion. When you discover spatial coherence and the
> existence of objects a lot of things like video become learnable that
> otherwise would need exponentially large training sets. So the real issue
> is how to get the hierarchies and structures from the data; I assume you
> know about the work in the Josh Tenenbaum empire?
> Apparently DeepMind's video-game-playing reinforcement agent somehow
> figured out object constancy for the aliens in the first wave of Space
> Invaders and could learn to play well, but got confused by the second wave
> since those aliens looked different - it didn't generalize from the first
> aliens, and had to re-learn the game completely for that stage.

Some contemporary learning algorithms even run in O(1) time with respect to
the size of the training set. The example that jumps to my mind is the
Pegasos SVM solver, whose per-iteration cost does not depend on the number
of training examples. It is a shallow learner, though. I don't recall such
an example for deep learning right now, but I have myself developed deep
neural networks, some fully connected, with training time of O(n) per
iteration in the number of connections, and an optimization convergence
rate of O(n^-2) in the number of iterations. All of this is, of course,
learnable in the sense of PAC learning theory. Yann LeCun, whom I mentioned
earlier, has also demonstrated extraordinary results with deep neural
networks.
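To make the Pegasos point concrete, here is a minimal sketch of its
stochastic sub-gradient update for a linear SVM (function and parameter
names are mine, not from any particular library). Each iteration draws one
random example, so the per-iteration cost is O(d) in the feature dimension
and independent of the training-set size:

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, n_iters=2000, seed=0):
    """Train a linear SVM with the Pegasos stochastic sub-gradient method.

    X: (n, d) array of examples; y: (n,) array of labels in {-1, +1}.
    Per-iteration cost is O(d), independent of n.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)           # one randomly chosen example
        eta = 1.0 / (lam * t)         # decaying step size 1/(lambda * t)
        if y[i] * X[i].dot(w) < 1:    # hinge-loss margin is violated
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                         # only shrink by the regularizer
            w = (1 - eta * lam) * w
        # project back onto the ball of radius 1/sqrt(lambda)
        norm = np.linalg.norm(w)
        if norm > 1.0 / np.sqrt(lam):
            w *= (1.0 / np.sqrt(lam)) / norm
    return w
```

On a simple linearly separable problem this converges to a good separator
in a few thousand single-example updates, which is the sense in which the
runtime is decoupled from the amount of training data.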
