[ExI] Strong AI Hypothesis: logically flawed?

Ohad Asor ohadasor at gmail.com
Tue Sep 30 09:17:02 UTC 2014


On Tue, Sep 30, 2014 at 11:59 AM, Anders Sandberg <anders at aleph.se> wrote:

>
> The problem is that if the outcome space is not well defined, the entire
> edifice built on the Kolmogorov axioms crashes. In most models and examples
> we use, the outcome space is well defined: ravens have colours. But what if
> I show you a raven whose colour was *fish*? (or colourless green?)
>
Sure. That's why we speak about a great number of variables. What if the
input and output of our learner were video and audio? I don't see any
obstacle to implementing it. We have the mathematical guarantees. We seem to
have enough computational power. We have plenty of algorithms. Only two
things remain to be solved: how to train this brain (apparently more
difficult than all the tasks mentioned), and who will convince investors to
fund such a project? :)
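
As a toy illustration of what "a learner whose input and output are video
and audio" could mean at the code level, here is a minimal sketch. Everything
in it is an illustrative assumption on my part, not a proposal for the actual
system: frames are flattened into feature vectors, the data is synthetic, and
ridge regression stands in for whatever learning algorithm one would really use.

    # Minimal, hypothetical sketch: a learner whose "senses" are toy video
    # and audio frames, flattened into vectors. Dimensions, data, and model
    # are all illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Each time step is a flattened video frame concatenated with an audio frame.
    T, video_dim, audio_dim = 1000, 64, 16
    frames = rng.standard_normal((T, video_dim + audio_dim))

    # Learning task: predict the next combined frame from the current one.
    X, Y = frames[:-1], frames[1:]

    # Ridge regression as a stand-in for "the learner"; any algorithm with
    # known generalization bounds could take its place.
    lam = 1e-2
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

    pred = X @ W
    print("training MSE:", float(np.mean((pred - Y) ** 2)))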

Unfortunately, a Google search for generalization error bounds, VC dimension,
etc. does not turn up very informative results. An informative example I
quickly found online is here
<https://cs.uwaterloo.ca/~shai/Chapters_4_CS886.pdf>; see Corollary 1 on
page 27.
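
For readers who don't want to open the PDF, the bounds in question generally
take the following shape. This is a generic form with an unspecified constant
C, not the exact statement of that corollary, so consult the linked chapter
for the precise constants: with probability at least 1 - \delta over an
i.i.d. sample S of size m, every hypothesis h in a class H of VC dimension d
satisfies

    L_D(h) \le L_S(h) + C \sqrt{ \frac{d \log(m/d) + \log(1/\delta)}{m} }

where L_S(h) is the empirical error of h on the sample and L_D(h) its true
error on the underlying distribution D.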

