[ExI] Fwd: ai
Anders Sandberg
anders at aleph.se
Mon Mar 14 07:47:49 UTC 2016
Again, I am not talking about perfect performance. It is enough for Siri
to recognize what kinds of questions it is unlikely to give a
satisfactory answer to (essentially just a supervised learning problem
of mapping questions to user satisfaction) and say so. It will sometimes
make mistakes, but that is acceptable in practice. Note the huge gulf between
mathematical in-principle arguments and actual computational feasibility.
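The supervised-learning framing above can be made concrete. A minimal sketch (topics, feedback data, and the 0.5 threshold are all illustrative assumptions, not anything Siri actually does): estimate per-topic satisfaction rates from logged feedback and abstain when the estimate is low or the topic is unseen.

```python
# Sketch of "knowing what you don't know" as supervised learning:
# map question topics to observed user satisfaction and refuse to
# answer when estimated satisfaction falls below a threshold.
# Topics, training data, and the 0.5 threshold are hypothetical.
from collections import defaultdict

class Abstainer:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        # topic -> [satisfied_count, total_count]
        self.stats = defaultdict(lambda: [0, 0])

    def train(self, topic, satisfied):
        s = self.stats[topic]
        s[0] += satisfied
        s[1] += 1

    def should_answer(self, topic):
        satisfied, total = self.stats[topic]
        if total == 0:
            return False  # never seen this topic: safer to say "I don't know"
        return satisfied / total >= self.threshold

bot = Abstainer()
for _ in range(9):
    bot.train("weather", True)
bot.train("weather", False)
bot.train("philosophy", False)
print(bot.should_answer("weather"))     # True  (0.9 >= 0.5)
print(bot.should_answer("philosophy"))  # False (0.0 < 0.5)
print(bot.should_answer("quantum"))     # False (unseen topic)
```

As the text says, this will sometimes misjudge — the point is only that mapping questions to past satisfaction is an ordinary learning problem, not something blocked by in-principle undecidability.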
On 2016-03-13 22:32, John Clark wrote:
> On Sun, Mar 13, 2016 at 2:26 PM, Anders Sandberg <anders at aleph.se> wrote:
>
> > In principle a learning system might even be able to learn what it
> > cannot do, avoiding wasting time
>
> But Turing tells us that, in general, that cannot be done. Maybe looking
> for an example to prove that the Goldbach conjecture is wrong is a waste
> of time and maybe it's not, and maybe looking for a proof that Goldbach
> is right is a waste of time and maybe it's not; and maybe Goldbach is
> true but not provable, so both things are a waste of time. That is why
> an AI, or any form of intelligence, needs both the ability to get bored
> and the ability to disobey an order.
>
> John K Clark
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
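John's bounded-search point can be illustrated directly. Since we cannot know in advance whether a search for a Goldbach counterexample will ever terminate, a practical searcher needs a budget — a crude form of "getting bored". A minimal sketch (the budget value is an arbitrary illustration):

```python
# Bounded search for a Goldbach counterexample. Whether such a search
# ever succeeds is unknown, so the searcher runs under a fixed budget
# rather than forever -- a mechanical stand-in for "getting bored".

def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds(n):
    """Check whether the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample(budget):
    """Search even numbers up to `budget`; return a counterexample or None."""
    for n in range(4, budget + 1, 2):
        if not goldbach_holds(n):
            return n
    return None  # budget exhausted: nothing found, but nothing proven either

print(search_counterexample(10_000))  # None
```

Exhausting the budget settles nothing, of course — that is precisely the gulf between in-principle undecidability and what a system does in practice.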
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University