[ExI] Fwd: ai

John Clark johnkclark at gmail.com
Mon Mar 14 21:40:56 UTC 2016

On Mon, Mar 14, 2016  Anders Sandberg <anders at aleph.se> wrote:

> Again, I am not talking about perfect performance. It is enough for Siri
> to recognize what kind of questions it is unlikely to give a satisfactory
> answer to (essentially just a supervised learning problem of mapping
> questions to user satisfaction) and say it.
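The abstention scheme Anders sketches above can be illustrated with a toy model. The class name, the per-category bookkeeping, and the threshold value below are all hypothetical choices of mine, not anything from the thread; this is a minimal sketch of "learn when your answers satisfy users, and decline when the predicted satisfaction is low."

```python
# Minimal sketch of satisfaction-based abstention, assuming feedback
# arrives as (question category, was the user satisfied?) pairs.
# All names and numbers here are illustrative, not a real Siri mechanism.

from collections import defaultdict


class AbstainingAssistant:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        # per-category counts: [satisfied answers, total answers]
        self.stats = defaultdict(lambda: [0, 0])

    def record_feedback(self, category, satisfied):
        """Log whether a past answer in this category satisfied the user."""
        s, n = self.stats[category]
        self.stats[category] = [s + int(satisfied), n + 1]

    def predicted_satisfaction(self, category):
        """Estimated probability of a satisfactory answer (0.5 prior if unseen)."""
        s, n = self.stats[category]
        return s / n if n else 0.5

    def respond(self, category, answer):
        """Answer only when predicted satisfaction clears the threshold."""
        if self.predicted_satisfaction(category) < self.threshold:
            return "I'm unlikely to answer that well."
        return answer
```

So "recognizing what kind of questions it is unlikely to answer well" reduces to an ordinary supervised estimate plus a cutoff; no boredom or disobedience is involved, which is the point at issue in the reply below.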

In other words, sometimes the AI will get bored with humans telling it what
to do and will ignore orders.

> It will sometimes make mistakes, but that is practically OK.

That depends on the type of mistake. If the AI makes the mistake of NEVER
getting bored and NEVER disobeying, there will be practical consequences
that are not OK.

> Note the huge gulf between mathematical in-principle arguments and actual
> computational feasibility.

Sometimes there is no way to tell beforehand how difficult a task will be:
it might be easy, it might be extraordinarily difficult, or it might be
absolutely impossible. The AI must make a judgement call about when to give
up, and sometimes its judgement will be wrong, but that's the way it goes.
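One practical version of that judgement call is a resource budget: since task difficulty generally can't be known in advance (the halting problem guarantees this for the worst case), the solver simply gives up after spending some fixed effort. The task class, function names, and budget values below are all illustrative assumptions of mine, not a specific proposal from the thread.

```python
# Sketch of "give up after a fixed budget" as one concrete give-up policy.
# The budget is an arbitrary illustrative choice, not a principled one:
# sometimes it will cut off a task that was one step from succeeding.

def solve_with_budget(task, max_steps=1000):
    """Run an iterative solver, returning None (giving up) after max_steps."""
    state = task.initial_state()
    for _ in range(max_steps):
        if task.is_solved(state):
            return state
        state = task.step(state)
    return None  # the judgement call: abandon the task, possibly prematurely


class CountTask:
    """Toy stand-in for real work: reach a target by incrementing a counter."""

    def __init__(self, target):
        self.target = target

    def initial_state(self):
        return 0

    def is_solved(self, state):
        return state >= self.target

    def step(self, state):
        return state + 1
```

An easy task finishes inside the budget; a hard one looks identical from the outside until the budget runs out, which is exactly why the give-up decision is sometimes wrong.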

 John K Clark
