[ExI] Fwd: ai
Anders Sandberg
anders at aleph.se
Sun Mar 13 18:26:51 UTC 2016
On 2016-03-12 22:27, William Flynn Wallace wrote:
>
> What is the equivalent in AI? Are there instructions you can feed to
> one and it will fail to carry them out? Like HAL?
Failing to carry out instructions can happen because (1) they are not
understood or are misunderstood, (2) the system decides not to do them, or
(3) the system "wants" to do them but finds itself unable.
For example, if you give the Wolfram Integrator too hard a problem, it
will time out after a while (sometimes it detects the hardness by
inspection and stops, explaining that it thinks there is no reachable
solution). That is type 3 not-doing turning into type 2.
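A minimal sketch of that distinction, purely illustrative (not Wolfram's
internals; the hardness check and time budget are made-up stand-ins):

import time

def looks_hopeless(problem):
    # hypothetical stand-in for "detects the hardness by inspection"
    return problem.get("known_hopeless", False)

def solve(problem, budget_s=2.0):
    if looks_hopeless(problem):
        return "declined: no reachable solution"   # type 2: decides not to try
    deadline = time.time() + budget_s
    while time.time() < deadline:
        pass                                       # stand-in for the actual search
    return "timed out"                             # type 3: tried, but found itself unable

print(solve({"known_hopeless": True}))
print(solve({}))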
In principle a learning system might even be able to learn what it
cannot do, avoiding wasted time - but it might also learn
helplessness when it shouldn't (I had a reinforcement learning agent
that decided the best action was to avoid doing anything, since most
actions it took had bad consequences).
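For the curious, here is a toy reconstruction of that kind of failure
(not the actual agent; the action set and rewards are invented): tabular
value learning in a single-state world where every action except "noop"
is punished, so the agent converges on doing nothing.

import random

ACTIONS = ["noop", "move", "grab", "push"]           # hypothetical action set
REWARD = {"noop": 0.0, "move": -1.0, "grab": -1.0, "push": -1.0}

q = {a: 0.0 for a in ACTIONS}                        # action-value estimates
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # epsilon-greedy: mostly exploit, occasionally explore
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    r = REWARD[a] + random.gauss(0, 0.1)             # noisy reward signal
    q[a] += alpha * (r - q[a])                       # single-state value update

print(q)   # "noop" ends up with the highest value: learned helplessness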
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University