On 2016-03-12 22:27, William Flynn Wallace wrote:
> What is the equivalent in AI? Are there instructions you can feed to
> one and it will fail to carry them out? Like HAL?

Failing to carry out instructions can happen because (1) they are not
understood or are misunderstood, (2) the system decides not to do them,
or (3) the system "wants" to do them but finds itself unable.

For example, if you give the Wolfram Integrator too hard a problem, it
will after a while time out (sometimes it detects the hardness by
inspection and stops, explaining that it thinks there is no reachable
solution). This is a type 3 failure turning into a type 2 one.

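To make that concrete, here is a toy Python sketch (sympy standing in
for the Wolfram engine; the expressions and time limit are made up) of
how a watchdog timeout turns "wants to but cannot finish" (type 3) into
an explicit refusal (type 2):

# Toy sketch: a watchdog around a symbolic solver. sympy stands in for
# the Wolfram engine; the expressions and time limit are made up.
import multiprocessing as mp

import sympy as sp


def _integrate(expr_str, result_queue):
    x = sp.Symbol("x")
    result_queue.put(str(sp.integrate(sp.sympify(expr_str), x)))


def integrate_with_timeout(expr_str, seconds=5.0):
    """Return an antiderivative as a string, or give up after `seconds`."""
    result_queue = mp.Queue()
    worker = mp.Process(target=_integrate, args=(expr_str, result_queue))
    worker.start()
    worker.join(seconds)
    if worker.is_alive():
        # Still grinding away: stop it and report the failure explicitly.
        worker.terminate()
        worker.join()
        return "Gave up: no solution reached within the time limit."
    return result_queue.get()


if __name__ == "__main__":
    print(integrate_with_timeout("x*exp(x)"))   # easy, returns quickly
    print(integrate_with_timeout("exp(x**x)"))  # may time out or come back unevaluated
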
In principle a learning system might even be able to learn what it
cannot do, avoiding wasted time - but also potentially learning
helplessness when it shouldn't (I had a reinforcement learning agent
that decided the best action was to avoid doing anything, since most
actions it took had bad consequences).

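Not my actual agent, but a toy single-state Q-learning sketch (the
actions and rewards are made up) of how "do nothing" becomes the
learned policy when most actions are punished:

# Toy sketch of learned helplessness: most actions are punished,
# doing nothing costs nothing, so the policy collapses to inaction.
import random

ACTIONS = ["do_nothing", "explore", "grab", "push"]


def reward(action):
    if action == "do_nothing":
        return 0.0
    # Acting occasionally pays off, but usually goes wrong.
    return 1.0 if random.random() < 0.1 else -1.0


def train(episodes=5000, alpha=0.1, epsilon=0.2):
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        # one-step (bandit-style) value update
        q[action] += alpha * (reward(action) - q[action])
    return q


if __name__ == "__main__":
    q_values = train()
    print(q_values)
    print("learned policy:", max(q_values, key=q_values.get))
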
--
Anders Sandberg
Future of Humanity Institute
Oxford Martin School
Oxford University