[ExI] 'Friendly' AI won't make any difference
John Clark
johnkclark at gmail.com
Sun Feb 28 18:00:53 UTC 2016
On Sun, Feb 28, 2016 at 4:26 AM, Anders Sandberg <anders at aleph.se> wrote:
>
> the general point still stands: despite the halting theorem it is quite
> possible to detect large categories of infinite loops automatically
You can detect that you're in a loop provided the loop's state fits in the
memory you have available, but there is no way to tell you're in the sort
of situation Turing was talking about, which is more an infinite maze than
an infinite loop: it never repeats, and yet you still never get anywhere.
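The distinction can be sketched with Floyd's tortoise-and-hare cycle
detection. This is a minimal illustration, not anything from the thread:
the function names and the step bound are my own assumptions. A process
that revisits a state is provably looping; a process whose state never
recurs can only ever be reported as "no loop found within the bound".

```python
def has_cycle(step, state, max_steps=10**6):
    """Floyd's tortoise-and-hare: detect whether iterating `step`
    from `state` ever revisits a state (a true infinite loop).
    Returns True if a repeated state is found, False if the bound
    is exhausted first -- the 'infinite maze' case stays undecided."""
    slow = fast = state
    for _ in range(max_steps):
        slow = step(slow)            # advance one step
        fast = step(step(fast))      # advance two steps
        if slow == fast:             # states collided: a cycle exists
            return True
    return False

# A looping process: a counter modulo 10 must repeat its state.
print(has_cycle(lambda n: (n + 1) % 10, 0))            # True

# A non-repeating process: the counter grows without bound, so no
# state ever recurs; all we can say is that no loop was found.
print(has_cycle(lambda n: n + 1, 0, max_steps=1000))   # False
```

Note the asymmetry: `True` is a proof of looping, but `False` is only
"not yet", which is exactly the gap the halting theorem guarantees.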
> >
> Now, in AI we do not usually want the agent to halt (for a question-answer
> system this is the goal, but not for a robot).
You want the robot to put the ketchup in the bottle before it puts the cap
on; if it falls into an infinite maze while contemplating how best to put
the ketchup in the bottle, the cap will never go on.
I maintain that any AI, or any mind of any sort, that has a fixed
unalterable goal is doomed to failure.
John K Clark