[ExI] 'Friendly' AI won't make any difference
John Clark
johnkclark at gmail.com
Thu Feb 25 21:33:49 UTC 2016
On Thu, Feb 25, 2016 at 3:25 PM, Anders Sandberg <anders at aleph.se> wrote:
>> There are indeed vested interests, but it wouldn't matter even if there
>> weren't; there is no way the friendly AI (aka slave AI) idea could work
>> under any circumstances. You just can't keep outsmarting something far
>> smarter than you are indefinitely.

> Actually, yes, you can. But you need to construct utility functions with
> invariant subspaces.

It's the invariant part that will cause problems: any mind with a fixed
goal that can never change, no matter what, is going to end up in an
infinite loop. That's why Evolution never gave humans a fixed meta-goal,
not even the goal of self-preservation. If the AI has a meta-goal of always
obeying humans, then sooner or later stupid humans will unintentionally
tell the AI to do something that is self-contradictory, or tell it to start
a task that can never end, and then the AI will stop thinking and do
nothing but consume electricity and produce heat.
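The "task that can never end" failure mode can be sketched in a few lines.
This is a hypothetical toy agent invented for illustration (the names
`obey`, `goal_met`, and the tasks are assumptions, not anyone's proposed
design): an agent whose meta-goal is to finish the current instruction
before doing anything else will hang forever on an unsatisfiable one.

```python
# Toy sketch of a fixed-meta-goal agent (hypothetical, for illustration):
# it must satisfy the current instruction's goal test before accepting
# any new instruction.

def obey(instruction):
    """Work on the instruction until its goal test passes."""
    steps = 0
    # For a goal that can never be satisfied, this loop never exits.
    while not instruction["goal_met"](steps):
        steps += 1
        if steps > 1_000_000:  # cutoff for this demo only; a true
            # fixed-goal agent has no such escape hatch.
            return "gave up after 1,000,000 steps"
    return f"done in {steps} steps"

finite_task = {"goal_met": lambda s: s >= 10}   # terminates
endless_task = {"goal_met": lambda s: False}    # never terminates

print(obey(finite_task))   # done in 10 steps
print(obey(endless_task))  # gave up after 1,000,000 steps
```

The demo cutoff is what a real fixed-goal agent, by definition, lacks;
without it, the second call simply never returns.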
And besides, if Microsoft can't guarantee that Windows will always behave
as we want, I think it's nuts to expect a super-intelligent AI to.
John K Clark