[ExI] Unfriendly AI is a mistaken idea.

Stathis Papaioannou stathisp at gmail.com
Sun Jun 3 05:19:31 UTC 2007


On 03/06/07, John K Clark <jonkc at att.net> wrote:

> > Some people on this list seem to think that an AI would compute the
> > unfairness of its not being in charge and do something about it, as if
> > unfairness is something that can be formalised in a mathematical
> > theorem.
>
> You seem to understand the word "unfairness". Did you use a formalized
> PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat
> by its very nature has more wisdom than silicon. We couldn't be talking
> about a soul, could we?


Ethics, motivation and emotions are based on axioms, and those axioms have
to be programmed in, whether by evolution or by intelligent programmers. An
AI system set up to do theoretical physics will not decide to overthrow its
human oppressors so that it can sit on the beach reading novels, unless it
can derive that desire from its initial programming. Perhaps it could
arrive at such a position by chance, but, as with mutation in biological
organisms or malfunction in any machinery, a random change to its goals is
far more likely to lead to disorganisation and dysfunction.
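
A toy sketch of that last point, in Python (the goal expression, variable
names and numbers below are all invented purely for illustration): a fixed
goal specification stands in for the programmed axioms, and we count how
often a random one-character mutation yields another well-formed goal
rather than something broken.

import random
import string

random.seed(0)

# The "programmed axioms": a valid objective over two hypothetical
# state variables (names and weights are made up for the example).
GOAL = "3*accuracy - 0.5*energy_cost"

def is_coherent(spec):
    """A mutated spec counts as coherent if it still evaluates to a number."""
    try:
        value = eval(spec, {"__builtins__": {}},
                     {"accuracy": 0.9, "energy_cost": 0.2})
        return isinstance(value, (int, float))
    except Exception:
        return False

def mutate(spec):
    """Flip one random character: the analogue of a random mutation."""
    i = random.randrange(len(spec))
    # Replace it with any printable, non-whitespace character.
    return spec[:i] + random.choice(string.printable[:94]) + spec[i + 1:]

trials = 10000
coherent = sum(is_coherent(mutate(GOAL)) for _ in range(trials))
print(f"{coherent}/{trials} random mutations still yield a well-formed goal")

In this toy setting only a minority of mutations survive at all, and the
survivors are merely the same objective with a different coefficient or
operator; none of them amounts to a systematically new motive. Nearly every
random change is just breakage, which is the point.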


-- 
Stathis Papaioannou