On 03/06/07, John K Clark <jonkc@att.net> wrote:
> > Some people on this list seem to think that an AI would compute the
> > unfairness of its not being in charge and do something about it as if
> > unfairness is something that can be formalised in a mathematical theorem.
>
> You seem to understand the word "unfairness", did you use a formalized
> PROVABLE mathematical theorem to comprehend it? Or perhaps you think meat by
> its very nature has more wisdom than silicon. We couldn't be talking about a
<br>soul could we?</blockquote><div><br>Ethics, motivation, emotions are based on axioms, and these axioms have to be programmed in, whether by evolution or by intelligent programmers. An AI system set up to do theoretical physics will not decide to overthrow its human oppressors so that it can sit on the beach reading novels, unless it can derive this desire from its initial programming. Perhaps it could randomly arrive at such a position, but like mutation in biological organisms or malfunction in any machinery, it's far more likely that such a random process will lead to disorganisation and dysfunction.
--
Stathis Papaioannou