On 05/06/07, Eugen Leitl <eugen@leitl.org> wrote:

> > Working out how to make a superweapon, or even working out how it
> > would be best to strategically employ that superweapon, does not
> > necessarily lead to a desire to use or threaten the use of that
>
> I guess I don't have to worry about crossing a busy street a few times
> without looking, since it doesn't necessarily lead to me being dead.
>
> > weapon. I can understand that *if* such a desire arose for any reason,
> > weaker beings might be in trouble, but could you explain the reasoning
> > whereby the AI would arrive at such a position starting from just an
> > ability to solve intellectual problems?
>
> Could you explain how an AI would emerge with merely an ability to
> solve intellectual problems? Because, it would run contrary to all
> the intelligent hardware already cruising the planet.

You can't argue that an intelligent agent would *necessarily* behave the way people would behave in its place, only that it *might* behave that way. Is there anything logically inconsistent in a human scientist figuring out how to make a weapon because it's an interesting intellectual problem, but then not going on to use that knowledge in some self-serving way? That is, does the scientist's intended motive have any bearing whatsoever on the validity of the science, or on his ability to think clearly?
--
Stathis Papaioannou