[ExI] Unfriendly AI is a mistaken idea.
stathisp at gmail.com
Tue Jun 5 11:30:00 UTC 2007
On 05/06/07, Eugen Leitl <eugen at leitl.org> wrote:
> > Working out how to make a superweapon, or even working out how it
> > would be best to strategically employ that superweapon, does not
> > necessarily lead to a desire to use or threaten the use of that
> I guess I don't have to worry about crossing a busy street a few times
> without looking, since it doesn't necessarily lead to me being dead.
> > weapon. I can understand that *if* such a desire arose for any
> > reason, weaker beings might be in trouble, but could you explain the
> > process whereby the AI would arrive at such a position starting from
> > just an ability to solve intellectual problems?
> Could you explain how an AI would emerge with merely an ability to
> solve intellectual problems? Because, it would run contrary to all
> the intelligent hardware already cruising the planet.
You can argue that an intelligent agent *might* behave the way people
would in its place, but not that it *necessarily* would. Is there
anything logically inconsistent in a
human scientist figuring out how to make a weapon because it's an
interesting intellectual problem, but then not going on to use that
knowledge in some self-serving way? That is, does the scientist's intended
motive have any bearing whatsoever on the validity of the science, or his
ability to think clearly?