[ExI] Unfriendly AI is a mistaken idea.
stathisp at gmail.com
Tue Jun 5 07:32:44 UTC 2007
On 05/06/07, John K Clark <jonkc at att.net> wrote:
> But it doesn't matter what I want because I won't be designing that
> theoretical physicist, another AI will. And so Mr. Jupiter Brain will not
> be nearly that specialized, because a demand can be found for many others.
> Besides being a physicist, the AI will also be a superb engineer, economist,
> general, businessman, poet, philosopher, romantic novelist, pornographer,
> mathematician, comedian, and lots more.
Perhaps an AI with general intelligence would have all these abilities, but
I don't see why it couldn't just specialise in one area, and even if it were
multi-talented I don't see why it should be motivated to do anything other
than solve intellectual problems. Working out how to make a superweapon, or
even working out how it would be best to strategically employ that
superweapon, does not necessarily lead to a desire to use or threaten the
use of that weapon. I can understand that *if* such a desire arose for any
reason, weaker beings might be in trouble, but could you explain the
reasoning whereby the AI would arrive at such a position starting from just
an ability to solve intellectual problems?
> Do you also believe that the programmers who wrote Microsoft Word determined
> every bit of text that program ever produced?
They did determine the exact output given a particular input. Biological
intelligences are much more difficult to predict than that, since their
hardware and software changes dynamically according to the environment.
However, even in the case of biological intelligences it is possible to
predict, for example, that a man with a gun held to his head will with high
probability follow certain instructions.
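The determinism point can be put concretely. A conventional program with no
dynamically changing state is just a fixed mapping from inputs to outputs, so
its authors have fixed every output in advance, whether or not they ever
enumerate them. A toy sketch (the `render` function is hypothetical, standing
in for any text-processing program):

```python
# A fixed program is a fixed input-to-output mapping: run it twice on
# the same input and you must get the same result. The authors of the
# code "determined" every output the moment they wrote it.
def render(text):
    # toy stand-in for a word processor's text transformation
    return text.upper().replace("  ", " ")

a = render("hello  world")
b = render("hello  world")
assert a == b  # identical input, identical output, every time
```

A biological intelligence, by contrast, rewrites its own "program" as it
interacts with the environment, which is why only probabilistic predictions
like the gun-to-the-head case are available.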