[ExI] Unfriendly AI is a mistaken idea.
eugen at leitl.org
Tue Jun 5 09:03:15 UTC 2007
On Tue, Jun 05, 2007 at 05:32:44PM +1000, Stathis Papaioannou wrote:
> Perhaps an AI with general intelligence would have all these
By definition. That's the 'general' part.
> abilities, but I don't see why it couldn't just specialise in one
> area, and even if it were multi-talented I don't see why it should be
It is not important what most members of a population do, but what just
one of them does, if that one is relevant.
> motivated to do anything other than solve intellectual problems.
Remember, one is enough.
> Working out how to make a superweapon, or even working out how it
> would be best to strategically employ that superweapon, does not
> necessarily lead to a desire to use or threaten the use of that
I guess I don't have to worry about crossing a busy street a few times
without looking, since it doesn't necessarily lead to me being dead.
> weapon. I can understand that *if* such a desire arose for any reason,
> weaker beings might be in trouble, but could you explain the reasoning
> whereby the AI would arrive at such a position starting from just an
> ability to solve intellectual problems?
Could you explain how an AI would emerge with merely an ability to
solve intellectual problems? Because, it would run contrary to all
the intelligent hardware already cruising the planet.
> Do you also believe that the programmers who wrote Microsoft Word
> determined every bit of text that program ever produced?
> They did determine the exact output given a particular input.
No, only in the regression tests. If they did, bugs wouldn't exist.
> Biological intelligences are much more difficult to predict than that,
> since their hardware and software changes dynamically according to the
Conventional discrete logic can emulate any connectivity and state
change quite nicely. In fact, if you want to do it quickly, you move
electrons, not atoms, and especially not large hydrated biopolymers.
> environment. However, even in the case of biological intelligences it
> is possible to predict, for example, that a man with a gun held to his
> head will with high probability follow certain instructions.
Heh. People never panic nor act according to a wrong model of the world, of course.
Eugen* Leitl http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE