[extropy-chat] Fundamental limits on the growth rate of superintelligences

John K Clark jonkc at att.net
Tue Feb 14 18:16:54 UTC 2006


"kevinfreels.com" <kevin at kevinfreels.com>

> I think it's a long stretch to think that any AI would be perfect and not
> make mistakes.

An AI will not be perfect, but it doesn't need to be to overcome us; all it
needs is to think several million times faster than we do and to have the
ability to add to its brain hardware virtually without limit. If it does
that, it will be running things, not us.

> You simply can't know their minds or motivations once they
> become independent.

One thing you can be certain of, however: they will prefer existence to
non-existence, otherwise they wouldn't exist for very long. So they won't be
happy if we try to kill them and will take steps to prevent it, probably
with extreme prejudice.

> Some may even go flat-out crazy.

If so, that doesn't bode well for the continued existence of the human race.

> Also, you are assuming that the AI has nothing better to do with its time
> than to improve upon itself. It may very well become so interested in
> observing it may never choose to do anything but observe. The AI version
> of the couch potato.

Some defective AIs may become slackers, but they aren't the ones that will
grow into Jupiter Brains.

  John K Clark
