[ExI] Paul vs Eliezer

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Wed Apr 6 03:32:28 UTC 2022

On Tue, Apr 5, 2022 at 12:16 AM Jason Resch via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> Rafal,
> You raise many interesting questions.
> I think when it comes to motivational questions, a general and
> self-improving intelligence will devote some resources to attempting to
> learn, adapt, and grow, as otherwise it will eventually be outclassed by
> superintelligences that do these things.

### You are correct in the situation where competing AIs exist, but the
question we tried to address was the relative likelihood of a dangerous vs.
a merely confused AI emerging from our attempts at creating AI. I suggest
that the first AIs we make will be weird and ineffectual, though some could
be dangerous to varying degrees. If the first of these weird or dangerous
AIs gains the ability to preempt the creation of additional independent
AIs, then the outcome could be vastly different from what you outlined,
locked in until an alien AI shows up, and that alien AI could itself be
quite weird. In the absence of competition between AIs there is a high
likelihood of very unusual, deviant AI, since it is Darwinian (or
Lamarckian) competition that weeds out the weirdos.

One-off creatures are immune to evolution. Eliezer pointed this out on
this list decades ago, so I am not saying anything new.

