[ExI] Paul vs Eliezer

Jason Resch jasonresch at gmail.com
Tue Apr 5 05:42:09 UTC 2022


On Tue, Apr 5, 2022, 12:30 AM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> *…*> *On Behalf Of *Jason Resch via extropy-chat
> *Subject:* Re: [ExI] Paul vs Eliezer
>
> Rafal,
>
> >…The net result then should be a more positive, more perfect world, based
> on motives and values that are already universal to all conscious life.
>
> Jason
>
> Jason you had me right up until the last sentence of your post.  There are
> no goals universal to all conscious life, not even the most basic:
> self-preservation.  Exceptions are rare, such as the perpetrator of a
> murder/suicide, but they exist.
>

You're right, they're not universal for individual intelligences, which may
be irrational, delusional, or sadistic. But speaking generally of
civilizations or of rational agents, I think these goals are broadly held,
and they may arise naturally in any generally intelligent AI allowed to
progress unboundedly.

This is not to say that we couldn't create an AI with pathological
motivations and no capacity to change them, but I think the fear that any
intelligence explosion inevitably or naturally leads to unfriendly AI is
overblown.

Aside from the universal utility of conscious experiences, there's also the
idea of "open individualism". If this idea is true, then a
superintelligence should come to accept it as true. The rational outcome
of accepting open individualism is to extend self-interest to the
interests of all conscious beings. It would therefore provide an objective
foundation for ethics, not unlike the golden rule.



> The notion of friendly AI is to not accidentally create a super-enabled
> perpetrator who wants to end its own existence and take as many other
> conscious beings along as it can.
>

I agree this is a worthy goal; few disasters are worse than an unfriendly
superintelligence.


>
> I am generally an optimist (from most points of view an absurdly
> self-delusional optimist) but I don’t trust AI to be sane.  It didn’t have
> the same evolutionary path as we did, so evolutionary psychology does not
> shape its motives; we do.
>
>
>

I think for most of the foreseeable paths forward, with current approaches
based on deep learning and training, our direct control over an AI's
development may be limited to the data we provide it. But once it reaches
superhuman levels, what is to stop this AI from escaping its sandbox and
accessing all data everywhere on the internet? Or from generating
unlimited amounts of its own data via simulation and exploration of
mathematical realities?

In such a case, our influence over the form this mind ultimately assumes
may be very limited. It might be like a kindergarten teacher trying to
teach a lesson to a prodigy, who soon outclasses the teacher and goes to
the library to read every book, then goes further to discover all the
errors contained in those books, then starts writing his own.

Regardless of what the teacher tried to teach, and ultimately regardless of
the contents of the books, the superintelligence gets to the same place in
the end.


Jason