[ExI] From Arms Race to Joint Venture

William Flynn Wallace foozler83 at gmail.com
Wed Oct 17 14:23:16 UTC 2018


Stuart wrote:  I don't think smarter than average people
are any better than average in figuring out why they are the way they are.

Oh, they probably are, if they remember any psych from college, where most
of them went.  But does it help?

I am 55 years from a Personality course that got me started on analyzing
people and myself.  I am not so sure that it helps a lot.  For it to help,
it would have to change me in important ways.  Mostly what it has changed
has been the after-the-fact interpretation:  Why did I do that?  Because
you are severely introverted and not socially keen.  Oh.  And that won't
change.  I do attend to being more aware of social manners and such, so
that helped.

But to know that I am average, or way below or above other people on, say,
Conscientiousness, does not figure into my everyday behavior.  It's just
the way I am, and as a partly genetic trait, it's not going to change much.

To Zero:  I am a social psychologist who occasionally throws a wrench into
the tech discussions going on, and tries, without much success, to get
others to discuss the form future humans will take.  Joined about 5 or 6
years ago.  76 and not dead yet.  Hi!

To you and John C - AIs are idiots.  No, they outthink us by orders of
magnitude.  Which is it?  How are you using the word 'thinking'?  Faster
is, of course, a given.
bill w


On Wed, Oct 17, 2018 at 1:53 AM Stuart LaForge <avant at sollegro.com> wrote:

> Zero Powers wrote:
>
> > The AI alignment, or "friendly AI," problem is not soluble by us. We
> > cannot keep a God-like intelligence confined to a box, and we cannot
> > impose upon it our values, even assuming that there are any universal
> > human values beyond Asimov's 3 laws.
>
> Hi Zero. :-) One possible solution to this is to design them to find
> people useful. Perhaps integrate some hard to fake human feature into the
> AI copying process so that humans are necessary for the AI to reproduce.
> Perhaps something like a hardware biometric dongle to access their
> reproductive subroutines or something similar. The point is to create a
> relationship of mutual dependence upon one another like a Yucca plant and
> a Yucca moth. If we can't remain at least as useful to them as cats are to
> us, then we are probably screwed.
>
> > All we can do is design it, build it, feed it data and watch it grow. And
> >  once it exceeds our ability to design and build intelligence, it will
> > quickly outstrip our attempts to control or even understand it. At that
> > point we won't prevent it from examining the starting-point goals, values
> >  and constraints we coded into it, and deciding for itself whether to
> > adhere to, modify or abandon those starting points.
>
> Why do we assume that an AI would be better at introspection or
> self-knowledge than humans are? I don't think smarter than average people
> are any better than average in figuring out why they are the way they are.
> Why are we so certain that an AI will be able to understand itself so
> well?
>
> Maybe there will be work for AI therapists to help AIs deal with the
> crushing loneliness of being so much more intelligent than everyone else.
>
> Stuart LaForge
>
>
>
