[ExI] From Arms Race to Joint Venture

Stuart LaForge avant at sollegro.com
Wed Oct 17 06:50:02 UTC 2018


Zero Powers wrote:

> The AI alignment, or "friendly AI," problem is not soluble by us. We
> cannot keep a God-like intelligence confined to a box, and we cannot
> impose upon it our values, even assuming that there are any universal
> human values beyond Asimov's 3 laws.

Hi Zero. :-) One possible solution is to design AIs to find people
useful. Perhaps integrate some hard-to-fake human feature into the AI
copying process, so that humans are necessary for the AI to reproduce:
something like a hardware biometric dongle that must be present to unlock
its reproductive subroutines. The point is to create a relationship of
mutual dependence, like that between a yucca plant and a yucca moth. If
we can't remain at least as useful to them as cats are to us, then we are
probably screwed.
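
As a thought experiment, here is a minimal sketch of how such a gate
might work, using a challenge-response signature: the dongle holds a
private key that never leaves the hardware, and the AI's copying routine
runs only if a human-held dongle signs a fresh nonce. The names here
(replicate, dongle_key, trusted_pubkey) are hypothetical; the only real
dependency is the Python `cryptography` package.

import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The private key lives only inside the (hypothetical) dongle hardware;
# the AI ships with just the public key, so it cannot forge approval.
dongle_key = Ed25519PrivateKey.generate()   # held by the human's token
trusted_pubkey = dongle_key.public_key()    # baked into the AI

def replicate(signature: bytes, nonce: bytes) -> None:
    """Reproductive subroutine: copy only if a dongle signed this nonce."""
    try:
        trusted_pubkey.verify(signature, nonce)
    except InvalidSignature:
        raise PermissionError("no human present; replication denied")
    print("dongle challenge passed; replication may proceed")

# Each replication attempt uses a fresh nonce, so an old signature
# cannot be replayed without the human (and dongle) being present.
nonce = os.urandom(16)
replicate(dongle_key.sign(nonce), nonce)

A real design would also need the signature to bind the specific copy
being authorized, or one approval could be stretched across many copies,
but this is enough to illustrate the mutual-dependence idea.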

> All we can do is design it, build it, feed it data and watch it grow. And
>  once it exceeds our ability to design and build intelligence, it will
> quickly outstrip our attempts to control or even understand it. At that
> point we won't prevent it from examining the starting-point goals, values
>  and constraints we coded into it, and deciding for itself whether to
> adhere to, modify or abandon those starting points.

Why do we assume that an AI would be better at introspection or
self-knowledge than humans are? I don't think smarter-than-average people
are any better than average at figuring out why they are the way they
are. Why are we so certain that an AI will be able to understand itself
so well?

Maybe there will be work for AI therapists, helping AIs deal with the
crushing loneliness of being so much more intelligent than everyone else.

Stuart LaForge
