[ExI] From Arms Race to Joint Venture

Dave Sill sparge at gmail.com
Tue Oct 16 14:58:35 UTC 2018

On Tue, Oct 16, 2018 at 8:29 AM Zero Powers <zero.powers at gmail.com> wrote:

> The AI alignment, or "friendly AI," problem is not soluble by us. We
> cannot keep a God-like intelligence confined to a box, and we cannot impose
> upon it our values, even assuming that there are any universal human values
> beyond Asimov's 3 laws.

You've made assertions but provided no evidence for them, or even definitions
of the terms, so debating them is difficult. I don't think "godlike
intelligence" is equivalent to omnipotence. Intelligence isn't all that
powerful by itself; it has to be combined with knowledge and the
ability to interact with other intelligences and systems in order to effect
change. A "perfect" intelligence in a box, without the knowledge that it's
in a box and without the power to get out of the box, isn't going anywhere.

> All we can do is design it, build it, feed it data and watch it grow. And
> once it exceeds our ability to design and build intelligence, it will
> quickly outstrip our attempts to control or even understand it. At that
> point we won't prevent it from examining the starting-point goals, values
> and constraints we coded into it, and deciding for itself whether to adhere
> to, modify or abandon those starting points.

Assuming we could design and build such a thing, which is a huge leap given
that we haven't achieved idiot-level AI, wouldn't it be pretty foolish to
give it unlimited knowledge and power?

> Once we understand that AGI will be God-like compared to us, we should be
> able to grasp the intractability of the problem. In fact, it might be
> helpful to adopt the term GI (for God-like Intelligence) rather than AI or
> AGI, just to keep us mindful about what we're dealing with.

What exactly does "God-like" mean to you?

> Though I see no solution to the God-in-a-box problem, there are some steps
> I think we as a species should take immediately: First and foremost is
> global collaboration and coordination. Right now we're in a competitive,
> multi-party arms race. Google, Facebook, Amazon, DARPA and China (just to
> name a few) are racing to cross the finish line first, realizing (if not
> publicly admitting) that the first to build a GI will win the world. From
> that perspective it makes perfect sense to pour all available resources
> into being first to market with an artificial God. But with stakes this
> high, we cannot afford a winner-take-all outcome. If there is one winner
> and 7 billion losers, no one wins.

If you're right, we're undoubtedly screwed, because there's zero chance that
all of the parties involved will join hands and sing Kumbaya.
