[ExI] From Arms Race to Joint Venture (Zero Powers)

Keith Henson hkeithhenson at gmail.com
Wed Oct 17 23:18:56 UTC 2018

 On Tue, Oct 16, 2018 at 9:48 PM  Zero Powers <zero.powers at gmail.com> wrote:


> >> Though I see no solution to the God-in-a-box problem, there are some steps
> >> I think we as a species should take immediately: First and foremost is
> >> global collaboration and coordination. Right now we're in a competitive,
> >> multi-party arms race. Google, Facebook, Amazon, DARPA and China (just to
> >> name a few) are racing to cross the finish line first, realizing (if not
> >> publicly admitting) that the first to build a GI will win the world.

Certainly this is the way people think.  It is, of course, silly,
because it leaves out the real winner of the arms race: the "weapon"
itself.  The chances of even the driving memes of the arms race
participants surviving examination by a GI are near zero.

Our evolutionary history, including our cultural history, makes this
arms race unprecedented.  The goal, if reached, introduces another
player, one better at the game than any of the current participants.

The situation is too weird for the participants to analyze.  And even
if they stopped to think about it, what could they do?

> >> From that perspective it makes perfect sense to pour all available resources
> >> into being first to market with an artificial God. But with stakes this
> >> high, we cannot afford a winner-take-all outcome. If there is one winner
> >> and 7 billion losers, no one wins.

It could be that we stand on the edge of the whole human race going
extinct and being replaced by better thinkers.

But hidden in the assumptions is the idea that there could be one
"winner."  I don't think this makes sense from physics.  Even if we
don't know even roughly what size is best, I think we can say there
is an optimal size, because the larger a thinking entity is, the
slower it will think: signals take longer to cross it.  So if you
made a large volume into thinking stuff, I suspect the stuff would
partition into pieces of whatever that optimal size is.
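The size-vs-speed point can be made concrete with a back-of-the-envelope
latency bound.  This is a minimal illustrative sketch (not from the
original post); the example sizes are assumptions chosen only to show
the scaling, and the speed of light is taken as the hard ceiling on
internal signal speed:

```python
# Sketch: the minimum time for a signal to cross a thinking entity
# grows linearly with its physical size, so a larger thinker has a
# slower internal "clock" for globally coordinated thoughts.

C = 299_792_458.0  # speed of light in m/s, upper bound on signal speed


def crossing_latency_s(diameter_m: float) -> float:
    """Best-case one-way signal delay across an entity of this size."""
    return diameter_m / C


# Hypothetical example sizes, for scale only.
for label, size_m in [
    ("human-brain scale (~0.15 m)", 0.15),
    ("city-sized thinker (~10 km)", 1.0e4),
    ("planet-sized thinker (~1.3e7 m)", 1.3e7),
]:
    print(f"{label}: {crossing_latency_s(size_m):.2e} s per crossing")
```

A planet-sized thinker pays tens of milliseconds per internal crossing
where a brain-sized one pays fractions of a nanosecond, which is why a
large volume of thinking stuff might favor many smaller units over one
monolith.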

Another problem is figuring out what goals a GI might have.  Any
thoughts on the topic?


PS  You can see my thoughts about a motivationally limited AI,
Suskulan, in "The Clinic Seed."  It was discussed here almost ten
years ago.
