[ExI] From Arms Race to Joint Venture
zero.powers at gmail.com
Wed Oct 17 02:21:27 UTC 2018
On Tue, Oct 16, 2018 at 8:02 AM Dave Sill <sparge at gmail.com> wrote:
> You've made assertions but provided no evidence of them or even
> definitions of the terms, so debating them is difficult.
The evidence is this: We have a demonstrated ability to continually improve
the efficiency of our computers and algorithms. Our advances in this
technology are speeding up, not slowing down. The market and military
stakes are driving every player in this space to improve the capabilities
of AI as quickly as possible, with no serious consideration given to
slowing down long enough to weigh the risks.
China and Google make no pretense about being in an all-out AI arms race.
And they are hardly the only ones racing. It doesn't take all that much
foresight to predict how the story ends.
> I don't think "godlike intelligence" is equivalent to omnipotence.
> Intelligence isn't really that powerful all by itself; it's got to be
> combined with knowledge and the ability to interact with other
> intelligences/systems in order to effect change. A "perfect" intelligence
> in a box, without the knowledge that it's in a box and without the power to
> get out of the box isn't going anywhere.
The essence of machine learning is precisely feeding data (knowledge) to
algorithms. Data is a neural network's mother's milk. I suppose one
strategy would be to keep the internet a secret from your AI, even from one
that out-thinks you by several orders of magnitude. But that seems a very
feeble plan at best.
>> All we can do is design it, build it, feed it data and watch it grow. And
>> once it exceeds our ability to design and build intelligence, it will
>> quickly outstrip our attempts to control or even understand it. At that
>> point we won't prevent it from examining the starting-point goals, values
>> and constraints we coded into it, and deciding for itself whether to adhere
>> to, modify or abandon those starting points.
> Assuming we could design and build such a thing, which is a huge leap
> given that we haven't achieved idiot-level AI, wouldn't it be pretty
> foolish to give it unlimited knowledge and power?
It would be immensely foolish. But that's precisely what we're doing our
level best to do. An algorithm recently taught itself, from scratch, in a
matter of mere hours, to be the best Go player in the thousand-year history
of the game. I think we're well beyond idiot-level AI.
>> Once we understand that AGI will be God-like compared to us, we should be
>> able to grasp the intractability of the problem. In fact, it might be
>> helpful to adopt the term GI (for God-like Intelligence) rather than AI or
>> AGI, just to keep us mindful about what we're dealing with.
> What exactly does "God-like" mean to you?
“For as the heavens are higher than the earth, so are my ways higher than
your ways, and my thoughts than your thoughts.”
Isaiah 55:9 KJV
AI won't need to create a universe, or raise a dead man to life (though
these feats might not be outside its ability). All it need do is process
information in ways beyond human ability to comprehend, and be able to
recursively improve itself. That will be sufficiently God-like for me.
>> Though I see no solution to the God-in-a-box problem, there are some steps
>> I think we as a species should take immediately: First and foremost is
>> global collaboration and coordination. Right now we're in a competitive,
>> multi-party arms race. Google, Facebook, Amazon, DARPA and China (just to
>> name a few) are racing to cross the finish line first, realizing (if not
>> publicly admitting) that the first to build a GI will win the world. From
>> that perspective it makes perfect sense to pour all available resources
>> into being first to market with an artificial God. But with stakes this
>> high, we cannot afford a winner-take-all outcome. If there is one winner
>> and 7 billion losers, no one wins.
> If you're right we're undoubtedly screwed, because there's zero chance
> that all of the parties involved will join hands and sing Kumbaya.
Certainly the odds are pretty close to zero. But let's hope not quite.
Otherwise we're both right: we're undoubtedly screwed.