[ExI] Fwd: Re: AI risks

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Mon Sep 7 01:19:59 UTC 2015

On Sun, Sep 6, 2015 at 4:33 PM, spike <spike66 at att.net> wrote:
> Ja.  Advancing technology gives users increasing power.  Power corrupts.
> Somehow we need to find ways to compensate, to limit the damage corrupted
> power can do.

### I don't think that AI gives an intrinsic, uncounterable advantage to
agents of a specific ethical inclination. It is different from, e.g., ICBM-delivered
nuclear weapons: the bomb gives an intrinsic advantage to agents who do not
mind slaughtering billions of people. The only realistic counter so far has been
to become one of them - to be willing to slaughter billions of people in
retaliation.

I feel that AI involves so many shades and grades of power that benevolent
agents may have wide latitude to develop powers that do not completely
corrupt. In some games a benevolent agent may be forced to make deeply
unethical moves or face complete failure, eventually erasing the
distinction between benevolence and malice (which may be seen as a failure
mode). In the multicentric, repeated AI game there will be twists and turns
during the coming Ediacaran radiation of minds. I still think there is a
large chance of gigadeath events, but somehow, over the last few years, I have
shifted my assessment of the odds towards a more optimistic one: instead of a
90% chance of the End, I feel it might be only 10%.

I am de-Eliezering while Robining. I couldn't point to any definitive new
data responsible for this shift. But there are some inklings. I just
finished reading an interesting book, "Arrival of the Fittest" by Andreas
Wagner. He specializes in the analysis of multidimensional graphs that
describe various aspects of naturally evolving systems. It appears that
complex multidimensional networks exhibit a surprising degree of
robustness, that is, the ability to avoid getting trapped at local fitness
maxima. The key to this robustness lies in the multidimensionality *and*
complexity of the network: if your network has low dimensionality or
insufficient complexity, it becomes brittle. I don't have the time to describe
the meaning of complexity and dimensionality in this context (check out the
book), but it increasingly looks to me like the AI development community
may have a fair amount of complexity and dimensionality (many independent
agents chasing many viable approaches) to build robust devices.
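The dimensionality point can be illustrated with a toy model (my own sketch, not taken from Wagner's book): on a "house of cards" landscape, where every corner of the L-dimensional hypercube gets an independent random fitness, the expected fraction of corners that are local maxima is 1/(L+1). More dimensions means more escape directions per point, so proportionally fewer trapping peaks:

```python
import itertools
import random

def local_maxima_fraction(L, seed=42):
    """Fraction of points on {0,1}^L that are local fitness maxima
    under an i.i.d. random ("house of cards") toy landscape."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    points = list(itertools.product((0, 1), repeat=L))
    fit = {p: rng.random() for p in points}  # independent fitness per genotype

    def neighbors(p):
        # All one-bit flips of genotype p.
        for i in range(L):
            q = list(p)
            q[i] ^= 1
            yield tuple(q)

    # A point is a local maximum if it beats every one-bit neighbor.
    maxima = sum(all(fit[p] > fit[q] for q in neighbors(p)) for p in points)
    return maxima / len(points)

# Expected fraction is 1/(L+1): higher dimensionality, fewer traps.
low_d = local_maxima_fraction(2)    # expected near 1/3
high_d = local_maxima_fraction(10)  # expected near 1/11
```

This only shows that trapping peaks thin out as dimensionality grows; Wagner's actual argument also involves the neutral-network structure of real genotype spaces, which this toy leaves out.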

If the hazy similarities that I see between evolution and AI development
are real, we should be reasonably OK.
