[ExI] Transhumanism at the IEEE

Rafal Smigrodzki rafal.smigrodzki at gmail.com
Wed Oct 9 01:03:29 UTC 2019


On Tue, Oct 8, 2019 at 7:07 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> It's downright creepy I tells ya!  It's just wrong.  IEEE even mentioning
> transhumanism is something I never woulda thought I would live to see.
> It's
> uncomfortable, it makes me squirm.  I suddenly feel so... mainstream.
>
### I have been thinking on this subject recently (again). I am on record
predicting the Dark Singularity on September 14, 2029:

https://triviallyso.blogspot.com/2009/07/end-is-near.html

but in the last 10 years I have come closer to Robin's position, being less
worried (but not worry-free) about UFAI.

One reason for this evolution in my views is that intelligence seems to be a
bit more granular than it appeared before. It is possible to achieve vastly
superhuman results in an increasing number of domains without creating a
system capable of surviving on its own. These limited AI systems are built by
multiple teams, and the resulting capabilities are relatively widely spread
rather than concentrated. A limited superhuman AI (LiSAI) does rewrite some
aspects of its own programming, but overall its ability to create new goals
is low.

It's useful to remember that goal-oriented action in the real world relies
on a number of disparate modules kludged together by evolution, and this
includes a large array of perception, motivation, reasoning and effector
subsystems. A dangerous AI would need to have these modules working at a
superhuman level, separately and jointly, smoothly enough not to break
itself in a trivial fashion, before its malfunctioning non-human-aligned
goal system could lead it to accidentally overgrow and break the world.

This kind of non-brittle real-world multi-faceted highly intelligent
performance is likely to be built gradually by multiple independent teams,
as Robin predicted a long time ago. At the same time, I would expect us to
develop and have wide access to LiSAIs specifically designed to counter
narrow threats, such as hostile takeover of computing resources. The
sysadmins of 2029 will have vastly superhuman detection and management tools
at their disposal, tools designed to defend against increasingly
sophisticated cyberattacks from governments and other hackers, augmenting
their ability to prevent rogue attacks on their systems and essentially
honing their strength in an ongoing low-level computational war.

I would also expect that the teams developing general AI would be forced to
develop a much better understanding of goal systems in general, simply to
avoid trivial failures. MIRI seems to be trying to develop a high-level
theoretical understanding of goal systems, which is very laudable, but
there will be a lot of specific technical applied research needed before
something that doesn't quickly shoot itself in the foot is created. This
means that right from the start, the potentially dangerous AI would be
surrounded by a much more sophisticated ecosystem of diagnostic tools and
general understanding of goal systems than exists today.

TL;DR - By 2029, we might have potentially dangerous AI, but its ability to
wreak havoc will be limited by our improved defenses, and its creators will
be much better at taming AIs in general than we are now.

This is not to discount the dangers of AI, including existential dangers. I
think that as far as existential dangers go, UFAI is still the 800 lb
monster ahead of the pack, with bio-engineered plagues a distant second, and
asteroids, aliens, and other apocalypses relegated to the footnotes. But
it's less scary to me than it was 25 years ago, now at the level of an
all-out nuclear war rather than a grey goo meltdown.

Rafal
