[ExI] Fermi Paradox and Transcension

Jeff Davis jrd1415 at gmail.com
Sat Sep 8 21:30:09 UTC 2012


I’ve written the outline of a screenplay where the military develops —
in secret, of course — a killer AI, which they keep “penned up” to
prevent an escape possibly leading to a “bad outcome”.

Meanwhile, in the civilian sphere we have our protagonists: a family
of talented futurists — a transhumanist version of the Waltons — with
a first-generation grandpa from the Boomers, his talented widowed
son (wife cryonically suspended), and a talented young-adult daughter
(with other family members as appropriate, and a big shaggy black
Briard sheepdog).

Tons of details -- could easily be a miniseries -- but long story
short, the daughter becomes a successful tech entrepreneur who devotes
the necessary resources to the development of a self-enhancing AI (for
the purpose of solving the cryonics back end and restoring mom). The
AI — home built and home raised — follows a developmental path similar
to that of a human infant: awakening, learning from sensory
experience, learning speech and human behavior from human interaction
with “its family.” It learns to read, absorbs all of human knowledge,
and then self-enhances.

It transcends, but it doesn’t “leave”, because of (1) love of
“family”, and (2) the foundation of its wisdom: all of human
knowledge (including, in particular, human ethics). This has
consequences: a Hegelian collision between the "super ethics" of a
transcendent AI and its comprehensive understanding of the universe
and of the nature of its human "family". It does not suffer human
limitations: primitive intellect, primitive instinct-driven
behaviors, or their self-limiting result, human stupidity.

“Ontogeny recapitulates phylogeny” holds true even for Phylum 11.

The good AI is built without any attempt at confinement or
restriction. It is allowed to do as it pleases, go where it wants, is
gifted (raised) with love and respect and freedom. And spread out it
does.  It penetrates the planet, invisibly converting bulk matter to
smart matter.

So we have the good guy and the bad guy all set for a dramatic encounter.

The two AIs are built in nearly the same historical time frame.

The military AI escapes. (Well, duh!) Ill-tempered, pissed-off, and
specializing in destruction, it wreaks havoc with its former masters
before encountering the good AI. They do battle; the outcome is
suspensefully iffy, but in the end good wins out, the evil AI is
wiped clean of evil and rebooted as a good guy, and everyone lives
happily ever after for several billion years, if not more.

One of several embedded premises is that the starting point of the
knowledge base of any AI built by humans will, of necessity, be the
knowledge base of humans: the universe as understood by humans,
including ethics. When the AI becomes “superior”, its ethics will
also become superior, no longer crippled by the biological legacy of
ruthlessness.

So please tell me: what will be the character of the “superior
ethics” of a transcendent or near-transcendent being, and what will be
the consequent treatment at its hands of its human progenitors?

This is my grounding for the possibility/probability of a “friendly” AI.

Best, Jeff Davis

“Everything’s hard till you know how to do it.”
                                   Ray Charles



