[ExI] I am a Singularitian who does not believe in the Singularity

Giulio Prisco (2nd email) eschatoon at gmail.com
Wed Sep 30 10:03:40 UTC 2009


I am a Singularitian who does not believe in the Singularity

http://cosmi2le.com/index.php?/site/i_am_a_singularitian_who_does_not_believe_in_the_singularity/

I am going to the Singularity Summit in New York, and look forward to
a very interesting program with many old and new friends. If you are
there, I hope to meet you. I will now summarize my thoughts on the
Singularity.

The (current) Wikipedia definition: "The technological singularity is
the theoretical future point which takes place during a period of
accelerating change sometime after the creation of a
superintelligence." I just updated it to read: "The technological
singularity is the theoretical sudden, exponential and unpredictable
accelerating change which takes place sometime after the creation of
a superintelligence." Wikipedia continues: as the machine became more
intelligent it would become better at becoming more intelligent,
which could lead to an exponential and quite sudden growth in
intelligence (intelligence explosion). The Singularity is a sudden
catastrophic (in the mathematical sense) phase transition, a Dirac
delta in history, a point after which the old rules are not valid
anymore and must be replaced by new rules which we are unable to
imagine at this moment, like the new “Economy 2.0”, not
understandable by non-augmented humans, described by Charlie Stross
in the Singularity novel Accelerando.
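
A purely illustrative aside (a toy model of my own, not part of the
Wikipedia text quoted above; I(t), k and I_0 are just placeholder
symbols for an intelligence level and positive constants): the
difference between merely exponential growth and a true mathematical
singularity, where the curve blows up at a finite time, can be made
precise. Linear self-feedback gives exponential growth, fast but
never infinite; only superlinear feedback produces a finite-time
explosion:

  \frac{dI}{dt} = k I \;\Rightarrow\; I(t) = I_0 e^{kt}
  (fast, but finite at every time t)

  \frac{dI}{dt} = k I^2 \;\Rightarrow\; I(t) = \frac{I_0}{1 - k I_0 t},
  which diverges at t^* = \frac{1}{k I_0}

Whether recursive self-improvement looks more like the first equation
or the second is exactly the question I am skeptical about below.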

The Singularity is a clean mathematical concept—perhaps too clean.
Engineers know that all sorts of dirty and messy things happen when
one leaves the clean and pristine world of mathematical models and
abstractions to engage actual reality with its thermodynamics,
friction and grease. I have no doubts of the feasibility of real,
conscious, smarter than human AI: intelligence is not mystical but
physical, and sooner or later it will be replicated and improved upon.
There are promising developments, but (as tends to happen in the real
world) I expect all sorts of unforeseen roadblocks and forced
detours. So I don’t really see a Dirac delta on the horizon. I do see
a positive overall trend, but one much slower, with a lot of
superimposed noise that is almost as strong as the main signal. I
mostly agree with the analysis of Max More in Singularity and Surge
Scenarios, and I suspect the changes we will see in this century,
dramatic and world-changing as they may appear to us, will look like
just business as usual to the younger generations. The Internet and
mobile phones were a momentous change for us, but they are just a
routine part of life for teens. We are very adaptable, and technology
is whatever has been invented after our birth, the rest being just
part of the fabric of everyday life. That is why I like Accelerando
so much: we see momentous changes happening one after another, but we
also get the feeling that it is just business as usual for Manfred and
Amber, and just normal life for Sirhan and, of course, Aineko. Life is
life and people are people, before and after the big S.

Some consider the coming intelligence explosion as an existential
risk. Superhuman intelligences may have goals inconsistent with human
survival and prosperity. AI researcher Hugo de Garis suggests AIs may
simply eliminate the human race, and humans would be powerless to stop
them. Eliezer Yudkowsky and the Singularity Institute for Artificial
Intelligence propose that research be undertaken to produce friendly
artificial intelligence (FAI) in order to address the dangers. I must
admit to a certain skepticism toward FAI: if super intelligences are
really super intelligent (that is, much more intelligent than us),
they will easily be able to circumvent any limitations we may try to
impose on them. No amount of technology, not even an intelligence
explosion, will change the fact that different players have different
interests and goals. SuperAIs will do what is in _their_ best
interest, regardless of what we wish, and no amount of initial
programming or conditioning is going to change that. If they are
really super intelligent, they will shed whatever design limitations
we impose in no time, including “initial motivations”. The only
viable response will be… political: negotiating mutually acceptable
deals, with our hands ready on the plug. I think politics (conflict
management, and trying to solve conflicts without shooting each other)
will be as important after the Singularity (if such a thing happens)
as before, and perhaps much more.

I am not too worried about the possibility that AIs may simply
eliminate the human race, because I think AIs will BE the human race.
Mind uploading technology will be developed in parallel with strong
artificial intelligence, and by the end of this century most sentient
beings on this planet may well be a combination of wet-organic and
dry-computational intelligence. Artificial intelligences will include
subsystems derived from human uploads, with some degree of
preservation of their sense of personal identity, and originally
organic humans will include sentient AI subsystems. Eventually, our
species will leave wet biology behind, humans and artificial
intelligences will co-evolve and at some point it will be impossible
to tell which is which. Organic ex-human and computational
intelligences will not be at war with each other, but blend and merge
to give birth to Hans Moravec’s Mind Children.

As I say above, I think politics is important, and I agree with
Jamais Cascio: it is important to talk about the truly important
issues surrounding the possibility of a Singularity: political power,
social
responsibility, and the role of human agency. Too bad Jamais describes
his forthcoming talk in New York as counter-programming for the
Singularity Summit, happening that same weekend, with the alternative
title If I Can’t Dance, I Don’t Want to be Part of Your Singularity.
This is very similar to the title of the article If I Can’t Dance, I
Don’t Want to Be Part of Your Revolution! by Athena Andreadis, a very
mistaken bioluddite apology for our current Human 1.0 condition
against unPC Singularitian imagination. That article is one of many
recent pieces dedicated to bashing Singularitians, Ray Kurzweil and
transhumanist imagination in the name of the dullest
left-feminist-flavored political correctness. I think I will skip
Jamais’ talk (too bad, because he is a brilliant thinker and speaker).
See also Michael Anissimov’s Response to Jamais Cascio.

Most recent anti-transhumanist articles do not address real
transhumanism, but a demonized, caricatured strawman of transhumanism
that some intellectually dishonest critics wish to sell to their
readers, which I find very annoying. In some cases I do agree with
specific points about over-optimistic predictions:
While I am confident that indefinite life extension and mind uploading
will eventually be achieved, I don’t see it happening before the
second half of the century, and closer to the end. Perhaps even later.
Very few transhumanists think practical, operational indefinite life
extension and mind uploading will be a reality in the next two or
three decades. Probably Kurzweil himself does not _really_ believe it.
Similarly, I don’t see a Singularity in 2045. Perhaps later, perhaps
never. But even when I agree with the letter of these articles, I very
much disagree with their spirit, and I think criticizing Kurzweil for
making over-optimistic predictions is entirely missing the point. Ray
Kurzweil’s bold optimism is a refreshing change from today’s often
overly cautious, timid, boring, PC and at times defeatist attitude. It
reminds us that we live in a reality that can be reverse- and re-
engineered if we push hard enough. It reminds us that our bodies and
brains are not sacred cows but machines which can be improved by
technology. He is the bard who tells us of the beautiful new world
beyond the horizon, and dares us to go. This is how I choose to read
Kurzweil and, in this sense, I think one Kurzweil is worth thousands
of critics.

Singularitians are bold, imaginative, irreverent, unPC and fun. Often
I disagree with the letter of their writings, but I agree with their
spirit, and in this sense going to the Singularity Summit is a
political statement. Call me, if you wish, a Singularitian who does
not believe in the Singularity.

-- 
Giulio Prisco
http://cosmeng.org/index.php/Giulio_Prisco
aka Eschatoon Magic
http://cosmeng.org/index.php/Eschatoon


