[extropy-chat] Reading "The Singularity is Near"

Reason reason at longevitymeme.org
Thu Sep 22 04:28:40 UTC 2005


http://www.fightaging.org/archives/000612.php

I have been working my way through Ray Kurzweil's "The Singularity is Near"
(TSiN) over the past few days, having been the fortunate recipient of a
review copy. The book might alternatively be titled "The Modern Futurist
Consensus: a Review" or "Damien Broderick's The Spike: the Extended Remix."
Those of us who have haunted transhumanist enclaves in the past few years
(or more) are unlikely to find new ideas here, but the book serves a most
useful purpose in bringing the best and brightest of transhumanist, futurist
themes and thinking all together under one roof, in a popularist manner,
with a unifying, easily-marketed theme. It's been done before - by the
aforementioned Damien Broderick, amongst others - but not quite as
comprehensively. This sort of book is something of a necessary precursor to
wider advocacy and education in today's culture; a pleasant irony, given the
subject matter, and one could debate where TSiN fits on the present S-curve
in the evolution of futurist thought.

My own two cents thrown into the ring say that the class of future portrayed
in TSiN is something of a foregone conclusion. It's quite likely that we'll
all be wildly, humorously wrong about the details of implementation, culture
and usage, but - barring existential catastrophe or disaster - the
technological capabilities discussed in TSiN will come to pass. The human
brain will be reverse engineered, simulated and improved upon. The same goes
for the human body; radical life extension is one desirable outcome of this
engineering process. We will merge with our machines as nanotechnology and
molecular manufacturing become mature technologies. Recursively
self-improving general artificial intelligence will develop, and then life
will really get interesting very quickly. And so forth ... the question is
not whether these things will happen, but rather when they will happen - and
more importantly, are we going to be alive and in good health to see this
wondrous future?

As you might guess, my criticisms of TSiN center on the timeline
predictions for development of new technologies, the acceleration of the
rate of discovery, and the management of complexity. I made a stab at
discussing this last item recently in connection with Arnold Kling's
comments on TSiN (which are well worth reading, by the way):

Progress towards general (and/or strong) artificial intelligence (AI) - a
grail for many transhumanists and other futurists - has been slower than
we'd like. The level of difficulty has been consistently underestimated in
the past, and I see this as one part of a larger underestimation of any form
of complexity management. You may recall seeing this idea put forward in a
variety of 1990s writing on the topic of nanotechnology; the production of
millions of nanorobots wasn't thought to be as hard as the process of
controlling and managing those nanorobots in a useful fashion - strategies
for information processing are as much the key to future medical
technologies as nanoscale and molecular manufacturing. Complexity is hard,
both to manage and estimate in advance.

Now replace "nanorobot" with "human cell" and that's where we are today with
biotechnology. Biological systems - such as your body, or even just a small
piece of it - are immensely complex. The reason researchers can make
meaningful progress today with medical technology such as gene therapies and
stem cell research is that they are, effectively, tweaking settings on
existing machinery that largely handles the complexity management itself.
Our grasp of how things work - based on our ability to process information
and build the tools required to gather information and effect change - is
now adequate for this task, just as it is almost adequate to guide existing
biological machinery to build replacement tissue and organs in a useful,
controlled manner. But it seems to me to be a very large leap - in terms of
managing complexity - to go from where we are today to reach the point of,
for example, replacing biochemically complex systems within the body with
artificial substitutes. Or reverse-engineering the brain, that sort of
thing.

Kurzweil's commentary on types of complexity in TSiN is a good read - and
one of the better explanations for the layman I've seen - but it seems a
little disconnected from the actual business of dealing with complexity in
ways that matter. My take on it all is that science is largely the process
of discovering keys to complexity; by this I mean finding algorithms,
recipes or methodologies that enable us humans to understand and manage
complexity that would otherwise be beyond us. To take an applied example,
manipulating stem cells through comparatively simple procedures enables
scientists to perform tasks - the regeneration of age-related tissue
damage - that they cannot even monitor in detail, let alone control. A
simpler and more abstract example would be the mathematics and physics of
atoms, comparatively simple equations that we can use to describe very
complex collections of objects and behaviors.
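
To make that last example concrete - and this is purely an illustration,
using nothing beyond the textbook ideal gas law and standard physical
constants - one short equation stands in for the collective behavior of
roughly 6 x 10^23 molecules:

    # Purely illustrative: the ideal gas law, PV = nRT, as a "key to complexity" -
    # one line of arithmetic standing in for the motions of ~6 x 10^23 molecules.
    R = 8.314               # gas constant, J/(mol K)
    n = 1.0                 # one mole of gas, about 6.02e23 molecules
    temperature_k = 298.0   # room temperature, in kelvin
    volume_m3 = 0.0224      # roughly the molar volume of a gas, in cubic meters

    pressure_pa = n * R * temperature_k / volume_m3
    print(f"predicted pressure: {pressure_pa / 1000:.0f} kPa")  # ~111 kPa, about one atmosphere

None of the individual molecular trajectories are tracked, or even knowable;
the equation is the key that lets us skip over that complexity entirely.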

We humans are in the process of building tools that enable us to create or
meaningfully interact with ever-greater complexity, and computers are at the
heart of it, but this process is one in which our individual, unaided
capacities for complexity management are not increasing. Humans are still
humans as of this decade, and the keys we utilize have to be useable at our
level. I view the speeding of progress as part and parcel of building a
larger capacity for discovering and utilizing the keys to complexity. This
process, as Kurzweil argues in TSiN, is growing exponentially, and we are
moving out of the timespan in which exponential growth still appears roughly
linear.
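
A trivial numerical sketch of that last point, using an assumed and purely
illustrative growth rate of 5% per period: early on, the exponential curve is
nearly indistinguishable from its straight-line approximation, and only later
does the gap become impossible to ignore.

    # A minimal sketch with an assumed 5% compound growth rate (illustrative only):
    # early on, exponential growth hugs its straight-line approximation,
    # then pulls away from it dramatically.
    growth_rate = 0.05

    def exponential(t):
        return (1 + growth_rate) ** t

    def linear_approximation(t):
        return 1 + growth_rate * t   # the tangent line at t = 0

    for t in (5, 20, 50, 100):
        exp_val = exponential(t)
        lin_val = linear_approximation(t)
        gap = (exp_val - lin_val) / exp_val
        print(f"t = {t:3d}: exponential {exp_val:8.2f}, linear {lin_val:5.2f}, gap {gap:.0%}")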

There is one important area of complexity management in which we seem to be
making little headway, however: the organization of humans in business and
research. For all that we can now accomplish with faster computers and
enormous leaps in telecommunications, we don't seem to have made significant
inroads in getting large numbers of humans to cooperate efficiently. As
Arnold Kling points out, the fact that excessive use of Who Moved My Cheese?
is even in the running as something to try is not a good sign. I've been
involved in more technology-focused attempts to improve efficiency in large
organizations, and the state of the art is not pretty - nor especially
effective in the grand scheme of things.

I am prepared to go out on a limb here, as I have done before, and say that
business and research cycles that involve standard-issue humans are
incompressible beneath a certain duration - they cannot be made to happen
much faster than is possible today:

I'm dubious about large reductions in the length of business or research
cycles through technology while humans are still in the loop. You can
certainly make the process cheaper and better, meaning that more attempts at
a given business or research model will operate in parallel, but there is a
point past which the length of the business cycle cannot be easily
compressed. That point is very much a function of the human element:
meetings, fundraising, decisions, organizational friction, and so forth -
all very time-consuming, and all proven very resistant to attempts to speed
them up.

This is not to say that they cannot be made cheaper. But cheaper doesn't
equate to faster business and research cycles; rather, it means that any
given problem will be tackled by many more parallel attempts. The
professionals are joined by skilled amateurs, the priesthood dissolves, and
everyone with a will to work gets in on the action. In this sort of a
market, any given problem (what business model works, how does this disease
process kill people, what does this biochemical signal do) is more likely to
be solved in a single cycle of innovation. Biotechnology is not too many
years away from this state of affairs, a repetition of what is currently
taking place in the software development industry. If matters become cheap
enough, people will be willing to risk ventures and research on incomplete
solutions, on untested business models, and thus shortcut the existing
cycle - but all too many forms of development are not vulnerable to this sort
of shortcut. The answers cannot always be guessed or jumped to on the basis
of incomplete work.
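
To put a rough number on the parallelism point - the 10% per-attempt success
rate below is an assumption for illustration, not an estimate - making
attempts cheap enough to multiply their number raises the odds that some
attempt cracks the problem within a single cycle, even though the cycle
itself is no shorter:

    # A minimal sketch of the parallelism argument; the per-attempt success
    # probability is assumed purely for illustration.
    per_attempt_success = 0.10

    def chance_solved_this_cycle(parallel_attempts):
        """Probability that at least one of N independent attempts succeeds."""
        return 1 - (1 - per_attempt_success) ** parallel_attempts

    for attempts in (1, 5, 20, 100):
        print(f"{attempts:3d} parallel attempts -> "
              f"{chance_solved_this_cycle(attempts):.0%} chance of a solution this cycle")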

Back in the deep end, expensive projects mean conservative funding
organizations, which means organizational matters proceed at a slow pace.
This is a defining characteristic of our time: we have blindingly fast rates
of research and technological advances once the money is on the table, but
the cycles of business, fundraising and research are still chained to the
old human timetable. I regard this incompressibility of the business or
research cycle - the fact that a given iota of progress cannot be
accomplished as fast as technology allows, because human organizational
factors impose a certain minimum length of time on each iota of progress -
as a form of limit on exponential growth, one we are now hitting up against.
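
As a back-of-the-envelope sketch of what that limit looks like - all of the
durations below are assumed for illustration - suppose the technical portion
of each cycle halves with every generation of tools, while the organizational
portion (fundraising, meetings, decisions) stays fixed. The total cycle
length then flattens out against the human floor rather than continuing to
shrink:

    # Illustrative durations only: technical work per cycle shrinks with better
    # tools, but the human overhead does not, so the total cycle length
    # converges on that overhead instead of shrinking exponentially.
    human_overhead_months = 12.0     # assumed fixed organizational time per cycle
    initial_technical_months = 24.0  # assumed technical work in today's cycle

    for generation in range(6):
        technical = initial_technical_months / (2 ** generation)
        total = human_overhead_months + technical
        print(f"tool generation {generation}: cycle = {total:5.2f} months "
              f"({technical:5.2f} technical + {human_overhead_months:.0f} human overhead)")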

Kurzweil's Singularity is a Vingean slow burn across a decade, driven by
recursively self-improving AI, enhanced human intelligence and the merger of
the two. Interestingly, Kurzweil employs much the same arguments against a
hard takeoff scenario - in which these processes of self-improvement in AI
occur in a matter of hours or days - as I am employing against his proposed
timescale: complexity must be managed and there are limits as to how fast
this can happen. But artificial intelligence, or improved human
intelligence, most likely through machine enhancement, is at the heart of
the process. Intelligence can be thought of as the capacity for dealing with
complexity; if we improve this capacity, then all the old limits we worked
within can be pushed outwards. We don't need to search for keys to
complexity if we can manage the complexity directly. Once the process of
intelligence enhancement begins in earnest, then we can start to talk about
compressing business cycles that existed due to the limits of present day
human workers, individually and collectively.

Until we start pushing these limits, we're still stuck with the slow human
organizational friction, limits on complexity management, and a limit on
exponential growth. Couple this with slow progress towards both
organizational efficiency and the development of general artificial
intelligence, and you can see why I believe that Kurzweil is optimistic by at
least a decade or two.

So how does this all fold into healthy life extension? Well, physical
immortality is one obvious product of singularity-level nanotechnology,
biotechnology and complexity management. There are no known barriers in
physics to the construction of nanomedical systems capable of simultaneously
managing, repairing - or replacing - every cell in our bodies. Even
something as complex as the sum of all your cells can in principle be kept
in the best possible shape for as long as you like - "all" it takes is
knowledge, the future tools of nanoscale engineering and powerful enough
computers. But when do we get there? This is the question, and it is one
that shapes the actions of futurists and transhumanists. There are many who
believe that the best sort of activism and advocacy for the future - even
for healthy life extension - is in the area of artificial intelligence,
because making self-improving intelligence arrive earlier will lead to all
other currently pressing problems, such as age-related degeneration and
death, being rendered trivial in the mid to long term. Obviously, I'm not in
that camp: I'm sufficiently dubious about Kurzweil-like timescales - based
on my views as set forth above - to think that we need to be tackling the
problem of aging first.



