[ExI] invisible singularity
Damien Broderick
thespike at satx.rr.com
Thu Sep 16 23:27:10 UTC 2010
On 9/16/2010 5:32 PM, Gregory Jones wrote:
> Several years ago I suggested trying to create a thought-space map on
> the singularity, to see how many significantly differing scenarios we
> could develop. Perhaps naming and numbering them or something, some
> kind of organization structure.
> At the time the suggestion was ignored. Rather it was dismissed: the
> leading thinkers in that field insisted there is only one logical
> scenario for this singularity.
Well, I'd offered a fairly coarse taxonomy at least a decade ago, in the
revised edition of THE SPIKE. I used that in a speech I gave at a
conference in Australia in 2000, available at
<http://www.panterraweb.com/tearing1.htm>
Here are the subheads (and of course some of these opinions--e.g.
Eliezer's, I gather--have altered in the intervening decade):
...We need to simplify in order to do that, take just one
card at a time and give it priority, treat it as if it were the only big
change, modulating everything else that falls under its shadow. It's a
risky gambit, since it has never been true in the past and will not
strictly be true in the future. The only exception is the dire (and
possibly false) prediction that something we do, or something from
beyond our control, brings down the curtain, blows the whistle to end
the game. So let's call that option
[A i] No Spike, because the sky is falling
In the second half of the 20th century, people feared that nuclear war
(especially nuclear winter) might snuff us all out. Later, with the
arrival of subtle sensors and global meteorological studies, we worried
about ozone holes and industrial pollution and an anthropogenic
Greenhouse effect combining to blight the biosphere. Later still, the
public became aware of the small but assured probability that our world
will sooner or later be struck by a `dinosaur-killer' asteroid, which
could arrive at any moment. For the longer term, we started to grasp the
cosmic reality of the sun's mortality, and hence our planet's: solar
dynamics will brighten the sun in the next half billion years, roasting
the surface of our fair world and killing everything that still lives
upon it. Beyond that, the universe as a whole will surely perish one way
or another.
Take a more optimistic view of things. Suppose we survive as a
species, and maybe as individuals, at least for the medium term (forget
the asteroids and Independence Day). That still doesn't mean there must
be a Spike, at least in the next century or two. Perhaps artificial
intelligence will be far more intractable than Hans Moravec and Ray
Kurzweil and other enthusiasts proclaim. Perhaps molecular
nanotechnology stalls at the level of microelectromechanical systems (MEMS)
that have a major impact but never approach the fecund cornucopia of a
true molecular assembler (a `mint', or Anything Box). Perhaps matter
compilers or replicators will get developed, but the security states of
the world agree to suppress them, imprison or kill their inventors,
prohibit their use at the cost of extreme penalties. Then we have option
[A ii] No Spike, steady as she goes
This obviously forks into a variety of alternative future histories, the
two most apparent being
[A ii a] Nothing much ever changes ever again
which is the day-to-day working assumption I suspect most of us default
to, unless we force ourselves to think hard. It's that illusion of
unaltered identity that preserves us sanely from year to year, decade to
decade, allows us to retain our equilibrium in a lifetime of such
smashing disruption that some people alive now went through the whole
mind-wrenching transition from agricultural to industrial to
knowledge/electronic societies. It's an illusion, and perhaps a
comforting one, but I think we can be pretty sure the future is not
going to stop happening just as we arrive in the 21st century.
The clearest alternative to that impossibility is
[A ii b] Things change slowly (haven't they always?)
Well, no, they haven't. This option pretends to acknowledge a century of
vast change, but insists that, even so, human nature itself has not
changed. True, racism and homophobia are increasingly despised rather
than mandatory. True, warfare is now widely deplored (at least in rich,
complacent places) rather than extolled as honorable and glorious.
Granted, people who drive fairly safe cars while chatting on the mobile
phone live rather... strange... lives, by the standards of the
horse-and-buggy era only a century behind us. Still, once everyone in
the world is drawn into the global market, once peasants in India and
villagers in Africa also have mobile phones and learn to use the
Internet and buy from IKEA, things will... settle down. Nations
overburdened by gasping population pressures will pass through the
demographic transition, easily or cruelly, and we'll top out at around
10 billion humans living a modest but comfortable, ecologically
sustainable existence for the rest of time (or until that big rock arrives).
A bolder variant of this model is
[A iii] Increasing computer power will lead to human-scale AI, and
then stall.
But why should technology abruptly run out of puff in this fashion?
Perhaps there is some technical barrier to improved miniaturisation, or
connectivity, or dense, elegant coding (but experts argue that there
will be ways around such road-blocks, and advanced research points to
some possibilities: quantum computing, nanoscale processors). Still,
natural selection has not managed to leap to a superintelligent variant
of humankind in the last 100,000 years, so maybe there is some
structural reason why brains top out at the Murasaki, Einstein or van
Gogh level.
So AI research might reach the low-hanging fruit, all the way
to human equivalence, and then find it impossible (even with machine
aid) to discern a path through murky search space to a higher level of
mental complexity. Still, using the machines we already have will not
leave our world unchanged. Far from it. And even if this story has some
likelihood, a grislier variant seems even more plausible.
[A iv] Things go to hell, and if we don't die we'll wish we had
This isn't the nuclear winter scenario, or any other kind of doom by
weapons of mass destruction--let alone grey nano goo, which by
hypothesis never gets invented in this denuded future. Technology's
benefits demand a toll from the planet's resource base, and our polluted
environment. The rich nations, numerically in a minority, notoriously
use more energy and materials than the rest, pour more crap into air and
sea. That can change--must change, or we are all in bad trouble--but in
the short term one can envisage a nightmare decade or two during which
the Third World `catches up' with the wealthy consumers, burning cheap,
hideously polluting soft coal, running the exhaust of a billion and more
extra cars into the biosphere...
Some Green activists mock `technical fixes' for these problems,
but those seem to me our best last hope.[6] We are moving toward
manufacturing and control systems very different from the wasteful,
heavy-industrial, pollutive kind that helped drive up the world's
surface temperature by 0.4 to 0.8 degrees Celsius in the 20th century.[7]
Pollsters have noted incredulously that people overwhelmingly state
that their individual lives are quite contented and their prospects
good, while agreeing that the nation or the world generally is heading
for hell in a hand-basket. It's as if we've forgotten that the vice and
brutality of television entertainments do not reflect the true state of
the world, that it's almost the reverse: we revel in such violent
cartoons because, for almost all of us, our lives are comparatively
placid, safe and measured. If you doubt this, go and live for a while in
medieval Paris, or palaeolithic Egypt (you're not allowed to be a noble).
Roads from here and now to the Spike
I assert that all of these No Spike options are of low probability,
unless they are brought forcibly into reality by the hand of some
Luddite demagogue using our confusions and fears against our own best
hopes for local and global prosperity. If I'm right, we are then pretty
much on course for an inevitable Spike. We might still ask: what,
exactly, is the motor that will propel technological culture up its
exponential curve?
Here are seven obvious distinct candidates for paths to the
Spike (separate lines of development that in reality will interact,
generally hastening but sometimes slowing each other):
[B i] Increasing computer power will lead to human-scale AI, and then
will swiftly self-bootstrap to incomprehensible superintelligence.
This is the `classic' model of the singularity, the path to the
ultraintelligent machine and beyond. But it seems unlikely that there
will be an abrupt leap from today's moderately fast machines to a
fully-functioning artificial mind equal to our own, let alone its
self-redesigned kin--although this proviso, too, can be countered, as
we'll see. If we can trust Moore's Law--computer power currently
doubling every year--as a guide (and strictly we can't, since it's only
a record of the past rather than an oracle), we get the kinds of
timelines presented by Ray Kurzweil, Hans Moravec, Michio Kaku, Peter
Cochrane and others, explored at length in The Spike. Let's briefly
sample those predictions.
Peter Cochrane: several years ago, the British Telecom futures
team, led by their guru Cochrane, saw human-level machines as early as
2016. Their remit did not encompass a sufficiently deep range to sight a
Singularity.
Ray Kurzweil:[8] around 2019, a standard cheap computer has the
capacity of a human brain, and some claim to have met the Turing test
(that is, passed as conscious, fully responsive minds). By 2029, such
machines are a thousand times more powerful. Machines not only ace the
Turing test, they claim to be conscious, and are accepted as such. His
sketch of 2099 is effectively a Spike: fusion between human and machine,
uploads more numerous than the embodied, immortality. It's not clear why
this takes an extra 70 years to achieve.
Ralph Merkle:[9] while Dr Merkle's special field is
nanotechnology, this plainly has a possible bearing on AI. His is the
standard case, although the timeline is still `fuzzy', he told me in
January: various computing parameters go about as small as we can
imagine between 2010 and 2020, if Moore's Law holds up. To get there
will require `a manufacturing technology that can arrange individual
atoms in the precise structures required for molecular logic elements,
connect those logic elements in the complex patterns required for a
computer, and do so inexpensively for billions of billions of gates.' So
the imperatives of the computer hardware industry will create
nanoassemblers by 2020 at latest. Choose your own timetable for the
resulting Spike once both nano and AI are in hand.
Hans Moravec:[10] multipurpose `universal' robots by 2010, with
`humanlike competence' in cheap computers by around 2039--a more
conservative estimate than Ray Kurzweil's, but astonishing none the
less. Even so, Dr Moravec considers a Vingean singularity as likely
within 50 years.
Michio Kaku: superstring physicist Kaku surveyed some 150
scientists and devised a profile of the next century and farther. He
concludes broadly that from `2020 to 2050, the world of computers may
well be dominated by invisible, networked computers which have the power
of artificial intelligence: reason, speech recognition, even common
sense'.[11] In the next century or two, he expects humanity to achieve a
Type I Kardashev civilization, with planetary governance and technology
able to control weather but essentially restricted to Earth. Only later,
between 800 and 2500 years farther on, will humanity pass to Type II,
with command of the entire solar system. This projection seems to me
excessively conservative.
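(For scale: on Carl Sagan's continuous version of the Kardashev scale--an
interpolation I'm supplying here, not anything Kaku states--the type is
K = (log10 P - 6)/10, with P the power a civilization commands in watts.
A minimal sketch; the present-day figure is a rough assumption of mine:

    # Kardashev type via Sagan's interpolation K = (log10(P) - 6) / 10,
    # where P is the power a civilization commands, in watts.
    import math

    def kardashev_type(power_watts):
        return (math.log10(power_watts) - 6) / 10

    print(round(kardashev_type(2e13), 2))   # ~0.73: rough present-day humanity
    print(round(kardashev_type(1e16), 2))   # 1.0: Type I, planetary scale
    print(round(kardashev_type(1e26), 2))   # 2.0: Type II, stellar scale

On that reckoning we sit somewhere around 0.7 today, which is why the jump
to Type I is counted in centuries and the jump to Type II in millennia.)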
Vernor Vinge: his part-playful, part-serious proposal was that
a singularity was due around 2020, marking the end of the human era.
Maybe as soon as 2014.
Eliezer Yudkowsky: once we have a human-level AI able to
understand and redesign its own architecture, there will be a swift
escalation into a Spike. Could be as soon as 2010, with 2005 and 2020 as
the outer limits, if the Singularity Institute, Yudkowsky's brainchild
which has now become reality, has anything to do with it (this will be
option [C]). ...maybe he's talking through his hat. Take a look at his
site and decide for yourselves.
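As a rough check on the arithmetic behind such timetables, here is a
minimal sketch in Python. Everything in it is an assumption for
illustration only: a ~1e9 ops/sec desktop circa 2000, a Moravec-style
brain-equivalent figure of ~1e16 ops/sec, and a fixed doubling period.
None of these numbers comes from the predictions above.

    # Back-of-envelope: years until machine capacity reaches a nominal
    # "human brain equivalent" under a naive fixed-doubling-time model.
    import math

    def years_to_target(baseline_ops, target_ops, doubling_years):
        """Years for capacity to grow from baseline to target,
        assuming one doubling every `doubling_years` years."""
        doublings = math.log2(target_ops / baseline_ops)
        return doublings * doubling_years

    # Illustrative figures only: desktop ~1e9 ops/s in 2000, brain ~1e16.
    for dt in (1.0, 1.5, 2.0):
        print(f"doubling every {dt} yr: about "
              f"{2000 + years_to_target(1e9, 1e16, dt):.0f}")

With these disputable inputs the crossover lands between roughly 2023 and
2047, which is why the estimates above cluster in that band despite their
very different starting assumptions.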
[B ii] Increasing computer power will lead to direct augmentation of
human intelligence and other abilities.
Why build an artificial brain when we each have one already? Well, it is
regarded as impolite to delve intrusively into a living brain purely for
experimental purposes, whether by drugs or surgery (sometimes dubbed
`neurohacking'), except if no other course of treatment for an illness
is available. Increasingly subtle scanning machines are now available,
allowing us to watch as the human brain does its stuff, and a few brave
pioneers are coupling chips to parts of themselves, but few expect us to
wire ourselves to machines in the immediate future. That might be
mistaken, however. Professor Kevin Warwick, of Reading University,
successfully implanted a sensor-trackable chip into his arm in 1998. A
year later, he allowed an implanted chip to monitor his neural and
muscular patterns, then had a computer use this information to copy the
signals back to his body and cause his limbs to move; he was thus a kind
of puppet, driven by the computer signals. He plans experiments where
the computer, via similar chips, takes control of his emotions as well
as his actions.[12]
As we gradually learn to read the language of the brain's
neural nets more closely, and finally to write directly back to them, we
will find ways to expand our senses, directly experience distant sensors
and robot bodies (perhaps giving us access to horribly inhospitable
environments like the depths of the oceans or the blazing surface of
Venus). Instead of hammering keyboards or calculators, we might access
chips or the global net directly via implanted interfaces. Perhaps
sensitive monitors will track brainwaves, myoelectricity (muscles) and
other indices, and even impose patterns on our brains using powerful,
directed magnetic fields. Augmentations of this kind, albeit
rudimentary, are already seen at the lab level. Perhaps by 2020 we'll
see boosted humans able to share their thoughts directly with computers.
If so, it is a fair bet that neuroscience and computer science will
combine to map the processes and algorithms of the naturally evolved
brain, and try to emulate it in machines. Unless there actually is a
mysterious non-replicable spiritual component, a soul, we'd then expect
to see a rapid transition to self-augmenting machines--and we'd be back
to path [B i].
[B iii] Increasing computer power and advances in neuroscience will
lead to rapid uploading of human minds.
On the other hand, if [B ii] turns out to be easier than [B i], we would
open the door to rapid uploading technologies. Once the brain/mind can
be put into a parallel circuit with a machine as complex as a human
cortex (available, as we've seen, somewhere between 2020 and 2040), we might
expect a complete, real-time emulation of the scanned brain to be run
inside the machine that's copied it. Again, unless the `soul' fails to
port over along with the information and topological structure, you'd
then find your perfect twin (although grievously short on, ahem, a body)
dwelling inside the device.
Your uploaded double would need to be provided with adequate
sensors (possibly enhanced, compared with our limited eyes and ears and
tastebuds), plus means of acting with ordinary intuitive grace on the
world (via physical effectors of some kind--robotic limbs, say, or a
robotic telepresence). Or perhaps your upload twin would inhabit a
cyberspace reality, less detailed than ours but more conducive to being
rewritten closer to heart's desire. Such VR protocols should lend
themselves readily to life as an uploaded personality.
Once personality uploading is shown to be possible and
tolerable or, better still, enjoyable, we can expect at least some
people to copy themselves into cyberspace. How rapidly this new world is
colonised will depend on how expensive it is to port somebody there, and
to sustain them. Computer storage and run-time should be far cheaper by
then, of course, but still not entirely free. As economist Robin Hanson
has argued, the problem is amenable to traditional economic analysis. `I
see very little chance that cheap fast upload copying technology would
not be used to cheaply create so many copies that the typical copy would
have an income near `subsistence' level.'[13] On the other hand, `If you
so choose to limit your copying, you might turn an initial nest egg into
fabulous wealth, making your few descendants very rich and able to
afford lots of memory.'
If an explosion of uploads is due to occur quite quickly after
the technology emerges, early adopters would gobble up most of the
available computing resources. But this assumes that uploaded
personalities would retain the same apparent continuity we fleshly
humans prize. Being binary code, after all (however complicated), such
people might find it easier to alter themselves--to rewrite their source
code, so to speak, and to link themselves directly to other uploaded
people, and AIs if there are any around. This looks like a recipe for a
Spike to me. How soon? It depends. If true AI-level machines are needed,
and perhaps medical nanotechnology to perform neuron-by-neuron,
synapse-by-synapse brain scanning, we'll wait until both technologies
are out of beta-testing and fairly stable. That would be 2040 or 2050,
I'd guesstimate.
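Hanson's point can be made concrete with a toy model--a minimal sketch
under assumptions that are entirely mine (a fixed wage pool for a skill
niche, identical copies, a flat per-copy running cost); none of the
numbers are Hanson's:

    # Toy model of Hanson's argument: cheap copying multiplies copies
    # until the typical copy's income falls to subsistence.
    def equilibrium_copies(wage_pool, run_cost, subsistence):
        """Copies keep being made while one more copy still clears
        subsistence after hardware running costs."""
        n = 1
        while wage_pool / (n + 1) - run_cost >= subsistence:
            n += 1
        return n

    # Hypothetical figures: a $1e9/yr wage pool for this niche,
    # $500/yr to run a copy, $1,000/yr subsistence income.
    n = equilibrium_copies(1e9, 500.0, 1000.0)
    print(n, "copies; income per copy:", round(1e9 / n - 500.0, 2))

However crude, the result has the shape Hanson describes: the copy count
balloons until per-copy income sits right at the subsistence line.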
[B iv] Increasing connectivity of the Internet will allow individuals
or small groups to amplify the effectiveness of their conjoined
intelligence.
Routine disseminated software advances will create (or evolve) ever
smarter and more useful support systems for thinking, gathering data,
writing new programs--and the outcome will be an
`in-one-bound-Jack-was-free' surge into AI. That is the garage band
model of a singularity, and while it has a certain cheesy appeal, I very
much doubt that's how it will happen.
But the Internet is growing and complexifying at a tremendous
rate. It is barely possible that one day, as Arthur C. Clarke suggested
decades ago of the telephone system, it will just... wake up. After all,
that's what happened to a smart African ape, and unlike computers it and
its close genetic cousins weren't already designed to handle language
and mathematics.
[B v] Research and development of microelectromechanical systems (MEMS)
and fullerene-based devices will lead to industrial nanoassembly, and
thence to `anything boxes'.
Here we have the `classic' molecular nanotechnology pathway, as
predicted by Drexler's Foresight Institute and NASA,[14] but also by the
mainstream of conservative chemists and adjacent scientists working in
MEMS, and funded nanotechnology labs around the world. In a 1995 Wired
article, Eric Drexler predicted nanotechnology within 20 years. Is 2015
too soon? Not, surely, for the early stage devices under development by
Zyvex Corporation in Texas, who hope to have at least preliminary
results by 2010, if not sooner.[15] For many years AI was granted huge
amounts of research funding, without much result (until recently, with a
shift in direction and the wind of Moore's Law at its back). Nano is now
starting to catch the research dollars, with substantial investment from
governments (half a billion dollars promised by Clinton, with more in
Japan and even Australia) and mega-companies such as IBM. The prospect of successful
nanotech is exciting, but should also make you afraid, very afraid. If
nano remains (or rather, becomes) a closely guarded national secret,
contained by munitions laws, a new balance of terror might take us back
to something like the Cold War in international relations--but this
would be a polyvalent, fragmented, perhaps tribalised balance.
Or building and using nanotech might be like the manufacture of
dangerous drugs or nuclear materials: centrally produced by big
corporations' mints, under stringent protocols (you hope, fearful
visions of Homer Simpson's nuclear plant dancing in the back of your
brain), except for those in Colombia and the local bikers' fortress...
Or it might be a Ma & Pa business: a local plant equal,
perhaps, to a used car yard, with a fair-sized raw materials pool, mass
transport to shift raw or partly processed feed stocks in, and finished
product out. This level of implementation might resemble a small
internet server, with some hundreds or thousands of customers. One might
expect the technology to grow more sophisticated quite quickly, as
minting allows the emergence of cheap and amazingly powerful computers.
Ultimately, we might find ourselves with the fabled anything box in
every household, protected against malign uses by an internal AI system
as smart as a human, but without human consciousness and
distractibility. We should be so lucky. But it could happen that way.
A quite different outcome is foreshadowed in a prescient 1959
novel by Damon Knight, A for Anything, in which a `matter duplicator'
leads not to utopian prosperity for all but to cruel feudalism, a
regression to brutal personal power held by those clever thugs who
manage to monopolise the device. A slightly less dystopian future is
portrayed in Neal Stephenson's satirical but seriously intended The
Diamond Age, where tribes and nations and new optional tetherings of
people under flags of affinity or convenience tussle for advantage in a
world where the basic needs of the many poor are provided free, but with
galling drab uniformity, at street corner matter compilers owned by
authorities. That is one way to prevent global ruination at the hands of
crackers, lunatics and criminals, but it's not one that especially
appeals--if an alternative can be found.
Meanwhile, will nanoassembly allow the rich to get richer--to
hug this magic cornucopia to their selfish breasts--while the poor get
poorer? Why should it be so? In a world of 10 billion flesh-and-blood
humans (ignoring the uploads for now), there is plenty of space for
everyone to own decent housing, transport, clothing, arts, music,
sporting opportunities... once we grant the ready availability of nano
mints. Why would the rich permit the poor to own the machineries of
freedom from want? Some optimists adduce benevolence, others prudence.
Above all, perhaps, is the basic law of an information/knowledge
economy: the more people you have thinking and solving and inventing and
finding the bugs and figuring out the patches, the better a nano minting
world is for everyone (just as it is for an open source computing
world). Besides, how could they stop us?[16] (Well, by brute force, or
in the name of all that's decent, or for our own moral good. None of
these methods will long prevail in a world of free-flowing information
and cheap material assembly. Even China has trouble keeping dissidents
and mystics silenced.)
The big necessary step is the prior development of early nano
assemblers, and this will be funded by university and corporate (and
military) money for researchers, as well as by increasing numbers of
private investors who see the marginal pay-offs in owning a piece of
each consecutive improvement in micro- and nano-scale devices. So yes,
the rich will get richer--but the poor will get richer too, as by and
large they do now, in the developed world at least. Not as rich, of
course, nor as fast. By the time the nano and AI revolutions have
attained maturity, these classifications will have shifted ground.
Economists insist that rich and poor will still be with us, but the
metric will have changed so drastically, so strangely, that we
here-and-now can make little sense of it.
[B vi] Research and development in genomics (the Human Genome Project,
etc) will lead to new `wet' biotechnology, lifespan extension, and
ultimately to transhuman enhancements.
This is a rather different approach, and increasingly I see experts
arguing that it is the short-cut to mastery of the worlds of the very
small and the very complex. `Biology, not computing!' is the slogan. After
all, bacteria, ribosomes, viruses, cells for that matter, already
operate beautifully at the micro- and even the nano-scales.
Still, even if technology takes a major turn away from
mechanosynthesis and `hard' minting, this approach will require a vast
armory of traditional and innovative computers and appropriately
ingenious software. The IBM petaflop project Blue Gene (doing a
quadrillion operations a second) will be a huge system of parallel
processors designed to explore protein folding, crucial once the genome
projects have compiled their immense catalogue of genes. Knowing a
gene's recipe is of little value unless you know, as well, how the protein
it encodes twists and curls in three-dimensional space. That is the
promise of the first couple of decades of the 21st century, and it will
surely unlock many secrets and open new pathways.
Exploring those paths will require all the help molecular
biologists can get from advanced computers, virtual reality displays,
and AI adjuncts. Once again, we can reasonably expect those paths to
track right into the foothills of the Spike. Put a date on it? Nobody
knows--but recall that DNA was first decoded in 1953, and by around half
a century later the whole genome will be in the bag. How long until the
next transcendent step--complete understanding of all our genes, how
they express themselves in tissues and organs and abilities and
behavioural bents, how they can be tweaked to improve them dramatically?
Cautiously, the same interval: around 2050. More likely (if Moore's Law
keeps chugging along), half that time: 2025 or 2030.
The usual timetable for the Spike, in other words.
[C] The Singularity happens when we go out and make it happen.
That's Eliezer Yudkowsky's sprightly, in-your-face declaration of
intent, which dismisses as uncomprehending all the querulous cautions
about the transition to superintelligence and the Singularity on its far
side.[17]
Just getting to human-level AI, this analysis claims, is enough
for the final push to a Spike. How so? Don't we need unique competencies
to do that? Isn't the emergence of ultra-intelligence, either
augmented-human or artificial, the very definition of a Vingean singularity?
Yes, but this is most likely to happen when a system with the
innate ability to view and reorganise its own cognitive structure gains
the conscious power of a human brain. A machine might have that
facility, since its programming is listable: you could literally print
it out--in many, many volumes--and check each line. Not so an equivalent
human, with our protein spaghetti brains, compiled by gene recipes and
chemical gradients rather than exact algorithms; we clearly just can't
do that.
So intelligent design turned back upon itself, a cascading
multiplier that has no obvious bounds. The primary challenge becomes
software, not hardware. The raw petaflop end of the project is chugging
along nicely now, mapped by Moore's Law, but even if it tops out, it
doesn't matter. A self-improving seed AI could run glacially slowly on a
limited machine substrate. The point is, so long as it has the capacity
to improve itself, at some point it will do so convulsively, bursting
through any architectural bottlenecks to design its own improved
hardware, maybe even build it (if it's allowed control of tools in a
fabrication plant). So what determines the arrival of the Singularity is
just the amount of effort invested in getting the original seed software
written and debugged.
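Why `convulsively'? A toy model (mine, not Yudkowsky's) shows the shape
of the claim: if each gain in capability raises the rate of further gains
more than proportionally, growth is faster than exponential and runs away
in finite time, almost regardless of the starting hardware.

    # Toy takeoff dynamics: capability c grows at rate k * c**p.
    # With feedback exponent p > 1 the curve diverges in finite time.
    # All parameters are illustrative.
    def takeoff_time(p, k=0.1, c0=1.0, dt=1e-3, cap=1e9):
        """Euler-integrate dc/dt = k * c**p until c exceeds cap."""
        c, t = c0, 0.0
        while c < cap:
            c += k * c**p * dt
            t += dt
            if t > 1e4:            # guard against very slow growth
                return float("inf")
        return t

    print("p=1.0 (plain exponential):", round(takeoff_time(1.0), 1))
    print("p=1.5 (self-improvement) :", round(takeoff_time(1.5), 1))

On these made-up numbers the merely exponential system takes roughly ten
times as long to cross the same threshold, and raising the threshold
barely delays the p > 1 case at all; that insensitivity to raw hardware
is the gist of the `it doesn't matter if Moore's Law tops out' argument.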
This particular argument is detailed in Yudkowsky's ambitious
web documents `Coding a Transhuman AI', `Singularity Analysis' and `The
Plan to Singularity'. It doesn't matter much, though, whether these
specific plans hold up under detailed expert scrutiny; they serve as an
accessible model for the process we're discussing.
Here we see conventional open-source machine intelligence,
starting with industrial AI, leading to a self-rewriting seed AI which
runs right into takeoff to a singularity. You'd have a machine that
combines the brains of a human (maybe literally, in coded format,
although that is not part of Yudkowsky's scheme) with the speed and
memory of a shockingly fast computer. It won't be like anything we've
ever seen on earth. It should be able to optimise its abilities,
compress its source code, turn its architecture from a swamp of mud huts
into a gleaming, compact, ergonomic office (with a spa and a bar in the
penthouse, lest we think this is all grim earnest).[18] Here is quite a
compelling portrait of what it might be like, `human high-level
consciousness and AI rapid algorithmic performance combined
synergetically,' to be such a machine:
Combining Deep Blue with Kasparov... yields a Kasparov
who can wonder `How can I put a queen here?' and blink out for a
fraction of a second while a million moves are automatically examined.
At a higher level of integration, Kasparov's conscious perceptions of
each consciously examined chess position may incorporate data culled
from a million possibilities, and Kasparov's dozen examined positions
may not be consciously simulated moves, but `skips' to the dozen most
plausible futures five moves ahead.[19]
Such a machine, we see, is not really human-equivalent after all. If it
isn't already transhuman or superhuman, it will be as soon as it has
hacked through its own code and revised it (bit by bit, module by
module, making mistakes and rebooting and trying again until the whole
package comes out right). If that account has any validity, we also see
why the decades-long pauses in the timetables cited earlier are
dubious, if not preposterous. Given a human-level AI by 2039, it is not
going to wait around biding its time until 2099 before creating a
discontinuity in cognitive and technological history. That will happen
quite fast, since a self-optimising machine (or upload, perhaps) will
start to function so much faster than its human colleagues that it will
simply leave them behind, along with Moore's plodding Law. A key
distinguishing feature, if Yudkowsky's analysis is sound, is that we
never will see HAL, the autonomous AI in the movie 2001. All we will see
is AI specialised to develop software.
Since I don't know the true shape of the future any more than
you do, I certainly don't know whether an AI or nano-minted Singularity
will be brought about (assuming it does actually occur) by careful,
effortful design in an Institute with a Spike engraved on its door, by a
congeries of industrial and scientific research vectors, or by military
ambitions pouring zillions of dollars into a new arena that promises
endless power through mayhem, or mayhem threatened.
It does strike me as excessively unlikely that we will skid to
a stop anytime soon, or even that a conventional utopia minus any
runaway singularity sequel (Star Trek's complacent future, say) will
roll off the mechanosynthesising assembly line. [20]
Are there boringly obvious technical obstacles to a Spike?
Granted, particular techniques will surely saturate and pass through
inflexion points, tapering off their headlong thrust. If the past is
any guide, new improved techniques will arrive (or be forced into
reality by the lure of profit and sheer curiosity) in time to carry the
curves upward at the same acceleration. If not? Well, then, it will take
longer to reach the Spike, but it is hard to see why progress in the
necessary technologies would simply stop.
Well, perhaps some of these options will become technically
feasible but remain simply unattractive, and hence bypassed. Dr Russell
Blackford, a lawyer, former industrial advocate and literary theorist
who has written interestingly about social resistance to major
innovation, notes that manned exploration of Mars has been a technical
possibility for the past three decades, yet that challenge has not been
taken up. Video-conferencing is available but few use it (unlike the
instant adoption of mobile phones). While a concerted program involving
enough money and with widespread public support could bring us conscious
AI by 2050, he argues, it won't happen. Conflicting social priorities
will emerge, the task will be difficult and horrendously expensive. Are
these objections valid? AI and nano need not be impossibly hard and
costly, since they will flow from current work powered by Moore's Law
improvements. Missions to Mars, by contrast, have no obvious social or
consumer or even scientific benefits beyond their simple feel-good
achievement. Profound science can be done by remote vehicles. By
contrast, minting and AI or IA will bring immediate and copious benefits
to those developing them--and will become less and less expensive, just
as desktop computers have.
What of social forces taking up arms against this future? We've
seen the start of a new round of protests and civil disruptions aimed at
genetically engineered foods and work in cloning and genomics, but not
yet targeted at longevity or computing research. It will come,
inevitably. We shall see strange bedfellows arrayed against the
machineries of major change. The only question is how effective their
impact will be.
In 1999, for example, emeritus professor Alan Kerr, winner of
the lucrative inaugural Australia Prize for his work in plant pathology,
radio-broadcast a heartfelt denunciation of the Greens' adamant
opposition to new genetically engineered crops that allow use of
insecticide to be cut by half. Some aspects of science, though, did
concern Dr Kerr. He admitted that he'd been `scared witless' by the
thesis `that within a generation or two, science will have conquered
death and that humans will become immortal. Have you ever thought of the
consequences to society and the environment of such an achievement? If
you're anything like me, there might be a few sleepless nights ahead of
you. Why don't the greenies get stuck into this potentially horrifying
area of science, instead of attacking genetic engineering with all its
promise for agriculture and the environment?'[21] This, I suspect, is a
short-sighted and ineffective diversionary tactic. It will arouse
confused opposition to life extension and other beneficial ongoing
research programs, but will lash back as well against any ill-understood
technology.
Cultural objections to AI might emerge, as venomous as
yesterday's and today's attacks on contraception and abortion rights, or
anti-racist struggles. If opposition to the Spike, or any of its
contributing factors, gets attached to one or more influential
religions, that might set back or divert the current. Alternatively,
careful study of the risks of general assemblers and autonomous
artificial intelligence might lead to just the kinds of moratoriums that
Greens now urge upon genetically engineered crops and herds. Given the
time lag we can expect before a singularity occurs--at least a decade,
and far more probably two or three--there's room for plenty of informed
specialist and public debate. Just as the basic technologies of the
Spike will depend on design-ahead projects, so too we'll need a kind of
think-ahead program to prepare us for changes that might, indeed, scare
us witless. And of course, the practical impact of new technologies
conditions the sorts of social values that emerge; recall the subtle
interplay between the oral contraceptive pill and sexual mores, and the
swift, easy acceptance of in vitro conception.
Despite these possible impediments to the arrival of the Spike,
I suggest that while it might be delayed, almost certainly it's not
going to be halted. If anything, the surging advances I see every day
coming from labs around the world convince me that we already are racing
up the lower slopes of its curve into the incomprehensible.
In short, it makes little sense to try to pin down the future.
Too many strange changes are occurring already, with more lurking just
out of sight, ready to leap from the equations and surprise us. True AI,
when it occurs, might rush within days or months to SI
(superintelligence), and from there into a realm of Powers whose motives
and plans we can't even start to second-guess. Nano minting could go
feral or worse, used by crackpots or statesmen to squelch their foes and
rapidly smear us all into paste. Or sublime AI Powers might use it to
the same end, recycling our atoms into better living through
femtotechnology.
The single thing I feel confident of is that one of these
trajectories will start its visible run up the right-hand side of the
graph within 10 or 20 years, and by 2030 (or 2050 at latest) will have
put everything we hold self-evident into question. We will live forever;
or we will all perish most horribly; our minds will emigrate to
cyberspace, and start the most ferocious overpopulation race ever seen
on the planet; or our machines will Transcend and take us with them, or
leave us in some peaceful backwater where the meek shall inherit the
Earth. Or something else, something far weirder and... unimaginable.
Don't blame me. That's what I promised you.
> Eliezer insists on hard takeoff, Singularity Utopia insists everything
> will be just fine, I insist we just don't know and we can't know for sure.
> In the past week, I have come up with a number of different
> possibilities. One just occurred to me this morning. Here goes:
Nice projection, Spike. I might have to steal that for a story... :)
Damien Broderick