[extropy-chat] Collective Singularities (Was: Desirability of Singularity)

Anders Sandberg asa at nada.kth.se
Mon Jun 5 01:22:39 UTC 2006


Damien Sullivan wrote:
> On Sun, Jun 04, 2006 at 03:38:25PM -0700, Eliezer S. Yudkowsky wrote:
>> Harry Harrison wrote:
>> > The effects of intelligence are otherwise limited to optimisations on
>> > energy inputs and we're not so far from the thermodynamic limit
>> > already.
>
> The Singularity of the evolution of Homo sapiens probably had more to do
> with structural changes and developing language than with simply using
> more energy.  Then again, I'm skeptical that any phase changes like that
> exist in the future.  OTOH, digital intelligence, with benefits of long
> life and copying and cognitive engineering might well give something one
> could call a Singularity.

The "language singularity" was really about rapid spread of information
and cumulative storage of it within human groups. Possibly we could speak
of a second "writing singularity" too when the cumulative storage got more
resilient - that really accelerated huamn growth, although the adjoining
(and somewhat earlier) introduction of agriculture enabling and
stimulating human population growth helped a lot.

So maybe we can look for factors that increase individual cognitive capacity,
increase knowledge/ability accumulation, decrease cooperation overheads and
increase cooperation synergies. And an increased total population.

Increasing individual cognitive capacity has a big multiplicative effect
on the economy, according to some of my current analysis (stay tuned :-).
Just a small increase (a few points) in "mean IQ" would imply a doubling of
economic output. On the other hand, we do not know how easy that is to
achieve - in terms of nootropics we have no clue, yet we might be seeing
environmental Flynn-effect-like phenomena right now. I guess this is the
core of the traditional singularity scenario: everybody becomes really smart.
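
To make the arithmetic explicit (a minimal sketch only - the per-point
multiplier g below is a placeholder assumption, not a number from my
analysis): if each extra point of mean IQ multiplied output by (1+g), a
doubling would take log(2)/log(1+g) points, so "a few points" corresponds
to g somewhere in the tens of percent.

    import math

    def iq_points_to_double(g):
        # Points of mean IQ needed to double output, assuming each point
        # multiplies output by a factor (1 + g).
        return math.log(2) / math.log(1 + g)

    for g in (0.05, 0.10, 0.15, 0.20):
        print("g = %.2f: doubling after ~%.1f IQ points"
              % (g, iq_points_to_double(g)))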

Knowledge accumulation seems to be improving, since we are approaching a
world where everything people experience gets recorded and becomes
distributable and searchable. We have a bit to go: 6e9 people x 1e10 bit/s
= 6e19 bit/s of raw sensorium, and a 1 megapixel, 24 bit, 1000 Hz gnatbot
eye on every square meter of Earth would be about 1e25 bit/s. Today we
produce a few exabytes of formal information (paper, digital etc.) per
year, or about 3e11 bit/s - roughly 8 orders of magnitude of discrepancy
between human experience and what is currently being made storable. Maybe
the next singularity will not be about intelligence per se, but just about
very good storage and search: a Google singularity, where the effective
intelligence of humanity is amplified by a very good collective memory able
to learn efficiently from collective experience.
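
For anyone who wants to check the back-of-envelope numbers (a sketch only;
the 1.2 exabytes/year figure for formal information is an assumed value
consistent with "a few exabytes"):

    # Back-of-envelope check of the figures above (order of magnitude only).
    SECONDS_PER_YEAR = 3.15e7
    EARTH_SURFACE_M2 = 5.1e14          # total surface area of Earth

    raw_sensorium = 6e9 * 1e10         # 6e9 people x 1e10 bit/s ~ 6e19 bit/s
    gnatbot_eye   = 1e6 * 24 * 1000    # 1 Mpixel x 24 bit x 1000 Hz = 2.4e10 bit/s
    gnatbot_total = gnatbot_eye * EARTH_SURFACE_M2   # one per m^2 ~ 1.2e25 bit/s

    formal_bits_per_year = 1.2e18 * 8  # ~1.2 exabytes/year (assumed), in bits
    formal_rate = formal_bits_per_year / SECONDS_PER_YEAR   # ~3e11 bit/s

    gap = raw_sensorium / formal_rate  # ~2e8, i.e. ~8 orders of magnitude
    print("%.1e %.1e %.1e %.1e" % (raw_sensorium, gnatbot_total,
                                   formal_rate, gap))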

Decreasing cooperation overheads not only allows more efficient
cooperation but also larger cooperative groups, enabling larger skillsets
to be applied. This might enable dealing with new parts of problem space.
An open source singularity?

Increased cooperation synergies would mean that we not only get better at
cooperating, as in the previous case, but also receive stronger incentives
to cooperate. This might be the borganism or EarthWeb take on the
singularity.

Finally, we have a larger population: more raw brainpower. The non-hard
takeoff AI singularity and Robin's upload economy fit in here: more brains
and creators can be made as desired and needed.

These are all very collectivist singularities involving lots of agents
rather than the apotheosis of the hard takeoff. I guess they are "swells"
rather than "spikes".

It seems worthwhile to try to figure out which of these factors might be
most amenable to change in the near future; right now the obvious money is
on storage, but I don't know how well data mining will be able to scale
along with it. The intelligence factor might be tricky but is being studied
rather intently. Reduced overheads and increased synergies might be the
wildcards: they are in line with many internet phenomena and much work in
human-computer interfaces, management and spontaneous orders, but it seems
rather unpredictable whether we will get any breakthroughs (maybe they
could be measured in terms of the tech-driven productivity growth that
isn't attributable to individual performance increases?). The population
singularity is far off right now; it will become relevant only when
infomorphs start to have nontrivial brainpower, but will then take off very
fast.

The desirability of these collective singularities probably depends quite
a bit on the particular kind. I find Robin's upload economy a bit worrying
(awfully high Gini coefficient there), the cooperation-enabling
singularities might be very nice or turn into borg vs. borg, the high
intelligence/capability individual one might be messy due to rare but
highly destructive individuals, and the Google singularity might be a
transparent world, something simultaneously good and bad.

In general I think my optimism is based on the assumption/historical
observation that on the whole humanity produces more wealth (material,
intellectual and emotional) than it destroys, and that past singularities
(if we call them that) have amplified this process. This may be because
they were guided by human interests, and the next might have significant
nonhuman actors with nonhuman interests. However, since there is a good
chance that these actors will, at least at the start, be influenced by (or
even imprinted with) human values, even that situation has a fair chance of
following previous lines.

I guess the key question here is to identify some of the clearer attractors
that are nasty and find proactionary ways of avoiding them. Just as
Friendly AI is about avoiding a nasty hard-takeoff singularity, we might
want to come up with "friendly borganism" or "compassionate upload wage
slavery" to fix other potential attractors. :-)

-- 
Anders Sandberg,
Oxford Uehiro Centre for Practical Ethics
Philosophy Faculty of Oxford University




