[extropy-chat] SETI: Long Now vs singularity

Spike spike66 at comcast.net
Sat Jul 10 07:04:22 UTC 2004


 
> natashavita at earthlink.net
...
> "7/9: The Long Now Presents: Seminars About Long-term Thinking 
> 
> Reply to: services at longnow.org
> Date: 2004-06-23, 1:52PM PDT
> 
> 
> SETI researcher JILL TARTER will give a talk titled "The Search for
> Extraterrestrial Intelligence---A Necessarily Long-term Strategy."
...


I went to this talk in order to find out how the Long Now
people relate to singularitarian notions.  Oh my, I was not
disappointed at all, no.  This was an exceeeedingly interesting
pitch by Dr. Tarter.

The pen I brought for taking notes was no good, so I
shall hafta go on failing memory alone.  Do forgive the
impressionistic view of the talk.

Tonight's crowd included the usual suspects, those who
hang out at these kinds of things.  I saw several familiar
faces, even if I couldn't recall their names.  An interesting
observation is that Dr. Tarter used the terms singularity and 
Matrioshka Brains without feeling the need to explain them at 
all.  She had a slide labelled Matrioshka Brain which looked
a lot like the S-brain that we discussed here about a year
ago.  She had a very interesting spin on the singularity.
The following discussion is based on her ideas with some
extrapolation supplied by me.  Assume everything smart is
hers, any dumb, nutty or not-well-thought-out notion is
my mistaken extrapolation.  

In this forum I have expressed the notion that we need to
somehow map out all the possible scenarios for future
technology.  Yudkowskian hard-takeoff AI is only one
such scenario, one which we have yakkity-yakked about
at great length here and on SL4, but there are other
possible futures which should be considered as well.  Perhaps
I will try again to create a matrix or map of all possible
futures.

Dr. Tarter gave a simplified map of possible futures as they
relate to SETI, comprising three general scenarios:
1) technology is short-lived, 2) singularity, and 3) S-curve.
(The terminology here is mine, altho Dr. Tarter did use the
term singularity for scenario 2.)

Her interest in the future of technology is in how it
relates to the success or failure of SETI, a different
spin than we usually have here.  Dr. Tarter showed the three
scenarios with technology on the vertical axis and time on
the horizontal.  We pretty much understand the history of
technology: essentially flat at approximately zero for 3e9 
years, then the curve suddenly spikes upward.  You are here.

In scenario 1, technology is short-lived: we destroy
ourselves and the other beasts that might otherwise
evolve technology sometime real soon now, perhaps in the
next few hundred years, with nukes or runaway nanotech, etc.
The graph looks like a flat line with one or more curious blips.
This is uninteresting from SETI's view, because it means
we will not be able to find ETI by current means: radio-
detectable life is short-lived.

Scenario 2, singularity.  The line is flat at zero for
3 billion years, then suddenly spikes up and keeps going
up indefinitely.  Again, uninteresting from SETI's view,
for a post-singularity AI is probably undetectable by
current means.  We wouldn't recognize a post-singularity
AI if we saw it, and even if we did, we probably couldn't
find it using radio telescopes, since such lifeforms would
most likely already be here in the form of... well, fill in
the blank.  Yudkowskian commentary welcome here.

Scenario 3, S-curve.  The technology-vs-time line is flat
for 3e9 years, then suddenly jumps upward, then somehow
reaches a new equilibrium at some other level.  I confess
it totally baffles me to imagine how technological
development could flatten out at some level higher than
the one we know so well.  It really seems absurd.  But
consider that scenario 2 assumes a sort of process that
does not saturate, something never observed in any natural
process.  In nature, eeeeeverything saturates or establishes
a new equilibrium somewhere.
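
For concreteness, here is a minimal sketch of the three
curves (my illustration, not anything from Dr. Tarter's
slides; the blip shape, the logistic form, and the carrying
capacity K are arbitrary assumptions, and the units on both
axes mean nothing):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-10.0, 10.0, 1000)  # time, "now" near t = 0

# Scenario 1: technology is short-lived -- a narrow blip,
# then back down to zero.
blip = np.exp(-t**2 / 0.5)

# Scenario 2: singularity -- exponential growth that never
# saturates.
runaway = np.exp(t)

# Scenario 3: S-curve -- logistic growth that saturates at
# a carrying capacity K.
K = 100.0
s_curve = K / (1.0 + np.exp(-t))

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
titles = ("1: short-lived", "2: singularity", "3: S-curve")
for ax, curve, title in zip(axes, (blip, runaway, s_curve), titles):
    ax.plot(t, curve)
    ax.set_title(title)
    ax.set_xlabel("time")
axes[0].set_ylabel("technology level")
plt.tight_layout()
plt.show()

Note that only the logistic curve both leaves the flat line
and then settles at a level a radio telescope could
conceivably stare at for a very long time.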

Scenario 3 is the one that most interests the SETI people,
for ETI can be observed or even recognized if and only if
technology inherently establishes a new equilibrium at some
level within our paltry cognitive grasp.  There are subsets
of scenario 3 in which a new technological equilibrium is
established, but the level of intelligence is so advanced
that we cannot recognize it as ETI.  Examples would be an
M-brain or S-brain, or a more limited version of that, a
Jupiter-sized utility fog.

spike

Anyone who wants to, feel free to cross-post this to SL4.
