[extropy-chat] Singularitarian versus singularity

Samantha Atkins sjatkins at mac.com
Thu Dec 22 03:24:45 UTC 2005


On Dec 21, 2005, at 2:02 PM, Robert Bradbury wrote:

>
>
> On 12/21/05, Samantha Atkins <sjatkins at mac.com> wrote:
>> If we, being as human as
>> we are, can be so apathetic about the survival of our
>> own species, why would anyone believe that some mighty
>> non-human intelligence would give a rat's ass about
>> us?
>
> Good question.  But it is not relevant to whether we must build such
> intelligence if we are to have much of a chance of continuing at all.
>
> Actually Samantha, I view there as being few, if any, barriers to
> current-instantiation humans ensuring the "survival of our own
> species".  We do *not* need to become more intelligent.  We need to
> use the intelligence we have at our disposal more effectively or
> creatively.  For example -- the full nanotech vision can be
> realized using current human intelligence.  It is essentially
> nothing more than a re-execution of the development of the
> semiconductor industry over the last 20-30 years.  We already have
> various means of detecting and reducing most short-term
> hazards (NEOs, etc.).  We have several billion years to figure
> out what to do in the longer term (i.e., when the sun exhausts its
> fuel supply).


I am surprised that you take this position.  Of all people I suspect
that you are very aware of the limits of human intelligence and
decision making, both individually and in groups.  You say we need
to use our intelligence more effectively and creatively.  Of course I
can't argue with that.  But exactly how will we overcome the barriers
to such greater effectiveness, and how big a difference can we
reasonably expect it to make?  While nanotech can be achieved with
current human intelligence, it seems very doubtful to me that current
levels of human intelligence are capable of dealing with the
consequences.

We have no really effective means of fully detecting all NEOs, much
less of dealing with them, for what it's worth.

We do not have "several billion years", and I am amazed to see you
make such a statement.  The world is becoming much smaller and its
management much more complex.  The information load and decision
complexity are exploding.  The speed at which problems develop and
play out is increasing.  We face some quite critical challenges in
energy, environment, human freedom, economics, and the massive
rewiring of most institutions in the face of accelerating and
continuous change.  Humans show signs of being more inclined to get
some "ole time religion" than to become more rational and efficient
in applying their intelligence.  I see no reason to be sanguine about
the prospects of currently existing human beings and human
intelligence being adequate to the challenges we face that are
predictable, much less the ones we cannot predict.

>
> And so one must ask if one is going to rely on a
> "super-intelligence" to "save" us -- how does one guarantee that a
> super-intelligence is (or will remain) so altruistic that our
> "optimal" survival will be the end result?

Again, I never suggested any such thing.  Perhaps this is yet another
example of the limits of human intelligence.  :-)

> Indeed one could make the case that any entity which is not acting
> with its own survival as its primary goal isn't really
> "intelligent".  (The male spiders which sacrifice themselves to
> contribute protein to female spiders so as to increase the number
> of offspring come to mind...)

Ah yes.  And how deep do the patterns of human EP go in largely
determining our behavior and limiting our adaptability?

- samantha

