[extropy-chat] Singularitarian versus singularity

Samantha Atkins sjatkins at mac.com
Wed Dec 21 21:21:19 UTC 2005


On Dec 21, 2005, at 10:23 AM, The Avantguardian wrote:

>
>
> --- Samantha Atkins <sjatkins at mac.com> wrote:
>> I am of the opinion that human-level intelligence is increasingly
>> insufficient to address the problems we face, problems whose
>> solution is critical to our continuing viability.  Thus I see
>> >human intelligence as essential for the survival of humanity (or
>> whatever some or all of us choose to become when we have such
>> choice).  Yes, the arrival of >human intelligence poses dangers.
>> But I think it is also the only real chance we have.
>
> How can you be so certain that super-human
> intelligence (artificial or otherwise) is going to be
> the "savior of mankind"?

I'm not, and I certainly said no such thing.  I said we humans are
reaching, if we have not already reached, the limits of what our
intelligence can successfully deal with.  Ergo, we need >human
intelligence to continue successfully.  We can get some of what is
needed by augmenting our own intelligence, including our group
intelligence.  But the current means we have of doing so are too few
and of insufficient utility.


>
> Don't get me wrong, I like this list. I find it
> entertaining and informational and have grown quite
> fond of some of the posters. Sure, some of the individuals
> on this list are more accomplished than others, but I
> don't think the list is collectively living up to its
> full potential. If high IQ people can be used as an
> indicator of the nature of super-intelligence, then we
> might very well be screwed.
This agrees with my point.  But we should not anthropomorphize
non-human intelligence.

> If we, being as human as
> we are, can be so apathetic about the survival of our
> own species, why would anyone believe that some mighty
> non-human intelligence would give a rat's ass about
> us?

Good question.  But it is not relevant to whether we must build such
intelligence if we are to have much of a chance of continuing at all.

> How do we know that the most likely scenario for
> the Singularity isn't that when the AI boots up, it
> takes a look around, decides we are not worth the
> effort to save, and spends all day writing
> angst-ridden haiku and solving crossword puzzles?

I doubt very much that angst or crossword puzzles are part of its  
goal structure.

- samantha


