[extropy-chat] Singularitarian versus singularity

Robert Bradbury robert.bradbury at gmail.com
Wed Dec 21 22:02:29 UTC 2005


On 12/21/05, Samantha Atkins <sjatkins at mac.com> wrote:

> If we, being as human as
> we are, can be so apathetic about the survival of our
> own species, why would anyone believe that some mighty
> non-human intelligence would give a rat's ass about
> us?
>
>
> Good question.  But not relevant as to whether we must build such
> intelligence if we are to have much of a chance of continuing at all.
>

Actually, Samantha, I see few, if any, barriers to currently instantiated
humans ensuring the "survival of our own species".  We do
*not* need to become more intelligent.  We need to use the intelligence we
have at our disposal more effectively or creatively.  For example -- the
full nanotech vision can be realized using current human intelligence.  It
is essentially nothing more than a re-execution of the development of the
semiconductor industry over the last 20-30 years.  We already have various
means of detecting and mitigating most short-term hazards (NEOs,
etc.).  We have several billion years to figure out what to do about the
longer-term ones (i.e. when the sun exhausts its fuel supply).

Could a single human ensure our survival?  I doubt it.  Could we as a
group?  Quite probably.  The question becomes *who* survives and whether
whatever strategies are followed are "optimal".  This goes back to the
famous Star Trek question -- when do "the needs of the many outweigh the
needs of the few (or the one)"?

And so one must ask, if one is going to rely on a "super-intelligence" to
"save" us, how does one guarantee that such an intelligence is (or will
remain) so altruistic that our "optimal" survival will be the end result?
Indeed, one could make the case that any entity which does not act with
its own survival as its primary goal isn't really "intelligent".  (The
male spiders which sacrifice themselves to contribute protein to female
spiders, and so increase the number of offspring, come to mind...)

> How do we know that the most likely scenario for
> the Singularity isn't that when the AI boots up, it
> takes a look around, decides we are not worth
> the effort to save, and settles in to write angst-ridden
> haiku and solve crossword puzzles all day?
>
>
> I doubt very much that angst or crossword puzzles are part of its goal
> structure.
>

Depends.  If it is intelligent enough to compute the final outcomes for
"life" (which is a distinct possibility), then haiku & crossword puzzles may
not be such a bad way to burn time.  Certainly no better or worse than
ensuring the survival, or preserving the information content, of states of
matter which are recognized as being inherently suboptimal or whose future
paths can easily be predicted.

Robert