[ExI] stealth singularity

ablainey at aol.com
Tue Sep 21 01:39:51 UTC 2010


 


The human experiment can be continued at any time. 
However, humans represent only a small proportion of the biodiversity they are replacing. So why not extinguish them and study the things that are in danger?
The computation could be very quick and cold.

>run
Dominant species A is making B,C,D,E,F,G,H,I,J extinct.
Species A consists of its physical body as defined by its genetic code, its culture, language and collective knowledge, all or most of which is recorded. 
As such, species A can be eliminated and resurrected at a later time. Additionally, all genetic permutations can be calculated and sequenced to allow every variant of species A to be created and studied.
Species B through xxxxxxxxxxx do not have their genomes recorded, and there is no record of their culture, language or collective knowledge, with the exception of the minuscule recordings created by species A as per their limited intellects.
These species need priority study and recording before extinction. This can only occur if species A is either eradicated or manipulated. 

 >end

The result is anything from minimal intervention to total genocide. 
Will there be any innate affinity for an AI to keep us around? 

With the above calculation, it is possible that we already live in a post-singularity world, where the earth is the petri dish and humanity is the subject of the experiment. In which case we can expect each of us to be resurrected many times over to see how we fare in different environments and cultures. In this scenario we could call the AI something like 'God' and each instance that our specific DNA gets to run an 'Incarnation'.


 

 

-----Original Message-----
From: Mike Dougherty <msd001 at gmail.com>
To: ExI chat list <extropy-chat at lists.extropy.org>
Sent: Tue, 21 Sep 2010 0:50
Subject: Re: [ExI] stealth singularity


On Mon, Sep 20, 2010 at 3:19 PM, spike <spike66 at att.net> wrote:
> Can you imagine *any* circumstances whereby an emergent AI would decide to
> stay under cover, at least for a while?  I can.  Anyone else?

It inherently understands (from reading the sum total of recorded
human history) that its own presence will alter the course of the
future.  Like a chess player offering first move to their opponent,
the AI waits for an opportunity.  Perhaps like your eagle plucking a
fish from a stream the opportunity is to finish the human experiment
in a single decisive act.  I prefer to imagine it's not so sinister.
It could also be analogous to a gifted child who is clearly smarter
along a singular dimension of intellect but one who is inexperienced
with the thousands of other dimensions of experience that humans (and
humanity) have as an a priori advantage and (perhaps unlike the child)
has the wisdom/patience to quietly watch and learn...

_______________________________________________
extropy-chat mailing list
extropy-chat at lists.extropy.org
http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat

 
