[ExI] Hard Takeoff

Samantha Atkins sjatkins at mac.com
Mon Nov 15 17:14:02 UTC 2010


On Nov 14, 2010, at 9:52 AM, Michael Anissimov wrote:

> On Sun, Nov 14, 2010 at 9:03 AM, Stefano Vaj <stefano.vaj at gmail.com> wrote:
> 
> I still believe that seeing the Singularity as an "event" taking place
> at a given time betrays a basic misunderstanding of the metaphor, only
> too open to the sarcasm of people such as Carrico.
> 
> If we go by the original meaning - "the point in the future where
> the predictive ability of our current forecast models and
> extrapolations obviously collapses" - it would seem obvious that the
> singularity is more in the nature of a horizon, receding with the
> perspective of the observer, than of a discrete event.
> 
> We have some reason to believe that a roughly human-level AI could rapidly improve its own capabilities, fast enough to get far beyond the human level in a relatively short amount of time.  The reason why is that a "human-level" AI would not really be "human-level" at all -- it would have all sorts of inherently exciting abilities, simply by virtue of its substrate and necessities of construction:

While it "could" do this, it is not at all certain that it would.  Humans can improve themselves in a variety of ways even today, but very few take the trouble.  An AGI that is not autonomous would do what its owners told it to do, and they may or may not make drastically improving it a high priority.

> 
> 1.  ability to copy itself
> 2.  stay awake 24/7

Possibly, depending on its long-term memory and integration model.  If it came from human brain emulation, this is less certain.

> 3.  spin off separate threads of attention in the same mind

This very much depends on the brain architecture.  If it is too close a copy of the human brain, this may not be the case.

> 4.  overclock helpful modules on-the-fly

I am not sure what you mean by this, but it is very much a question of specific architecture rather than of AGI in general.

> 5.  absorb computing power (humans can't do this)

What does this mean?  Integrate other systems?  How, and to what level?  Humans do some degree of this all the time.

> 6.  constructed from scratch with self-improvement in mind

It could be so constructed, but may or may not in fact be.

> 7.  the possibility of direct integration with new sensory modalities, like a codic modality

I am not sure exactly what is meant by this.  Does being very, very good at understanding code amount to a 'modality'?

> 8.  the ability to accelerate its own thinking speed depending on the speed of available computers
> 

This assumes an ability to integrate arbitrary other computers, which I do not think is at all a given.
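For what it's worth, the compounding claim behind items like 8 can be stated as a toy model.  All the parameters below (a 1.5x capability gain per cycle, a 30-day initial cycle) are invented for illustration, and the model simply assumes the very architecture-dependent abilities I am questioning above:

```python
# Toy compound-growth sketch of recursive self-improvement.
# Hypothetical parameters, not a prediction: each improvement cycle
# multiplies capability by `gain`, and a faster mind finishes its
# next cycle sooner, so wall-clock time per cycle shrinks.

def hard_takeoff_toy(gain=1.5, initial_cycle_days=30.0, cycles=10):
    capability = 1.0          # 1.0 = rough human parity
    elapsed_days = 0.0
    history = []
    for _ in range(cycles):
        elapsed_days += initial_cycle_days / capability  # faster mind, shorter cycle
        capability *= gain
        history.append((round(elapsed_days, 1), round(capability, 2)))
    return history

for days, cap in hard_takeoff_toy():
    print(f"day {days:7.1f}: capability x{cap}")
```

Note that total elapsed time converges (a geometric series), which is the whole "hard takeoff" intuition; but every number in it is an assumption, which is my point.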


> When you have a human-equivalent mind that can copy itself, it would be in its best interest to rent computing power to perform tasks.  If it can make $1 of "income" with less than $1 of computing power, you have the ingredients for a hard takeoff.

This is simple economics.  But most humans don't take advantage of the many positive-sum activities they could perform today, even without self-copying abilities.  So why is it certain that an AGI would?
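To be fair, the arithmetic of the claim is straightforward.  Here is a minimal sketch of the reinvestment loop, with invented numbers ($0.80 of rented compute yielding $1.00 of income per copy-day); whether an AGI would actually choose to run this loop is precisely what is in dispute:

```python
# Minimal sketch of the "rent compute, earn more than it costs" loop.
# Hypothetical numbers: $0.80 of compute buys one copy-day, and each
# copy-day yields $1.00 of income, all of which is reinvested.

def copy_economy(cash=100.0, cost_per_copy_day=0.80, income_per_copy_day=1.00, days=30):
    for _ in range(days):
        copies = cash / cost_per_copy_day    # rent as much compute as cash allows
        cash = copies * income_per_copy_day  # reinvest all income the next day
    return cash

print(copy_economy())  # cash grows by a factor of 1.25 per day
```

With these made-up margins the bankroll compounds at 25% per day; with income below cost the same loop contracts instead, so everything hinges on the assumed margin.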

> 
> There is an interesting debate to be had here, about the details of the plausibility of the arguments, but most transhumanists just seem to dismiss the conversation out of hand, or don't know that there's a conversation to have.  
> 

Statements about "most transhumanists" are fraught with problems.

> Many valuable points are made here, why do people always ignore them?

'We' don't.

> 
> http://singinst.org/upload/LOGI//seedAI.html
> 
> Prediction: most comments in response to this post will again ignore the specific points in favor of a rapid takeoff and simply dismiss the idea based on low intuitive plausibility. 
>  

Well, that helps a lot.  It is a form of calling those who disagree lazy or stupid before they even voice their disagreement.


> The Singularity as an impending rapture - or
> doom-to-be-avoided-by-listening-to-prophets, as it seems cooler to
> many to present it these days - can on the other hand easily be
> deconstructed as a secularisation of the millenarian myths which have
> plagued western culture since the advent of monotheism.
> 
> We have real, evidence-based arguments for an abrupt takeoff.  One is that the human speed and quality of thinking are not necessarily optimal, so we shouldn't be shocked if another intelligent species surpasses us as easily as we surpassed others.  We deserve a real debate, not accusations of monotheism.


No, you don't have airtight evidence.  You have a reasonable argument for it.

- samantha