[extropy-chat] Bluff and the Darwin award

Samantha Atkins sjatkins at mac.com
Tue May 16 21:58:32 UTC 2006


On May 16, 2006, at 1:00 PM, Russell Wallace wrote:

> On 5/16/06, Samantha Atkins <sjatkins at mac.com> wrote:
> Surely you jest.  Given a >human AI capable of self-improvement, a  
> hard take-off is at least possible.
>
> "Hard takeoff" as the term has been previously used typically  
> denotes a process in which superintelligent AI is supposed to come  
> about, not something that is supposed to happen after such AI is  
> achieved by other means - what do you understand the term to mean?


I understand it to be what happens after such greater-than-human  
intelligence comes about.  I don't see any way to have a hard takeoff  
without this.

- samantha


