[ExI] MAX MORE in Second Life yesterday

nvitamore at austin.rr.com
Fri Jun 13 22:21:24 UTC 2008

From: Michael Anissimov 

On Fri, Jun 13, 2008 at 6:24 AM, Natasha Vita-More <natasha at natasha.cc> wrote:

>>Further, that "humans as they are known today are no longer significant in
>>shaping the world" is assumptive, based on single-track thinking that SI
>>occurs outside of humans and that humans will not be a part of the SI

>Natasha, the possibility of a hard-takeoff AI operating largely independent
>of humans is a real possibility, not single-track thinking.

Yes, of course it is a possibility. I was not saying that it is not a real
possibility.  I was saying, in response to Keith, that treating it as a
given, or as the only scenario, is single-track thinking.

>If someone created a smarter-than-human AI, it might be easier for the AI
>initially to improve upon itself recursively, rather than enhance the
>intelligence of other humans.  This would be for various reasons: it would
>have complete access to its own source code, its cognitive elements would
>be operating much faster, it could extend itself onto adjacent hardware, etc.

Sure.  But that would not prevent a human from teaming up with the AI.

>Even a friendly superintelligent AI might decide that it's easiest to help
>humans by improving itself quickly, then actually offering help only after
>it has reached an extremely high level of capability.

Sure. But that would not prevent a human from teaming up with the AI.

>Are you familiar with the general arguments for hard takeoff AI?  

Yes of course.

>>My earnest approach to the future is the combined effort of humans
>>and technological innovation in enhancing humans to merge with SI through
>>stages of development, not one big event that occurs overnight.

>This may be your preference, but it may turn out to simply be
>technologically easier to create a self-improving AI first.  Sort of like
>how I might like to say that I'd prefer for fossil fuels to be replaced by
>solar power, but replacing them with nuclear seems far simpler.

Yes, it might.  And it might be a good idea.  But, again, that does not
prevent the other option.  Actually, it might help it.  By the way, I wrote
a paper on this two years ago, published in a journal in the UK. Peter
Voss worked with me on it.  It is about the human and the AI.  You should
read it.

Oh, this is timely ... I am presenting another paper on the Singularity in
Vienna in 3 weeks, at the Universität für Angewandte Kunst Wien.
http://www.dieangewandte.at/  The title is "The Mediated Technological
Singularity:  Human Use as a Passport to Technological Innovation".

I'll pose some questions to the list about the content next week.



More information about the extropy-chat mailing list