[ExI] MAX MORE in Second Life yesterday
michaelanissimov at gmail.com
Fri Jun 13 21:58:47 UTC 2008
On Fri, Jun 13, 2008 at 6:24 AM, Natasha Vita-More <natasha at natasha.cc> wrote:
> Further that "humans as they are known today are no longer significant in
> shaping the world" is assumptive based on single-track thinking that SI will
> occur outside of human and that humans will not be a part of the SI
Natasha, the possibility of a hard-takeoff AI operating largely independently
of humans is real, not single-track thinking.
If someone created a smarter-than-human AI, it might be easier for the AI to
recursively improve itself than to enhance the intelligence of humans, for
several reasons: it would have complete access to its own source code, its
cognitive elements would operate much faster, it could extend itself onto
adjacent hardware, and so on. Even a friendly superintelligent AI might
decide that the easiest way to help humans is to improve itself quickly
first, and only offer help after it has reached an extremely high level of
capability.
Are you familiar with the general arguments for a hard takeoff? Quite a few
are collected here: http://www.singinst.org/upload/LOGI//seedAI.html. If you
assign a hard takeoff a very low probability, then I would at least expect
that you've read the arguments in favor of the possibility and have
refutations of them.
> My earnest approach to the future is the combined effort of the human brain
> and technological innovation in enhancing human to merge with SI through
> stages of development and not one big event that occurs overnight.
This may be your preference, but it may simply turn out to be
technologically easier to create a self-improving AI first. It's a bit like
saying I'd prefer fossil fuels to be replaced by solar power, when replacing
them with nuclear power seems far simpler.