[ExI] The AGI and limiting it
Tom Nowell
nebathenemi at yahoo.co.uk
Wed Mar 5 08:27:39 UTC 2008
The projected scenarios in which an AGI quickly takes
over the world rely on people giving their new
creation access to production tools that it then
subverts. Other doomsday scenarios involve the
Terminator-like idea of your AI hacking into military
computers and starting World War III.
These assume that whoever has invested the time and
effort into making an amazing AI then decides to let
it have free access to the outside world. That is not
especially likely. Letting your AI have access to
assemblers that would allow it to make nanotech
replicators would be foolish unless the nanotech could
itself be contained. After all, what if the AI wasn't
as smart as you thought and created some really bad
nanotech by accident? Your AI should be subject to
the same oversight as your human research staff.
The same goes for letting your AI access the net. If
the AI decides it wants to propagate to improve its
own chance of survival (a digital "selfish gene"
concept) and you give it easy upload access, it could
spread itself all over the place - and bang, all
your research is in other people's hands, because your
AI decided it wanted to try "living" in their
computers instead.
I think sheer fear of losing your research, or of
being sued into oblivion over accidents caused by your
AI, will deter most researchers from letting their AI
have too much access to the non-virtual world. That
said, a clandestine military programme could create an
AI and then let it act with insufficient oversight -
and as the Russian example shows, human researchers
under such circumstances allowed a smallpox strain to
kill people after the disease had been eradicated in
the wild. We are *always* at a *small* risk from
governments placing their "strategic" goals over
common human survival. Over the coming 100 years, that
small risk cumulatively builds into a moderate-sized
one, alongside the many other existential risks to
humanity.
I believe spreading humanity off-planet to save
ourselves from ourselves is a wise insurance policy,
and AGI is one risk among many.
Tom