[ExI] The AGI and limiting it

Lee Corbin lcorbin at rawbw.com
Thu Mar 6 04:02:57 UTC 2008


Tom Nowell writes

> The projected scenarios for the AGI quickly taking
> over the world rely on people giving their new
> creation access to production tools that it then
> subverts.

No, the scenario that scares the pants off people is that
the AGI will become vastly, vastly more intelligent than
people over some very short period of time by 
recursive self-improvement. 

I would suggest that you read Eliezer Yudkowsky's
writings at Singinst.org, e.g. "Staring into the
Singularity" http://yudkowsky.net/singularity.html

(My apologies if you know all about this; it just
didn't sound as though you were aware of the
primary threat.)

For years, Eliezer and others on the SL4 list pondered
the question, "How can you keep the AI in a box?"
Believe it or not, in almost every case it will simply
be able to talk its way out, much as you could
convince a four-year-old to hand you a certain key.
I know this seems difficult to believe, but that is
the conclusion reached by people who have thought
about it for years and years.

Again, my sincere apologies if this is all old to you.
But the reason that Robert Bradbury, John Clark,
and people right here consider that we are probably
doomed is that you can't control something that
is far more intelligent than you are.

Lee

> These assume that whoever's invested the time and
> effort into making an amazing AI then decides to let
> it have free access to the outside world. This may not
> be entirely likely.
