[ExI] The AGI and limiting it

Richard Loosemore rpwl at lightlink.com
Thu Mar 6 15:18:49 UTC 2008


Lee Corbin wrote:
> Tom Nowell writes
> 
>> The projected scenarios for the AGI quickly taking
>> over the world rely on people giving their new
>> creation access to production tools that it then
>> subverts.
> 
> No, the scenario that scares the pants off people is that
> the AGI will become vastly, vastly more intelligent than
> people over some very short period of time by 
> recursive self-improvement. 
> 
> I would suggest that you read Eliezer Yudkowsky's
> writings at Singinst.org, e.g. "Staring into the
> Singularity" http://yudkowsky.net/singularity.html
> 
> (My apologies if you know all about it---I'm sorry,
> it just didn't sound as though you were aware of
> the primary threat.)
> 
> For years, Eliezer and others on the SL4 list pondered
> the question, "How can you keep the AI in a box?".
> Believe it or not, it will in almost every case simply be
> able to talk its way out, much in the way that you
> could convince a four-year-old to hand you a certain
> key.  I know this seems difficult to believe, but that
> is what people have concluded who've thought about
> this for years and years and years.
> 
> Again, my sincere apologies if this is all old to you.
> But the reason that Robert Bradbury, John Clark,
> and people right here consider that we are probably
> doomed is that you can't control something that
> is far more intelligent than you are.

The analysis of AGI safety given by Eliezer is weak to the point of 
uselessness, because it makes a number of assumptions about the 
architecture of AGI systems that are not supported by evidence or argument.

Your comment "I know this seems difficult to believe, but that is what 
people have concluded who've thought about this for years and years and 
years" makes me smile.

Some of those people who have thought about it for years and years and 
years were invited to discuss these issues in greater depth and to examine 
the disputed assumptions.  The result?  They mounted a vitriolic 
campaign of personal abuse against those who wanted to suggest that 
Eliezer might not be right, and banned them from the SL4 mailing list.

You will find that a much broader and more vigorous discussion of AI 
safety issues has been taking place on the AGI mailing list for some 
time now.



Richard Loosemore



