[ExI] The AGI and limiting it

Lee Corbin lcorbin at rawbw.com
Thu Mar 6 17:06:53 UTC 2008


Richard writes

> Lee Corbin wrote:
> 
>> Again, my sincere apologies if this is all old to you
>> [those who appeared to be new to the subject]
>> But the reason that Robert Bradbury, John Clark,
>> and people right here consider that we are probably
>> doomed is that you can't control something that
>> is far more intelligent than you are.
> 
> The analysis of AGI safety given by Eliezer is weak to the point of 
> uselessness, because it makes a number of assumptions about the 
> architecture of AGI systems that are not supported by evidence
> or argument.

Sorry, but quite a number of us have found those arguments
very convincing, though of course by no means the final word.

> Your comment "I know this seems difficult to believe, but that is what 
> people have concluded who've thought about this for years and years and 
> years" makes me smile.

Yes, I should have acknowledged the existence of dissenting views
(which have traditionally received little support here on the Extropian
list).

> Some of those people who have thought about it for years and years and 
> years were invited to discuss these issues in greater depth, and examine 
> the disputed assumptions.  The result?  They mounted a vitriolic 
> campaign of personal abuse against those who wanted to suggest that 
> Eliezer might not be right, and banned them from the SL4 mailing list.

I.e., you got banned. How many other people were banned from that list
simply because they disagreed with the majority?

> You will find that a much broader and more vigorous discussion of AI 
> safety issues has been taking place on the AGI mailing list for some 
> time now.

Thanks for the information. You should probably provide a link.

Lee
