[ExI] singularity summit on foxnews

Samantha Atkins sjatkins at mac.com
Fri Sep 14 03:38:57 UTC 2007

Richard Loosemore wrote:
> Samantha Atkins wrote:
>> Brent,
>> Calling those who point out the dangers of AGI "fear mongers" 
>> immediately denigrates their position.  That is not a reasonable 
>> way to begin eliciting thoughtful discussion.   Please start again with 
>> a less biased attitude.
>> - samantha
> No, what he said was not "biased," because the people he described as 
> fear mongers are not neutrally and with due diligence "pointing out the 
> dangers of AGI," as you choose (with some bias, maybe?) to phrase it; 
> they are exaggerating the dangers in an (apparent) attempt to attract 
> attention to themselves.
Since I know many of these folks and just how diligent they in fact are, 
the above seems rather incredible.  When we are talking about that which 
likely makes humanity obsolete, or at least no longer the top dog, 
intelligence-wise, in the local puddle, I would think a fair amount of 
caution would be the sensible default position.
> As you know, I pointed out to the SIAI-related community that there were 
> other possible approaches to the whole enterprise that is Artificial 
> Intelligence research, and that some of those other approaches hold out 
> the possibility of building extremely stable systems that do not suffer 
> from the many problems they so loudly complain about.
Please refresh my memory, as I seem to have forgotten the nature of these 
systems that were both adequately powerful and most likely safe.  I am 
very interested in such possibilities.

> What was their response to this suggestion?  Were they eager to 
> investigate this possibility?  Did they devote some time to having a 
> public debate about the issues?  Did they welcome the possibility that 
> things might not be as dire as they thought?
I remember there was a lot of bad feeling between yourself, Eliezer, and 
some others, and a fair amount of mud-slinging on all sides.  The general 
taste of it left one wondering whether the AGIs replacing such crazed 
semi-intelligent monkeys as ourselves would be such a bad thing after all. 

> No.  What they did was mount a childish series of venomous, ad hominem 
> attacks against both the ideas and the person who suggested them.  These 
> attacks were led by Eliezer Yudkowsky himself.  They did not have the 
> slightest interest in the issue itself.
I think that is a bit overstated and one-sided, but this is not an 
invitation to prove your point with a rehash.

- samantha
