[extropy-chat] Singularitarian versus singularity

Brett Paatsch bpaatsch at bigpond.net.au
Thu Dec 22 23:05:35 UTC 2005


Eliezer S. Yudkowsky wrote:

>>> I agree that political yammering is a failure mode, which is why
>>> the SL4 list bans political discussion.
>> 
>> To ban something is an intensely political act.
> 
> Well, duh.  I'm not saying it's bad to commit acts that someone might 
> view as intensely political.  

What you *did* say is on record at the top of this post: that *you*
agree, and that *that* is why.

You have form on this issue. You have tried to have political issues
banned on this list before.  

>> To ban political yammering there has to be someone who decides
>> what is and is not political yammering.
> 
> The SL4 List Snipers are myself, Mitch Howe, and J. Andrew Rogers.
> 
>> How do you conclude that *you* are especially qualified to decide 
>> what is political yammering?
> 
> Anyone with common sense can do the job.  We don't try to
> discriminate between good political posts and bad political posts, we
> just ban it all.  That's not what the SL4 list is for.

And how are we to suppose a work in progress such as yourself
decides who has common sense, I wonder?  Prejudice, maybe?

>> It seems like a "friendly" AI with
>> *your* values could only be a benevolent dictator at best.  And
>> benevolent not as those that are ruled by it decide but as it decides
>> using the values built in by you.
> 
> Yeah, the same way an AI built by pre-Copernican scientists must
> forever believe that the Sun orbits the Earth.  Unless the scientists 
> understand Bayes better than they understand Newtonian mechanics.
>  AIs ain't tape recorders.

This paragraph of yours is completely irrelevant and utterly absurd.
 
> http://singinst.org/friendly/collective-volition.html

This is a link to a work in progress, Collective Volition, with one
author: Eliezer Yudkowsky.

How is this link anything other than an attempt to divert attention
from your faux pas?  

I have some very serious doubts about the aims of the Singularity
Institute as I've understood them, but in all other areas of discussion
you exhibit such good sense that I have set them aside.  

I cannot see how an AI built with your values could be friendly,
Eliezer. Nor do I see that you have enough common sense to know
what you do not know ("political yammering is a failure mode").
You just make an assumption and barge ahead on the basis of
reckless self-belief.


Brett Paatsch



