[extropy-chat] Singularitarian versus singularity

Brett Paatsch bpaatsch at bigpond.net.au
Fri Dec 23 02:48:13 UTC 2005


Eliezer S. Yudkowsky wrote:

> Brett Paatsch wrote:
>> Eliezer S. Yudkowsky wrote:
>>
>>>>> I agree that political yammering is a failure mode, which is
>>>>> why the SL4 list bans political discussion.
>>>>
>>>> To ban something is an intensely political act.
>>>
>>> Well, duh.  I'm not saying it's bad to commit acts that someone
>>> might view as intensely political.
>>
>> What you *did* say is on record and is at the top of this post. That 
>> *you* agree and that *that* is why. You have form on this issue. You
>> have tried to have political issues banned on this list before.
>
> Yes.  Why are you taking such offense at that?

You *always* offend me when you call for an end to political
discussion on this list. And sooner or later you always do
weigh in on the side of censoring political discussion on this
list.

Your comment at the end of your post had nothing to do with
the comments you had made before it in the thread, and nothing
to do with what Stuart had said.

> Is it your opinion that every message on every mailing list ought for the 
> sake of sanity to be utterly free?

No.

> That one may never impose censorship, of any kind?

No.

>  This Extropians list is censored, you know.  There are people
> who have been asked not to post here.

It has not, so far, been censored of political discussion. Although
you have made it clear in the past that that is a state of affairs
you would prefer, and whenever the opportunity arises you make
your feelings known.

It is my understanding that those who have been asked not
to post, as you put it (and I am aware of only one), were asked
not to because they would not stop attacking other posters, as
opposed to attacking their arguments.

> Every scientific journal and edited volume chooses what to publish and 
> what not to publish.  And this serves a function in science; it is more 
> than just a convenience.  (On SL4 it is just a convenience.)
>
>>> Anyone with common sense can do the job.  We don't try to discriminate 
>>> between good political posts and bad political posts,
>>> we just ban it all.  That's not what the SL4 list is for.
>>
>> And how are we to suppose a work in progress such as yourself decides
>> who has common sense I wonder?  Pre-judice maybe?
>
> Mostly it's a question of who's willing to put the work into the job of 
> Sniping.

Who appoints the snipers? Who chooses the term "Snipers" instead of
moderators?  You?

>>>> It seems like a "friendly" AI with *your* values could only be a
>>>> benevolent dictator at best.  And benevolent not as those that
>>>> are ruled by it decide but as it decides using the values built
>>>> in by you.
>>>
>>> Yeah, the same way an AI built by pre-Copernican scientists must forever 
>>> believe that the Sun orbits the Earth.  Unless the
>>> scientists understand Bayes better than they understand Newtonian
>>> mechanics. AIs ain't tape recorders.
>>
>> This paragraph of yours is completely irrelevant, and utterly absurd.
>
> Perhaps it could do with explaining at greater length, I suppose. There's 
> a rather complex point here, about programming questions rather than 
> answers.  The essential idea is that an AI embodies a question, not an 
> answer - in the example above, "How does the universe work?" not "The Sun 
> orbits the Earth!"  But that is a fact-question, not a decision-question. 
> The decision-question that I currently suggest for Friendly AI is quite a 
> complex computation but one that would focus on then-existing humans, not 
> on the programmers, and that decision-question is described in the link I 
> gave:
>
>>> http://singinst.org/friendly/collective-volition.html
>>
>> This is a link to a work in progress, Collective volition - one
>> author - Eliezer Yudkowsky. How is this link anything other than an
>> attempt to divert attention from your faux pas?
>
> It's an attempt to explain why an AI does not need to be a tape recorder 
> playing back the mistakes of its creators.  Into which territory you did 
> tread.

I did not ask you to differentiate between a tape recorder and an AI;
you presumed to do that. You also presumed to write "duh" and to
imply that I'd asked you a dumb question. There is a fine line between
justified confidence and unjustified arrogance.

I am concerned that if you pursue friendly AI with your values from
a position of responsibility, such as the one you apparently hold
within the Singularity Institute, then any AI that results may be
distinctly unfriendly to people like me who do not want to be ruled
by a benevolent dictator with an inclination to censorship and
authoritarian rule.

>> I have some very
>> serious doubts about the aims of the Singularity Institute as I've
>> understood them, but in all other areas of discussion you exhibit
>> such good sense that I have set them aside. I cannot see how an AI
>> built with your values could be friendly Eliezer.
>
> I cannot see how an AI built with any human's fixed values could be a good 
> idea.
>
>> Nor do I see that
>> you have enough common sense to know what you do not know, "all
>> political yammering is a failure mode".
>
> Let us by all means be careful: your quote is not precisely correct and 
> the difference is significant.  You added the word "all".  I hold that 
> there exists a failure mode which consists of political discussion.  Not 
> that all political discussion everywhere is a failure mode.  "Yammering" I 
> define as political discussion which is extremely unlikely to influence 
> the real-world outcome.

That is extremely unlikely in your opinion. You are a very intelligent
guy, and I have been reading your posts to this list for years, but I
do not regard you as particularly politically astute. Nor do I have
any reason to think that you know much about influencing significant
real-world outcomes. To the best of my knowledge the biggest job
you've had is the one you currently have.

>> You just make an assumption
>> and bang ahead on the basis of reckless self-belief.
>
> Thee too has exhibited good sense, and for that reason I'll ask again what 
> offends thee so.  For I do not in truth understand how I have managed to 
> tick thee off.

You will always tick me off when you use the affection and regard of
those here for you in such a way as to increase the chances of
censoring out political discussion from the ExI list.

You don't tick me off much more than when Eugen does it, or
Harvey; it's about the same. Slightly more.

But that aside, you are not just anyone. You are a person in
a position of trust and some authority at the Singularity Institute
and the propagation of your values to the extent that they are not
values I'd like to see propagated is naturally going to concern me.

At present I don't think your project is showing much sign of
progress, but if it did, that would, in my opinion, be a cause
for more concern than excitement.

That is honestly how I feel. And that is how you have caused
me to feel.

Brett Paatsch 
