[extropy-chat] Singularitarian versus singularity

Eliezer S. Yudkowsky sentience at pobox.com
Thu Dec 22 23:58:02 UTC 2005


Brett Paatsch wrote:
> Eliezer S. Yudkowsky wrote:
> 
>>>> I agree that political yammering is a failure mode, which is
>>>> why the SL4 list bans political discussion.
>>> 
>>> To ban something is an intensely political act.
>> 
>> Well, duh.  I'm not saying it's bad to commit acts that someone
>> might view as intensely political.
> 
> What you *did* say is on record and is at the top of this post. That 
> *you* agree and that *that* is why. You have form on this issue. You
> have tried to have political issues banned on this list before.

Yes.  Why are you taking such offense at that?  Is it your opinion that 
every message on every mailing list ought, for the sake of sanity, to be 
utterly free?  That one may never impose censorship of any kind?  This 
Extropians list is censored, you know.  There are people who have been 
asked not to post here.

Every scientific journal and edited volume chooses what to publish and 
what not to publish.  And this serves a function in science; it is more 
than just a convenience.  (On SL4 it is just a convenience.)

>> Anyone with common sense can do the job.  We don't try to 
>> discriminate between good political posts and bad political posts,
>> we just ban it all.  That's not what the SL4 list is for.
> 
> And how are we to suppose a work in progress such as yourself decides
> who has common sense I wonder?  Pre-judice maybe?

Mostly it's a question of who's willing to put the work into the job of 
Sniping.

>>> It seems like a "friendly" AI with *your* values could only be a
>>> benevolent dictator at best.  And benevolent not as those that
>>> are ruled by it decide but as it decides using the values built
>>> in by you.
>> 
>> Yeah, the same way an AI built by pre-Copernican scientists must 
>> forever believe that the Sun orbits the Earth.  Unless the
>> scientists understand Bayes better than they understand Newtonian
>> mechanics. AIs ain't tape recorders.
> 
> This paragraph of yours is completely irrelevant, and utterly absurd.

Perhaps it could do with explaining at greater length, I suppose. 
There's a rather complex point here, about programming questions rather 
than answers.  The essential idea is that an AI embodies a question, not 
an answer - in the example above, "How does the universe work?" not "The 
Sun orbits the Earth!"  But that is a fact-question, not a 
decision-question.  The decision-question that I currently suggest for 
Friendly AI is quite a complex computation but one that would focus on 
then-existing humans, not on the programmers, and that decision-question 
is described in the link I gave:

>> http://singinst.org/friendly/collective-volition.html
> 
> This is a link to a work in progress, Collective volition - one
> author - Eliezer Yudkowsky. How is this link anything other than an
> attempt to divert attention from your faux pas?

It's an attempt to explain why an AI does not need to be a tape recorder 
playing back the mistakes of its creators.  Into which territory you did 
tread.
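To make the "question, not answer" point concrete, here is a minimal 
sketch in Python, under toy assumptions: the hypothesis names, the 
phases-of-Venus likelihood numbers, and the update() helper are all 
hypothetical, invented for illustration and not drawn from the 
collective-volition paper.  A tape-recorder AI would store its builders' 
conclusion; a Bayesian AI stores only the question plus a rule for 
weighing evidence, so it can outgrow its builders' mistake.

    # Illustrative sketch only: hypothesis names, likelihood numbers, and
    # the update() helper are hypothetical, invented for this example.

    # A "tape recorder" AI stores its builders' answer, frozen forever.
    TAPE_RECORDER_BELIEF = "The Sun orbits the Earth"

    # A Bayesian AI stores the *question*: a prior over hypotheses plus an
    # updating rule, so later evidence can overturn the builders' guess.
    priors = {
        "geocentric": 0.9,    # pre-Copernican builders think this very likely
        "heliocentric": 0.1,
    }

    # Toy likelihoods of observing the phases of Venus under each hypothesis.
    likelihood_of_venus_phases = {
        "geocentric": 0.05,
        "heliocentric": 0.95,
    }

    def update(priors, likelihoods):
        """Return posterior probabilities via Bayes' rule."""
        unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

    posterior = update(priors, likelihood_of_venus_phases)
    print(posterior)
    # {'geocentric': ~0.32, 'heliocentric': ~0.68} -- one observation already
    # shifts weight away from the builders' answer; more evidence keeps pushing.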

> I have some very
> serious doubts about the aims of the Singularity Institute as I've
> understood them, but in all other areas of discussion you exhibit
> such good sense that I have set them aside. I cannot see how an AI
> built with your values could be friendly Eliezer.

I cannot see how an AI built with any human's fixed values could be a 
good idea.

> Nor do I see that
> you have enough common sense to know what you do not know, "all
> political yammering is a failure mode".

Let us by all means be careful: your quote is not precisely correct and 
the difference is significant.  You added the word "all".  I hold that 
there exists a failure mode which consists of political discussion.  Not 
that all political discussion everywhere is a failure mode.  "Yammering" 
I define as political discussion which is extremely unlikely to 
influence the real-world outcome.

> You just make an assumption
> and bang ahead on the basis of reckless self-belief.

Thou too hast exhibited good sense, and for that reason I'll ask again 
what offends thee so.  For I do not in truth understand how I have 
managed to tick thee off.

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
