[extropy-chat] Singularitarian versus singularity

BillK pharos at gmail.com
Fri Dec 23 00:00:49 UTC 2005


On 12/22/05, Brett Paatsch wrote:
> Eliezer S. Yudkowsky wrote:
>
> >>> I agree that political yammering is a failure mode, which is why
> >>> the SL4 list bans political discussion.
> >>
> >> To ban something is an intensely political act.
> >
> > Well, duh.  I'm not saying it's bad to commit acts that someone might
> > view as intensely political.
>
> What you *did* say is on record and is at the top of this post. That
> *you* agree and that *that* is why.
>
> You have form on this issue. You have tried to have political issues
> banned on this list before.
>
> >> To ban political yammering there has to be someone who decides
> >> what is and is not political yammering.
> >
> > The SL4 List Snipers are myself, Mitch Howe, and J. Andrew Rogers.
> >
> >> How do you conclude that *you* are especially qualified to decide
> >> what is political yammering?
> >
> > Anyone with common sense can do the job.  We don't try to
> > discriminate between good political posts and bad political posts, we
> > just ban it all.  That's not what the SL4 list is for.
>
> And how are we to suppose a work in progress such as yourself
> decides who has common sense I wonder?  Pre-judice maybe?
>
<snip>


Brett, you have started talking nonsense now. You must have got angry
at Eliezer.

Every email list has a set of suitable subjects for discussion.
If the list is for vintage motorcycle discussions, that is what you
expect most of the members to discuss. The moderators may allow some
associated topics, but will eventually terminate discussions that are
really off-topic.
That's not political. That's the basics of how email lists work.

If you want to discuss Disney cartoons of the 1940s, you don't jump
into a list about quantum physics and start posting. The moderators
will soon stop you.

The SL4 list objectives are:
The SL4 mailing list is a refuge for discussion of advanced topics in
transhumanism and the Singularity, including but not limited to topics
such as Friendly AI, strategies for handling the emergence of
ultra-powerful technologies, handling existential risks (planetary
risks), strategies to accelerate the Singularity or protect its
integrity, avoiding the military use of nanotechnology and grey goo
accidents, methods of human intelligence enhancement, self-improving
Artificial Intelligence, contemporary AI projects that are explicitly
trying for genuine Artificial Intelligence or even a Singularity,
rapid Singularities versus slow Singularities, Singularitarian
activism, and more.

So that is what you are expected to discuss there.

End of story.

BillK
