[ExI] Censorship

Anders anders at aleph.se
Fri May 27 08:10:29 UTC 2016


On 2016-05-26 21:49, William Flynn Wallace wrote:
> Why would it be ethical to have censorship in the first place? It's 
> like saying "Put an AI in charge of slavery..."
>
> Dan
>
> Well Dan I hate to tell you this, but we have censorship now in TV, 
> movies, books and maybe more.  I read recently that about 40k books 
> are published every month, and someone has the say-so over whether 
> each goes on sale somewhere (where might be determined by its rating).

No, that is not censorship. If you as a publisher tell me that you will 
not publish my book because it is crap/politically incorrect/will not 
sell/it is Friday, that is your prerogative. There is no right to have 
stuff published. Censorship occurs when a centralized power can decide 
to prevent publication because of content. (Some definitions iffily 
extend to post-publication action, but the core is prepublication 
approval.)

> I can easily see an AI being used for some of the labor of digesting 
> all this material.  I also think an AI would never be in charge of 
> actual censorship, but the AI could kick out books and movies that 
> fudge certain guidelines, so that a human, or a committee, or the 
> Supreme Court could decide what to do with them.

In a sense this is happening with YouTube, where copyright-infringing 
material is blocked - officially after a human has looked at what the 
algorithm found, but in practice often without any human oversight. For 
various sad, hilarious or rage-inducing examples, just search Boing 
Boing's or Slashdot's archives.
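
To make that failure mode concrete, here is a minimal, purely 
hypothetical sketch in Python of such a flag-then-review pipeline. The 
thresholds, the Upload type and the match scores are all invented for 
illustration; this is not YouTube's actual system, just the shape of 
the problem: the human review step exists on paper but is bypassed 
whenever the match score is "high enough".

from dataclasses import dataclass
from typing import List

@dataclass
class Upload:
    title: str
    match_score: float  # similarity to a fingerprint in a claims database

# Assumed thresholds, chosen only for illustration.
AUTO_BLOCK = 0.80    # above this, the block happens with no human review
NEEDS_REVIEW = 0.50  # between this and AUTO_BLOCK, a human is queued

def triage(uploads: List[Upload]) -> None:
    for u in uploads:
        if u.match_score >= AUTO_BLOCK:
            # Officially a reviewer confirms the match; in practice the
            # volume means the block is effectively automatic.
            print("BLOCKED without review: %s (score %.2f)"
                  % (u.title, u.match_score))
        elif u.match_score >= NEEDS_REVIEW:
            print("Queued for human review: %s (score %.2f)"
                  % (u.title, u.match_score))
        else:
            print("Published: %s (score %.2f)"
                  % (u.title, u.match_score))

triage([
    Upload("Birdsong in my backyard", 0.83),  # false positive, blocked anyway
    Upload("Concert bootleg", 0.95),
    Upload("Original piano piece", 0.12),
])

The point is the first branch: once the review queue is long enough, 
the "official" human step adds nothing, and the algorithm is in effect 
the censor.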

>
> Now whether there should BE any kind of censorship is an entirely 
> different question, one that could be debated in this group if it 
> hasn't before (not likely).

As I have mentioned, I am starting to study information hazards 
(http://www.nickbostrom.com/information-hazards.pdf). Some of these may 
actually be serious enough that we rationally should want some form of 
censorship or control.

Others are not serious enough for that, but we may want systems that 
discourage them (libel law, boycotts, whatever).

But we have to be careful with that (e.g. 
http://blog.practicalethics.ox.ac.uk/2014/04/the-automated-boycott/ ). I 
recently enjoyed reading a series of case studies showing how 
information concealment played an important role in many big disasters 
(http://aleph.se/andart2/risk/the-hazard-of-concealing-risk/). 
Generally, limiting information cuts out the good with the bad, and we 
are not very skilled at distinguishing them a priori. Moreover, 
management requires information: if the problem is an underlying 
structure or something concrete rather than bad information per se, 
then the agencies that manage - whether institutions or the open 
society - need that information in order to act. Far too often 
censorship just looks at surface detail.




