Censorship Is A Tool

Background

This blog post, like the one on Tusky rickrolling Gab users, stems from many conversations I’ve had on the Fediverse. Rather than repeating myself in tiny 512-character toots, it’s about time for a full-length blog post I can just point people to.

I first sat down to write this in July of 2019. The Fediverse movement to recruit instances to defederate from Gab and Spinster was in full swing. Along with it came many discussions on the topic of censorship. These two sites in particular are, at best, a haven for the kinds of people who are intelligent enough to develop coherent arguments but lack the self-criticality that demonstrates earnest intellectual honesty. At worst, they are a cesspool of people shouting slurs and advancing actively harmful ideologies. And the people in the former category are deliberately OK with hanging out with the latter, unaware or uncaring that the company they keep reflects on them.

Now that furor has died down, and I think it is possible to write something productive with a more level head.

The following lengthy post outlines a way to view censorship on the Fediverse. It hopefully provides insight into why I am OK with defederation: to build communities.

Censorship On The Internet

Currently, the idea in vogue is that the modern internet's platforms are conducting censorship. Depending on who one asks, there are different claims as to the consequences of this censorship, such as "echo chambers" or "suppression of ideologies". The premise is such a broad and vague statement that it is hard to digest or fully appreciate any nuance, and the claims are equally broad, so it is hard to have a meaningful conversation. If this is as deep as the discussion goes, then it is nothing more than asking someone "Do you think like me?" in order to sort them into a friend-or-foe camp.

Crafting Definitions

I’d like to provide an alternative view, using the term speech to really mean “someone expressing themselves in any form and through any medium, be it written or spoken or painted or danced or sung, etc”.

I view an act of censorship as a discrete action taken by an individual or corporation on someone's speech. However, that's not all! I also view the act of censorship as including or excluding all or a subset of speech due to lack of space or other physical factors. For example, a newspaper having only so many physical inches to print opinion articles precludes its ability to publish 10,000 people's five-page in-depth opinions. I like to think of this idea as being analogous to the economics idea of opportunity cost: the mere fact of including some speech excludes others' speech, because it cannot occupy that same physical space or time. For succinctness in the rest of this post, I'm going to call the discrete-action censorship deliberate censorship, and the "opportunity cost" censorship incidental censorship. Do note that even in incidental censorship, someone still has to decide how to fill a finite unit of time and space; judging whether that decision then becomes deliberate censorship relies on knowing the motivations of the censoring party. I find this kind of speculation distasteful to include in a definition, but fear not! This scenario will be revisited later.

I’d like to think of these two as small, but pretty generous, definitions. People can disagree with this framing, but still find it or the following examination useful.

Big Tech Algos

Popular internet platforms such as Twitter, Facebook, Reddit, and YouTube use statistical or algorithmic methods to decide which specific pieces of content to show individual users. For example, Reddit used to use the lower bound of the Wilson score confidence interval to rank items based on up/down votes. That isn't an algorithm so much as a statistical metric, so it is easy to explain the incidental censorship this statistic performs, independently of any bias of the statistician. Unfortunately, the Wilson score is a problematic way to rank content because it stems from a branch of math dedicated to an abstract model called the Bernoulli trial, which is a nice way of saying "it has a lot of assumptions". And the real world of "people visiting web pages" certainly violates those assumptions. So it's no surprise that these sites instead turn to algorithmic models, which try to maximize something in a messy world where you cannot assume too much. Unfortunately, with very complex algorithms it becomes harder to explain the incidental censorship, and distrustful outsiders can claim that any algorithmic behavior is really deliberate censorship reflective of its creators.
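To make the statistic concrete, here is a small sketch of the Wilson score lower bound as it might be used to rank posts. This is a minimal illustration of the metric itself, not Reddit's actual implementation; the example vote counts are invented:

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a Bernoulli proportion.

    Treats each vote as an independent Bernoulli trial -- exactly the
    assumption this post argues real-world browsing violates.
    z = 1.96 corresponds to a 95% confidence level.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p_hat = upvotes / n
    return (p_hat + z * z / (2 * n)
            - z * math.sqrt((p_hat * (1 - p_hat) + z * z / (4 * n)) / n)
            ) / (1 + z * z / n)

# A post with 50/60 upvotes ranks above one with a perfect 5/5,
# because the larger sample gives a tighter (higher) lower bound.
print(wilson_lower_bound(50, 10))  # ~0.72
print(wilson_lower_bound(5, 0))    # ~0.57
```

Everything below some rank cut-off simply never reaches the screen, which is the incidental censorship the metric performs.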

Let’s start with the incidental.

When the faceless corporations behind these products make sweeping decisions such as "timelines are not chronological", "we won't show all content produced by the people users have explicitly decided to follow/friend/subscribe to", or "ads will be injected among organic content", they're exercising incidental censorship. A chronological post could have been an ad instead, or an older-but-popular item. They could have done it to just me and left y'all's accounts alone. But that's not what technology is about. Technology is about easily scaling up, and so they apply this form of censorship to every user on a large scale. The censorship migrates from being an isolated incident into a systemic feature. By choosing to display certain content, they simply do not show other content to that user. Repeat for everyone.
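To see the "opportunity cost" mechanism in miniature, consider a deliberately toy ranked feed. The scoring formula below is invented purely for illustration and resembles no real platform's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    engagement: float  # stand-in for whatever signal the platform maximizes
    age_hours: float

def ranked_timeline(posts: list[Post], slots: int) -> list[Post]:
    """Score every post, then show only the top `slots`.

    Everything below the cut-off is incidentally censored: no one
    ruled against those posts individually; the scoring function
    plus finite screen space did.
    """
    def score(p: Post) -> float:
        return p.engagement / (1.0 + p.age_hours)  # invented formula

    return sorted(posts, key=score, reverse=True)[:slots]
```

Swap in any scoring function you like; as long as `slots` is finite, something gets excluded.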

It is precisely because of this ability to scale up incidental censorship that I want to revisit the idea that, even in incidental censorship, someone may be deliberately choosing an outcome. That is, people get upset because they believe deliberate censorship is occurring when it is being presented as incidental. For example, "our algorithms aren't biased against bigfoot theories" does nothing to calm bigfoot believers who notice their content is continuously incidentally censored at a very large scale. This, plus the shift to harder-to-explain algorithms, accounts for the rise of folks who believe the incidental censorship is really deliberate. This question of whether something is incidental or deliberate drives a lot of the vehement argument online. It is important to remember later, when I re-apply this examination to the Fediverse.

Switching gears to deliberate censorship: when these faceless corporations boot specific users off the platform, or take down specific content, they are denying the entire platform to content or people in a form of deliberate censorship. If they start applying this to more than one user or piece of content, a systemic pattern can emerge. For example, illegal content is often deliberately censored; that's an easy pattern to spot. Since legality and morality are not the same (but are often mistakenly conflated), sometimes the systemic pattern goes beyond legality to content that's simply undesirable to the platform owner. Furthermore, if people believe a particular platform has a key place in society, as if it were part of the public commons, then isolated acts of censorship become even more scrutinized. Thus, where on the internet a message is relayed can matter to some.

Whew! This is a lot! To summarize with big key takeaways:

- Incidental censorship: space and time are finite, so including some speech necessarily excludes other speech.
- Deliberate censorship: a discrete decision to remove or exclude specific speech or specific people.
- Technology scales both kinds up, turning isolated acts into systemic patterns.
- A lot of outrage comes from disagreement over whether a given act of censorship is incidental or deliberate.

Why Do Folks Censor, Anyway?

Before we examine the Fediverse specifically, we need to understand why people use these two tools: incidental and deliberate censorship. I'll do my best to cover some categories, though this list is not meant to be exhaustive. I'm purposefully choosing mundane examples, because I think it is really important to understand that humans apply censorship as a tool in our everyday lives in really boring ways. I am not trying to strawman a particular viewpoint. I am trying to examine low-stakes scenarios and get comfortable with them, so that when the moral stakes are high I can hold a view that acknowledges nuance and concedes a degree of reasonableness without sounding like an absurd extremist.

Curation

Some people want to curate specific content. Perhaps they want to save and share a list of recipes, with the goal of using it for cooking. By building a list solely of recipes, a person is deliberately excluding all sorts of other works (depending on the medium of their list – paper, digital, etc): paintings, mathematical papers, philosophical treatises, hate speech, videos containing upsetting scenes. Additionally, a person could be incidentally censoring items. Perhaps their piece of paper is so small they can only fit two recipes on it. So long, third recipe!

Wrong Time

Here in Switzerland, noise (or rather, the lack of it) is a huge piece of the culture, even codified in certain zoning laws. At 07:59, the whole country is quiet. At 08:00 (except on Sundays), the country sings with the sound of lawnmowers, jackhammers, leafblowers, and chainsaws. Society has decided to incidentally fill the early-morning hours with tranquility. I've heard reports of deliberate censorship too, such as people yelling out of their windows when someone has the audacity to fire up a lawnmower a few minutes too soon.

Inappropriate

There are certain cultural actions and forms of expressiveness that are simply inappropriate. For example, no one with a shred of decency would try to hit on a grieving widow(er) at their spouse's funeral, nor try to roast the deceased in an act of poor comedic taste. I am sure a very small number of people have had to deliberately censor themselves from making that choice, but most others incidentally censor themselves by choosing instead to participate in the funeral, grieving alongside everyone else.

Spam

I hope I don’t need to elaborate on this one. Spam is a form of speech that simply uses sheer repetition of a single message as its form of expression.

Conclusion

What I want to demonstrate with these cherry-picked examples is simple: people living out their lives in a community deliberately and incidentally censor all the time. This means the question "is censorship occurring?" is no longer useful; the answer is always "yes". We can instead pivot to the more useful and nuanced question of "what censorship is occurring?" when looking at the Fediverse.

Censorship & The Fediverse

The Fediverse is not a censorship-resistant network.

I say this because projects exist that bill censorship resistance as their explicit goal. For example, Freenet has existed for nearly two decades (yes, almost twenty years!) as an explicitly censorship-resistant networking protocol. Another example: running your own HTTP(S) server has long been viewed as censorship-resistant. Finally, and I'm admittedly unfamiliar with this specific solution, Tor Onion Services are also geared for anonymity in addition to censorship resistance.

ActivityPub, on the other hand, is a protocol birthed from OStatus and the RDF community. While I am sure the folks that wrote the ActivityPub protocol care about censorship to some degree, it wasn’t an explicit design goal of the protocol. Likewise, Fediverse applications like Mastodon do not bill themselves as the “censorship-resistant” network.

What is a "censorship-resistant" network anyway? It is one where deliberate censorship fundamentally cannot be done at the network level. Neither can deliberate censorship masquerading as incidental censorship. However, because a computer can only show so many pixels on a screen, and a human only has so many hours in a day, every censorship-resistant network will still have incidental censorship.

Thus, when I see people claim that the Fediverse is a censorship-resistant platform, they are doing everyone a disservice by overloading meanings. They really mean a weaker form of "censorship resistance": the tech companies Twitter, Facebook, etc. are not the ones doing the incidental and deliberate censorship. For some, that's enough. Others, who were expecting more, surely feel let down, as if ActivityPub is flawed for not meeting their expectations. Administrators and moderators of instances still hold the censorship keys, applying both incidental and deliberate censorship. I find this reasonable; to some, even this is too much. They were probably the ones promised the stronger form of censorship resistance and were disappointed to get the ActivityPub version. It sucks, but I've seen their disappointment cross over into uncalled-for territory: outrage and harassment.

Now everything comes together: if the Fediverse isn't about censorship resistance, but about freeing oneself from major platforms, how does defederating fit in? Well, the Fediverse is about building a community for oneself, and if one is an administrator, a community for one's peers. Some people see their own instance as the community they are building; others see a collection of instances as the community they are helping to build. This is a big difference in viewpoint! Instance-centric admins will defederate from and censor other instances in order to protect their members, curate the content they want, ensure it is appropriate, and generally have a productive outcome – productive being their in-group definition, not yours and not mine. We can't stop them from building the community they want, and neither can Twitter, Facebook, etc.!
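For the instance-centric admin, defederation is a concrete, mundane action. As a sketch, recent versions of Mastodon expose domain blocks through an admin API; the instance URL, token, and blocked domain below are placeholders, and the exact endpoint and parameters may vary by version:

```python
import requests

# Placeholder values: substitute your own instance and an access
# token that carries admin scopes.
INSTANCE = "https://example.social"
TOKEN = "ADMIN_ACCESS_TOKEN"

# Create a domain block. A severity of "suspend" is full
# defederation; "silence" is the softer option.
resp = requests.post(
    f"{INSTANCE}/api/v1/admin/domain_blocks",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "domain": "badplace.example",  # the instance to defederate from
        "severity": "suspend",
    },
)
resp.raise_for_status()
print(resp.json())
```

The same action is available through the admin web interface; the point is that it is a deliberate, auditable decision made by a person, not a faceless corporation.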

This is actually a form of empowerment for that community: since I cannot stop someone from building the community they want, I cannot censor them. This is the weak censorship-resistance in action. They just so happen to also be using censorship as a tool to build their community. I find this kind of paradox-like duality fascinating.

On the other hand, members of the Fediverse who view the community they are building as cross-instance look on defederation with chagrin – it can fragment their ideal community and identity. This is real, and I want to acknowledge it despite having no comforting words nor a good solution for it. There are technical workarounds, but they don't make the person feel better.

And finally, there’s a bucket of folks that look at that grassroots movement to defederate Gab and Spinster as an affront to the Federation. I instead think it is a matter of the Fediverse as a whole democratizing ideas and elements protecting itself from harm, and the ideas of those two instances happen to be pretty awful and unpopular on the whole. I can’t speak to other specific incidents where instances that have been targeted for mass-defederation, and can imagine where mob-pitchfork mentality is harmful. However, the federating platform does mean that ideas are free to duke it out with no side having an inherent advantage. That the end result is that Gab and Spinster are pretty much shunned by most Fediverse communities is, to me, indicative of just how harmful the content is: Gab radicalized the Pittsburgh synagogue shooter who killed 11 people. No surprise to me that it is unpopular. That they are shuned en-masse is also indicative of the kind of quality communities that want to live on the Fediverse: they’re already exceeding a very low bar that major companies are currently failing to meet.

The step away from centralized services like Twitter and Facebook to decentralized services is a huge one. It is not the end, but a step in the right direction (I may have more to say on that later this year). Since this step should appeal to everyone still on centralized services, I want a Fediverse where people join up in droves, feel safe on their first instance, and over time feel safe with the instances they federate with. Because even if an instance federates with just one other, that is infinitely more federation than before: none. Transferring the power of incidental censorship away from corporations, even if the current solution is not perfect, is still a fundamental game changer that should be welcomed.


Created: Jul 22, 2019 16:50:19 EDT
Last Updated: Feb 13, 2020 18:44:48 EST
By: Cory Slep
