Blocking vs. Banning on Social Networks

by Hogeye Bill

Jan 5, 2019

Banning is not aggression; not a free speech issue

Banning someone from one’s own property is not aggression; it is well within the rights of any legitimate owner. Thus, Facebook and Twitter and Patreon have the right to ban anyone they wish. Here I intend to examine the meta-issue: What policies should social network firms adopt to have the best quality network? But that’s the question from a social network shareholder and CEO point of view. From a customer-user’s perspective, the question is: What type of social network should I use? Which ones should I join and patronize, and which (as the case may be) should I boycott?

Open speech or filtered speech; “best of all possible worlds”

The answer to the best quality social network question depends on what one wants out of a social network. Do you want open and candid speech, or do you prefer a narrower range of speech, filtering out things such as overt racism and hate speech? Let’s call this open speech versus filtered speech. (Again, we emphasize that we are not discussing “free speech” here.)

There are three major ways to filter messages and posts: 1) by redacting words and phrases, 2) by deleting whole messages, and 3) by deleting users. This final, most extreme method is what we refer to as “blocking.” Blocking is a filter that rejects all posts from users who, in their messages, have used some elements of a set F of forbidden words and ideas. The normal one-on-one individual block is a special case of this; presumably the blocked user offended the blocker somehow. Even though a block could be one-way, an “ignore” feature, I assume here that blocks are two-way and symmetric: the blocker cannot see the blockee’s messages, and neither can the blockee see the blocker’s messages.[1]
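As a sketch, two-way symmetric blocking can be modeled as an unordered pair of users who are mutually invisible to each other. The class and method names here are hypothetical, just to make the symmetry concrete:

```python
class Forum:
    """Toy model of symmetric (two-way) user blocking."""

    def __init__(self):
        # Unordered pairs of users who cannot see each other's posts.
        self.blocks = set()

    def block(self, a, b):
        # A frozenset makes the pair order-independent, so a block
        # by either party hides both directions at once.
        self.blocks.add(frozenset((a, b)))

    def can_see(self, viewer, author):
        return frozenset((viewer, author)) not in self.blocks
```

Because the pair is stored unordered, `can_see("alice", "bob")` and `can_see("bob", "alice")` always agree, which is exactly the two-way symmetry assumed above. A one-way “ignore” would instead store ordered pairs.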

Let us now digress to a fanciful thought experiment. Imagine the best of all possible worlds. Think of a planet, a setting, people you’d like to be with, the best norms and customs you can think of - your own private utopia. But there’s a catch! All other people in your ‘best of all possible worlds’ have the same options as you - to dream up what they think is the best possible world. So forget being king or queen of a bunch of sex slaves. The other people would simply think of the same world minus you, and your world vanishes. Now, the question put forward by Robert Nozick was: Would there be any stable “associations” (worlds) and, if so, what would they be like?

Utopia is a meta-utopia: the environment in which Utopian experiments may be tried out; the environment in which people are free to do their own thing; the environment which must, to a great extent, be realized first if more particular Utopian visions are to be realized stably. - Robert Nozick[2]

Robert Nozick only envisioned what we would call user blocking, but his thought experiment can serve as an idealized model for social interaction. Is the ability of everyone to block anyone enough? Or are there some social dangers so great that a central authority must step in to protect “society” from contamination by bad or harmful speech of some sort? Many open speech forums do currently exist, forums where the only content filtering is done by users. Are these the best forums? If not, why not?

Levels of Filtering

We can filter by word, message, or user. This seems to be the intuitive order of intensity of interference or intolerance. A word (or phrase) based filter replaces an offending word with e.g. dashes or with a replacement word. The Mind If I Do a J forum was an open speech forum except for one thing - it replaced “nigger” with “sockpuppet” in all messages. I asked about it, and an admin told me that they used to have a vandal who used the n-word too frequently. That was their easy fix. A message-based filter would delete the whole message from the forum, rather than “fixing” a word. The most serious response is a user block, filtering out all of the user’s messages, deplatforming[3] that user.

The merit of blocking may depend a great deal on who gets to choose who gets blocked. I favor a decentralized approach to any filtering mechanism; in short: Let the user decide. This might be called the individualist or thick libertarian approach. People who value liberty and non-aggression also tend to favor tolerance and diversity of speech, and a rough and hearty free market of ideas. It is certainly possible to be a libertarian and also want to block hate speech and racial slurs from your correspondence, but most libertarians would “take the risk” of being offended for the prospect of more open and diverse encounters. Furthermore, there is evidence that most libertarians simply do not get offended or disgusted easily.[4]

Individual blocking has the advantages that 1) it respects individual autonomy, and 2) it allows maximum consensual communication. Centralized banning and blocking 1) maintains central planning and control, and 2) suppresses politically incorrect or socially disapproved communication. Large social network firms that depend on advertising revenue or on government concessions and favoritism may have a major incentive to keep centralized control. Perhaps political machinations, such as influence on coming elections and future economic policy, make such authoritarian social manipulation valuable to a large crony corporation. To an individual user, however, it looks quite different. Open speech, even if restricted to in-groups, is better than outside control, to almost everybody.

We have the following levels of intensity for filtering.

  1. Open Speech - no filtering at all
  2. User-Filtered Speech
    a. by word/phrase - redacted or replaced words
    b. by message - offending messages hidden/deleted
    c. by user - offensive users blocked
  3. Authority-Filtered Speech
    a. by word/phrase - redacted or replaced words
    b. by message - offending messages hidden/deleted
    c. by user - offensive users blocked

My personal position is that open speech is generally best. Filtered speech by users is fine in restrained use, but centralized filtering is something I would avoid and probably boycott. Blocking by individual users (2c above) is admittedly useful, e.g. against people who are unable to make cogent arguments, and especially those who prefer slurs to debate. Also, a user block is perhaps the best response to a dingleberry, that is, someone who follows your posts with (often multiple) garbage posts. It is this stalking/dingleberry problem that makes one-way blocks inadequate.

Possible augmentation: Group auto-blocking

But what of my fragile friends on the progressive left, the social justice warriors and allies of victimized minorities? And what of my emotivist friends, who cringe at CSI reruns due to the extreme violence portrayed? What can progressives and emos do to protect themselves from mean, hateful people and the perverted words they spew? And what about political activists, who lead boycotts and shunnings? How does this help people who want to boycott Roseanne Barr, Home Depot, or Chick-fil-A? Currently, many of these people rather like central authority, since it provides more choke points for boycott.

I candidly disclose that I prefer open speech. For those who do not, I submit to you that anything you can do by using central control, you can also do by other means. In particular, we are talking about blocking users. Yes, a central authority can do that for you. Will you always have its ear and its support? If you think so, I suggest that you are being short-sighted. Things change and shit happens. If all you have is “the master’s on my side” then you have little indeed.

What does centralized blocking accomplish that individual user blocking does not? Answer: Automatic blocking of “known” unwanted users. Centralized blocking is, at bottom, a time-saving device for users. It saves each user from having to block offenders one at a time, just as all their friends have already done, one at a time. What a massive amount of redundant, avoidable work! People in your affinity group should not have to block people individually, when they are already known bad guys.

Let me offer an alternative - call it group auto-blocking. Suppose your social network group, call it the Progressive Allies of Victims, the PAV, could vote on providing everyone in the group a filter that blocks any user whose posts contain a word in F, where F = {a list of dirty words and derogatory terms}.

Suppose that, by simply joining the PAV group, you are opting into the F filter. Suddenly, you do not need to see or report people who use hate terms. They are (usually) automatically filtered for you by your membership in PAV.
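A minimal sketch of this opt-in mechanism: each group carries its own forbidden set F, and a member’s feed is computed from the union of the F sets of the groups they have joined. The group names and word lists are hypothetical:

```python
# Each group's voted-on forbidden set F. Joining a group opts you in.
GROUP_FILTERS = {
    "PAV": {"slur1", "slur2"},   # hypothetical hate-term list
    "OpenSpeech": set(),         # a group with no filter at all
}

def offending_users(posts_by_user, forbidden):
    """Users who have used any word in F in any post."""
    return {u for u, posts in posts_by_user.items()
            if any(w in p.lower() for p in posts for w in forbidden)}

def feed_for(member_groups, posts_by_user):
    """What a member sees: posts from everyone not auto-blocked by
    the combined filters of the groups they belong to."""
    forbidden = set().union(*(GROUP_FILTERS[g] for g in member_groups))
    blocked = offending_users(posts_by_user, forbidden)
    return {u: p for u, p in posts_by_user.items() if u not in blocked}
```

The point of the sketch is that no central authority appears anywhere: the filter travels with group membership, and a user who joins no group (or an unfiltered one) sees everything.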

Possible auto-block group criteria include: no cussing, no derogatory terms, no sexual content, no violence, and no hate speech. Then there are likely political faction group filters, such as no pro-Trump messages/users, no anti-Trump messages/users, or even no Trump at all. One could replace “Trump” in the above with anarcho-capitalism, anarcho-communism, socialism, alt-right, antifa, or whatever.

Another consideration is whether any given filter is opt-in or opt-out. Perhaps a social network firm, or a group implementing an auto-block, could have one or more filters on as a default, but allow people to easily opt out if they choose. If all centralized control were done on an opt-out basis (which it isn’t, due to terms of service), then so long as opting out is known and easy, one might not have significant objection to it. Some groups, such as our hypothetical PAV group, probably can assume their members don’t like certain epithets. On the other hand, other groups might offer their filters on an opt-in basis, assuming that the user wants maximum openness until otherwise indicated.

What is likely to happen if group auto-blocking were to replace centralized blocking? Would social cyberspace splinter into tribalist info bubbles incapable of communication with out-groups? I don’t think so, and here’s why: Most people are not as insular, not as identitarian or tribal, as progressives on the one hand or alt-righters on the other. Most of us deal in ideas, and want to hear ideas. Most of us know that, as John Stuart Mill said, if you know only your own side of an issue, you know very little at all. Ideas matter - not merely in-groups and out-groups. Thus, I expect to see many different online communities, with diverse filtering patterns as well as ideas. Some will be sustainable, at least for a while, and some will not.

People try out living in various communities, and they leave or slightly modify the ones they don’t like (find defective). Some communities will be abandoned, others will struggle along, others will split, others will flourish, gain members, and be duplicated elsewhere. Each community must win and hold the voluntary adherence of its members. No pattern is imposed on everyone, and the result will be one pattern if and only if everyone voluntarily chooses to live in accordance with that pattern of community. - Robert Nozick [5]

Objections and Concerns

There are several objections that can be offered to the group auto-block idea.

From the perspective of a social network service, failure to ban could repel new customers. Regular users might figure out the filtering techniques, but virtually all progressive and emotivist new users will be offended, and possibly leave before they make friends. Default opt-out filters may remedy this to a large degree.

From the perspective of a “snowflake” who feels ideas can injure, calling for authorities to step in is natural. “I might see something that oppresses me.” This victimhood culture seeks safe spaces and protection from micro-aggression and hurtful ideas. Despite their benign-sounding motivations, they are natural-born authoritarians.

Activist censors will complain, “We can’t suppress enemy tribe members anymore.” Activists who have succeeded in the past in suppressing enemies, getting advertising revenue cut off, getting podcasters deplatformed, and so on, would like to keep their power of influence.

Would group auto-blocking cause excessive tribalism? Would it make identity politics even more cut-throat? Would it further divide people into information bubbles? Would it make it even harder for people from different political “tribes,” speaking different political languages, to communicate?


There are also several advantages to individual and group filtering, over centralized control.

First of all, there is likely to be more open communication. This follows simply from not having a central authority doing the filtering. From the perspective of users who favor open speech, this is great!

The social network service also gets an advantage: There is no (or at least less) need to conduct surveillance and oversight. Also, there is reduced liability under current law when there is no pretense of curation or oversight.

Quality of debate is likely to improve, due to e.g. selective blocking of non-debaters in debate-oriented forums, group blocking of trolls, affinity groupings, and other cooperative incentives.

Posts would no longer be subject to the lowest common denominator of most fragile users. Fragile users could have their own “safe” groups, rather than appeal to authorities to restrict bigots. Fragile users would still be free to adjust their own experience, but without affecting unwilling others.


Social networks can be open speech forums, and some are. This arrangement is conducive to liberty and the pursuit of truth and knowledge. I hope I have persuaded those who are concerned with inappropriate speech that a central authority is unnecessary for filtering out hateful language - that group auto-blocking would work just as well, with control remaining with individuals and groups.

It goes without saying that any persons may attempt to unite kindred spirits, but, whatever their hopes and longings, none have the right to impose their vision of unity upon the rest. - Robert Nozick [6]

Shouldn’t we let human liberty play out, and see what happens? Do you trust liberty or central authority - the banter of the market or the bark of a sergeant? I think people will work it out, however you or I think it should be. What we can do is provide the conditions of liberty that allow human creativity to flourish.


  1. Under normal circumstances the blocker cannot see the blocked person’s posts; however, in some services a group administrator or moderator can still see posts from people he has blocked.
  2. Robert Nozick, Anarchy, State, and Utopia, part 3.
  3. deplatform - to cancel or disinvite someone from a communications platform. Originally this meant losing an invitation to speak at an event, but online it means losing access to social networks like Facebook and Twitter, or online payment systems like Paypal and Patreon.

  4. It’s Hard to Gross Out a Libertarian: Jonathan Haidt on How Our Tolerance for Disgust Determines Our Politics.
  5. Robert Nozick, Anarchy, State, and Utopia, part 3.
  6. Robert Nozick, Anarchy, State, and Utopia, part 3.