
Blocking vs. Banning on Social Networks

by Hogeye Bill


Jan 5, 2019


Banning is not aggression; not a free speech issue

Banning someone from one’s own property is not aggression, and it is well within the rights of any legitimate owner. Thus Facebook, Twitter, and Patreon have the right to ban anyone they wish. Here I intend to examine the meta-issue: What policies should social network firms adopt to produce the best-quality network? But that is the question from the shareholder’s and CEO’s point of view. From a customer-user’s perspective, the questions are: What type of social network should I use? Which ones should I join and patronize, and which (as the case may be) should I boycott?

Open speech or filtered speech; “best of all possible worlds”

The answer to the best-quality social network question depends on what one wants out of a social network. Do you want open and candid speech, or do you prefer a narrower range of speech, filtering out things such as overt racism and hate speech? Let’s call this open speech versus filtered speech. (Again, we emphasize that we are not discussing “free speech” here.)

There are three major ways to filter messages and posts: 1) by redacting words and phrases, 2) by deleting whole messages, and 3) by deleting users. This final, most extreme method is what we refer to as “blocking.” Blocking is a filter that rejects all posts from users who, in their messages, have used some element of a set F of forbidden words and ideas. Normal one-on-one individual blocking is a special case of this; presumably the blocked user offended the blocker somehow. Even though a block could be one-way - an “ignore” feature - I assume here that blocks are two-way and symmetric: the blocker cannot see the blockee’s messages, and neither can the blockee see the blocker’s messages.[1]
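To make the two-way block concrete, here is a minimal sketch in Python. All of the names (Message, Forum, block, feed_for) are invented for illustration; no real platform’s API is implied. Storing each block as an unordered pair is exactly what makes it symmetric:

    from dataclasses import dataclass

    @dataclass
    class Message:
        author: str
        text: str

    class Forum:
        def __init__(self):
            # Each block is an unordered pair of users, so blocking is
            # symmetric: neither party sees the other's messages.
            self.blocks = set()

        def block(self, user_a, user_b):
            self.blocks.add(frozenset((user_a, user_b)))

        def feed_for(self, reader, messages):
            # Drop any message whose author is paired with the reader.
            return [m for m in messages
                    if frozenset((reader, m.author)) not in self.blocks]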

Let us now digress to a fanciful thought experiment. Imagine the best of all possible worlds. Think of a planet, a setting, people you’d like to be with, the best norms and customs you can think of - your own private utopia. But there’s a catch! All other people in your ‘best of all possible worlds’ have the same options as you - to dream up what they think is the best possible world. So forget being king or queen of a bunch of sex slaves. The other people would simply think of the same world minus you, and your world vanishes. Now, the question put forward by Robert Nozick was: Would there be any stable “associations” (worlds) and, if so, what would they be like?

Utopia is a meta-utopia: the environment in which Utopian experiments may be tried out; the environment in which people are free to do their own thing; the environment which must, to a great extent, be realized first if more particular Utopian visions are to be realized stably. - Robert Nozick[2]

Robert Nozick only envisioned what we would call user blocking, but his thought experiment can serve as an idealized model for social interaction. Is everyone’s ability to block anyone enough? Or are there some social dangers so great that a central authority must step in to protect “society” from contamination by bad or harmful speech of some sort? Many open speech forums do currently exist - forums where the only content filtering is done by users. Are these the best forums? If not, why not?

Levels of Filtering

We can filter by word, message, or user. This seems to be the intuitive order of intensity of interference or intolerance. A word (or phrase) based filter replaces an offending word with e.g. dashes or with a replacement word. The Mind If I Do a J forum was an open speech forum except for one thing - it replaced “nigger” with “sockpuppet” in all messages. I asked about it, and an admin told me that they used to have a vandal who used the n-word too frequently. That was their easy fix. A message-based filter would delete the whole message from the forum, rather than “fixing” a word. The most serious response is a user block, filtering out all of the user’s messages, deplatforming[3] that user.
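As a sketch of these three intensities, reusing the Message record from the earlier example (FORBIDDEN is a stand-in word list; a real forum or user would supply its own):

    FORBIDDEN = {"slur1", "slur2"}

    def words_of(text):
        return set(text.lower().split())

    def filter_by_word(text, replacement="----"):
        # Level 1: redact/replace offending words, keep the message.
        return " ".join(replacement if w.lower() in FORBIDDEN else w
                        for w in text.split())

    def filter_by_message(messages):
        # Level 2: delete whole messages containing a forbidden word.
        return [m for m in messages if not (FORBIDDEN & words_of(m.text))]

    def filter_by_user(messages):
        # Level 3: drop every message from any user who has *ever* used
        # a forbidden word - the most extreme level, deplatforming.
        offenders = {m.author for m in messages
                     if FORBIDDEN & words_of(m.text)}
        return [m for m in messages if m.author not in offenders]

Note how crude the word-level split is - punctuation defeats it - which is one reason real filters are more elaborate.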

The merit of blocking may depend a lot on who gets to choose who is blocked. I favor a decentralized approach to any filtering mechanism; in short: let the user decide. This might be called the individualist or thick libertarian approach. People who value liberty and non-aggression also tend to favor tolerance and diversity of speech, and a rough and hearty free market of ideas. It is certainly possible to be a libertarian and also want to block hate speech and racial slurs from your correspondence, but most libertarians would “take the risk” of being offended for the prospect of more open and diverse encounters. Furthermore, there is evidence that most libertarians simply do not get offended or disgusted easily.[4]

Individual blocking has the advantages that 1) it respects individual autonomy, and 2) it allows maximum consensual communication. Centralized banning and blocking 1) maintains central planning and control, and 2) suppresses politically incorrect or socially disapproved communication. Large social network firms that depend on advertising revenue or on government concessions and favoritism may have a major incentive to keep centralized control. Perhaps political machinations, such as influence on coming elections and future economic policy, make such authoritarian social manipulation valuable to a large crony corporation. To an individual user, however, it looks quite different. To almost everybody, open speech, even if restricted to in-groups, is better than outside control.

We have the following levels of intensity for filtering.

  I. Open Speech - no filtering at all
  II. User-Filtered Speech
    a. by word/phrase - redacted or replaced words
    b. by message - offending messages hidden/deleted
    c. by user - offensive users blocked
  III. Authority-Filtered Speech
    a. by word/phrase - redacted or replaced words
    b. by message - offending messages hidden/deleted
    c. by user - offensive users blocked

My personal position is that open speech is generally best. Speech filtered by users is okay for restrained use, but centralized filtering is something I would avoid and probably boycott. User blocking by individuals (IIc) is admittedly useful, e.g. against people who are unable to make cogent arguments, and especially against those who prefer slurs to debate. Also, a user block is perhaps the best response to a dingleberry - that is, someone who follows your posts with (often multiple) garbage posts. It is this stalking/dingleberry problem that makes one-way blocks inadequate.

Possible augmentation: Group auto-blocking

But what of my fragile friends on the progressive left, the social justice warriors and allies of victimized minorities? And what of my emotivist friends, who cringe at CSI reruns due to the extreme violence portrayed? What can progressives and emos do to protect themselves from mean, hateful people and the perverted words they spew? And what about political activists, who lead boycotts and shunnings? How does this help people who want to boycott Roseanne Barr, Home Depot, or Chick-fil-A? Currently, many of these people kind of like central authority, since it provides more choke points for boycott.

I candidly disclose that I prefer open speech. For those who do not, I submit that anything you can do by using central control, you can also do by other means. In particular, we are talking about blocking users. Yes, a central authority can do that for you. But will you always have its ear and its support? If you think so, I suggest that you are being short-sighted. Things change and shit happens. If all you have is “the master is on my side,” then you have little indeed.

What does centralized blocking accomplish that individual user blocking does not? Answer: automatic blocking of “known” unwanted users. Centralized blocking is, at bottom, a time-saving device for users. It saves them from having to block offenders one at a time, repeating blocks that all their friends have already made, one at a time. What a massive amount of redundant, avoidable work! People in your affinity group should not have to block people individually when they are already known bad guys.

Let me offer an alternative - call it group auto-blocking. Suppose your social network group - call it the Progressive Allies of Victims, the PAV - could vote to provide everyone in the group a filter that blocks any user who has used a word in F in a post, where F = {a list of dirty words and derogatory terms}.

Suppose that, by simply joining the PAV group, you are opting into the F filter. Suddenly, you do not need to see or report people who use hate terms. They are (usually) automatically filtered for you by your membership in PAV.
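Here is a minimal sketch of group auto-blocking in Python, continuing the earlier examples. The PAV group and its word set F come from the hypothetical above; the Group class and group_feed_for function are invented for illustration:

    class Group:
        def __init__(self, name, forbidden_words):
            self.name = name
            self.F = set(forbidden_words)   # the group's voted-on word list
            self.members = set()

    def group_feed_for(user, groups, messages):
        # Joining a group opts you into its filter: take the union of the
        # F-sets of every group the user belongs to...
        active_F = set()
        for g in groups:
            if user in g.members:
                active_F |= g.F
        # ...then block every user who has used any word in that union.
        offenders = {m.author for m in messages
                     if active_F & set(m.text.lower().split())}
        return [m for m in messages if m.author not in offenders]

    pav = Group("Progressive Allies of Victims", {"slur1", "slur2"})
    pav.members.add("alice")   # alice now gets PAV's auto-block for free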

Possible auto-block group criteria are: no cussing, no derogatory terms, no sexual content, no violence, and no hate speech. Then there are likely political faction group filters, such as no pro-Trump messages/users, no anti-Trump messages/users, or even no Trump at all. One could replace “Trump” in the above with anarcho-capitalism, anarcho-communism, socialism, alt-right, antifa, or whatever.

Another consideration is whether any given filter is opt-in or opt-out. Perhaps a social network firm, or a group implementing an auto-block, could have one or more filters on by default but allow people to opt out easily if they choose. If all centralized control were done on an opt-out basis (which it isn’t, due to terms of service), then so long as opting out is known and easy, one might not have any significant objection to it. Some groups, such as our hypothetical PAV group, can probably assume their members don’t like certain epithets. Other groups might offer their filters on an opt-in basis, assuming that the user wants maximum openness until otherwise indicated.
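In code, the opt-in/opt-out distinction might reduce to a single default flag plus per-user overrides - a sketch, with invented names:

    class GroupFilter:
        def __init__(self, default_on):
            self.default_on = default_on  # True = opt-out style, False = opt-in
            self.overrides = {}           # user -> True/False

        def active_for(self, user):
            # The group default applies unless the user has overridden it.
            return self.overrides.get(user, self.default_on)

        def opt_out(self, user):
            self.overrides[user] = False

        def opt_in(self, user):
            self.overrides[user] = True

    pav_filter = GroupFilter(default_on=True)  # PAV assumes members want it
    pav_filter.opt_out("bob")                  # but bob can easily turn it off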

What is likely to happen if group auto-blocking were to replace centralized blocking? Would social cyberspace splinter into tribalist info-bubbles incapable of communication with out-groups? I don’t think so, and here’s why: Most people are not as insular, not as identitarian or tribal, as progressives on the one hand or alt-righters on the other. Most of us deal in ideas, and want to hear ideas. Most of us know that, as John Stuart Mill said, if you know only your own side of an issue, you know very little at all. Ideas matter - not merely in-groups and out-groups. Thus, I expect to see many different online communities, with diverse filtering patterns as well as diverse ideas. Some will be sustainable, at least for a while, and some will not.

People try out living in various communities, and they leave or slightly modify the ones they don’t like (find defective). Some communities will be abandoned, others will struggle along, others will split, others will flourish, gain members, and be duplicated elsewhere. Each community must win and hold the voluntary adherence of its members. No pattern is imposed on everyone, and the result will be one pattern if and only if everyone voluntarily chooses to live in accordance with that pattern of community. - Robert Nozick [5]

Objections and Concerns

There are several objections that can be offered to the group auto-block idea.

From the perspective of a social network service, failure to ban could repel new customers. Regular users might figure out the filtering techniques, but virtually all progressive and emotivist new users will be offended, and possibly leave before they make friends. Default opt-out filters may remedy this to a large degree.

From the perspective of a “snowflake” who feels ideas can injure, calling for authorities to step in is natural: “I might see something that oppresses me.” This victimhood culture seeks safe spaces and protection from micro-aggressions and hurtful ideas. Despite their benign-sounding motivations, such people are natural-born authoritarians.

Activist censors will complain, “We can’t suppress enemy tribe members anymore.” Activists who have succeeded in the past in suppressing enemies, getting advertising revenue cut off, getting podcasters deplatformed, and so on, would like to keep their power of influence.

Would group auto-blocking cause excessive tribalism? Would it make identity politics even more cut-throat? Would it further divide people into information bubbles? Would it make it even harder for people from different political “tribes,” speaking different political languages, to communicate?

Advantages

There are also several advantages to individual and group filtering over centralized control.

First of all, there is likely to be more open communication. This follows simply from not having a central authority doing the filtering. From the perspective of users who favor open speech, this is great!

The social network service also gains an advantage: there is no (or at least less) need to conduct surveillance and oversight. There is also reduced liability under current law when there is no pretense of curation or oversight.

The quality of debate is likely to improve, due to, e.g., selective blocking of non-debaters in debate-oriented forums, group blocking of trolls, affinity groupings, and other cooperative incentives.

Posts would no longer be subject to the lowest common denominator of the most fragile users. Fragile users could have their own “safe” groups, rather than appealing to authorities to restrict bigots. They would still be free to adjust their own experience, but without affecting unwilling others.

Conclusion

Social networks can be open speech forums, and some are. This arrangement is conducive to liberty and the pursuit of truth and knowledge. I hope I have persuaded those who are concerned with inappropriate speech that a central authority is unnecessary for filtering out hateful language - that group auto-blocking would work just as well, with control remaining with individuals and groups.

It goes without saying that any persons may attempt to unite kindred spirits, but, whatever their hopes and longings, none have the right to impose their vision of unity upon the rest. - Robert Nozick [6]

Shouldn’t we let human liberty play out, and see what happens? Do you trust liberty or central authority - the banter of the market or the bark of a sergeant? I think people will work it out, however you or I think it should be. What we can do is provide the conditions of liberty that allow human creativity to flourish.


Notes

  1. Under normal circumstances the blocker cannot see the blocked person’s posts; however, in some services a group administrator or moderator can still see posts from people he has blocked.
  2. Robert Nozick, Anarchy, State, and Utopia, part 3.
  3. deplatform - to cancel or disinvite someone from a communications platform. Originally this meant losing an invitation to speak at an event, but online it means losing access to social networks like Facebook and Twitter, or to online payment systems like PayPal and Patreon.

  4. “It’s Hard to Gross Out a Libertarian: Jonathan Haidt on How Our Tolerance for Disgust Determines Our Politics.”
  5. Robert Nozick, Anarchy, State, and Utopia, part 3.
  6. Robert Nozick, Anarchy, State, and Utopia, part 3.


Feedback from Jacob Peets:

I just read your linked article, Bill Orton, and I like and agree with it much more than I expected to. I like the distinctions that make clear you’re discussing the pros and cons of different discussion systems and not legal rights, and I like the proposal of group filters as opposed to a singular system-wide filter.

I was reading along and, reaching your discussion of the costs of centralized filters, immediately pointed out in my head that they still carry the benefit of reducing costs and saving time; there is less labor for individuals than with an individual-level block feature. The next paragraph immediately addressed this, and I very much like it when an author discusses the counterpoints that I think of in the way you did. Then, continuing to read, I quickly drew an analogy with a system I’d heard of called “liquid democracy” that could achieve the benefits of centralized systems without the costs you point out: users could simply delegate some ability to filter messages and/or block users to other individuals or groups, or they could choose from some sort of list of different algorithms that would do this for them. You bring up this exact idea, sans terminology; I only had to keep reading to the next paragraph and my internal objections and suggestions were discussed.

I would add a couple of ideas of my own to the discussion:

1) People could use machine learning and other algorithmic means to automate the process of filtering out trolls and offensive content, and then different algorithmic systems could be described to users for them to choose from. People would not need to look at each piece of content themselves, nor would they have to rely on relatively crude keyword filters; they could create more effective algorithms, and then users could choose, not just from different groups, but from different algorithms, based on knowledge of what those algorithms objectively did. That’s sort of touched on in your article, but you don’t quite explicitly distinguish between choosing among algorithms and choosing among groups, unless I missed it.

2) People want different things from different contexts, environments, and social circles. The same people could set aside a small amount of time for political debate, for instance, and want to use a relatively “open” or “risky” discussion system for those debates, while the rest of the time they may not want the work of worrying about people using slurs or saying offensive things. I like to think I’m willing to try to reason with those I disagree with, but I do think it’s possible to spend an inordinate, and unhealthy, proportion of our time doing so, and I’m sure that I personally am especially inclined toward obsessing over discussions out of proportion to what I’m obtaining from them.

“Snowflakes” (and I think of myself as one, though perhaps somewhat ironically) don’t just spend all our time worrying about being offended and hiding out in safe spaces. The whole point is to *not* waste time interacting with hurtful, offensive, or abrasive individuals. While some time and energy can productively be devoted to debates, if one’s entire life becomes one constant debate, it can become exhausting for many, and so it’s helpful to divide up our social spaces into some where useful debate can take place between reasonable people, and others where we don’t have to worry about dealing with such issues - where we can just hang out and talk about our favorite fiction books or some other relatively non-stressful topic.

Trying to filter out offensive posts and people is not necessarily a sign of authoritarianism or weakness, as your article seems to indicate; sometimes it’s just because our energy and time are scarce and valuable to us. While under some special circumstances we may pay the cost of interacting with offensive people - in that minority of cases when the benefits outweigh the costs - much of the time it’s not worth it. What’s more, some people have a relationship with society that places them in draining contexts relatively more often than others. Racial minorities, LGBT people, atheists, political minorities, and so forth all tend to face a moment-by-moment barrage of offensive this-and-that from various sides, and for some to spend more time in in-groups, or insulated in various ways, than others doesn’t necessarily demonstrate that they are more easily offended because of being more emotional or because of anything inherent in their heads; it might just mean that society happens to beat about their heads and shoulders more often than it does other people’s. They can spend a lot of time trying to tell society to lay off, but at some point it’s simply more effective to create counter-cultures and withdraw as much as possible from mainstream society, only interacting with offensive people when necessary (e.g. at work, at school, or with family members), and otherwise retreating to “safe spaces” that they and others have built to get some rest and reprieve.

Anyway, as far as your article goes, my point is that you seem to divide *people* up into different groups, based on whether they are more inherently inclined to be offended or to want more insular social spheres, when I think it is oftentimes a) dependent on the characteristics of a given society and b) dependent on time, context, and circumstance. Some societies will be less awful to some people, and so those people will seek out insular spaces less often and less desperately; and sometimes the same people will split their time among social spheres and spaces, using more or less insular or filtered communities for different purposes, so that the same people may attend a discussion in a more “open” space at one point during the day, then spend time in less “open” spaces at other times. I’m sure that some difference lies in individuals, so that some are more disposed toward “open” spaces than others, but when you say:

“Most people are not as insular, not as identitarian or tribal, as progressives on the one hand or alt-righters on the other. Most of us deal in ideas, and want to hear ideas. Most of us know that, as John Stuart Mill said, if you know only your own side of an issue, you know very little at all. Ideas matter - not merely in-groups and out-groups.”

you seem to be missing the context and societal elements of the equation, and seeing only the inherent-individual-difference variable.


Hogeye Bill:

Excellent and thought-provoking response. The point about modality - that some may want to debate at some times and veg out at other times - is very good, as is the idea of choosing algorithms that one can turn on or off depending on mood and purpose. Thanks, Jacob.
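As a closing sketch of that algorithm-choice idea (all names invented; the keyword filter reuses the FORBIDDEN list from the earlier sketches, standing in for a fancier, even machine-learned, algorithm), a user could simply map moods or contexts to filter functions and switch among them:

    # Sketch: user-selectable filter algorithms, switchable by mood/context.
    def open_speech(messages):
        return messages                      # no filtering at all

    def keyword_filter(messages):
        # Crude stand-in for a more sophisticated algorithm.
        return [m for m in messages
                if not (FORBIDDEN & set(m.text.lower().split()))]

    ALGORITHMS = {"debate": open_speech, "veg-out": keyword_filter}

    class UserSettings:
        def __init__(self, mode="debate"):
            self.mode = mode                 # change this as mood shifts

        def feed(self, messages):
            return ALGORITHMS[self.mode](messages)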
