
6. The Psychology of Authority

6.1 The relevance of psychology
    6.1.1 Is this book dangerous?
    6.1.2 The appeal to popular opinion
6.2 The Milgram experiments
    6.2.1 Setup
    6.2.2 Predictions
    6.2.3 Results
    6.2.4 The dangers of obedience
    6.2.5 The unreliability of opinions about authority
6.3 Cognitive dissonance
6.4 Social proof and status quo bias
6.5 The power of political aesthetics
    6.5.1 Symbols
    6.5.2 Rituals
    6.5.3 Authoritative language
6.6 Stockholm Syndrome and the charisma of power
    6.6.1 The phenomenon of Stockholm Syndrome
    6.6.2 Why does Stockholm Syndrome occur?
    6.6.3 When does Stockholm Syndrome occur?
    6.6.4 Are ordinary citizens prone to Stockholm Syndrome?
6.7 Case studies in the abuse of power
    6.7.1 My Lai revisited
    6.7.2 The Stanford Prison Experiment
    6.7.3 Lessons of the Stanford Prison Experiment
6.8 Conclusion: anatomy of an illusion

6.1 The relevance of psychology

In this chapter, I review some evidence from psychology and history, both about the attitudes and behavior of those who are subject to others' (alleged) authority and about the attitudes and behavior of those who are in positions of authority. These findings are fascinating in their own right. They also bear, in at least two important ways, on the skepticism about political authority defended in this book. For one thing, psychological data bear on the question of how much trust we should place in our intuitions about authority. For another, psychological data bear on the question of how desirable or harmful it may be to encourage skepticism about authority. In the present chapter, when I speak of 'authorities' and 'authority figures', I mean people and institutions that are socially recognized as having authority, whether or not they have genuine authority in a normative sense. 'Positions of authority' and 'institutions of authority' should be understood similarly.

6.1.1 Is this book dangerous?

Some defenders of authority have openly worried about the consequences that might result from anarchistic ideas. If ideas like those I advance in this book were to take hold in our society, they warn, there would be much more disobedience to the government.[1] This disobedience, in turn, might lead the state to become more violent and oppressive.[2] Or, as Plato and Hume warned, it might lead to a general collapse of social order.[3] Books such as this one, if not opposed sufficiently vigorously by other philosophers, might ultimately contribute to such outcomes. This would not bear directly on whether this book is correct in its central contentions (a correct thesis may be undesirable to advance), but it bears directly on the interesting question of whether perhaps this book is bad and ought not to have been written. I will address this concern in later sections of this chapter, after reviewing some important psychological findings.

6.1.2 The appeal to popular opinion

Other advocates of political authority suggest that anarchism should be rejected because it is simply too far out of the mainstream of political opinion. The belief in political obligations, writes George Klosko, 'is a basic feature of our political consciousness'.[4] He believes that we should accept common opinions as prima facie evidence in normative matters, particularly when philosophical opinion is divided. David Hume goes farther: 'The general opinion of mankind has some authority in all cases; but in this of morals 'tis perfectly infallible.'[5] If there is no political authority, it is natural to ask, then how have so many people come to have such a firm belief in it? Is it not more likely that I and the handful of other anarchists have made a mistake than that almost everyone else in the world has?

Ultimately, I disagree with that argument. All things considered, I think it more likely that others are mistaken than that I am. (Obviously, I would not hold a belief that I myself did not consider more likely true than false.) Nevertheless, the argument should not be rejected hastily or for the wrong reasons. To give the argument a fair treatment, I pause here to defend the appeal to popular opinion against overly facile objections.

There are some who reject appeals to popular opinion, in principle, as fallacious (supposedly the fallacy of argumentum ad populum). But exactly what is supposed to be fallacious about appealing to popular opinions? The most commonly cited illustration of the alleged error is the case of Christopher Columbus; when Columbus wanted to sail around the world, it is said, his contemporaries laughed at him because they were all convinced that the earth was flat. But Columbus turned out to be right. And this, you see, shows why it is foolish to trust in the opinions of the majority.

As a historical note, the preceding account is completely inaccurate. It was Columbus who was wrong and those who 'laughed at him' who were right in the chief point of dispute. The idea that Columbus's contemporaries thought the earth was flat is a modern myth. The ancient Greeks discovered that the earth was round, and this knowledge was never lost.[6] The actual point of dispute concerned the westward distance from Europe to Asia. Columbus thought this distance small enough to sail in the kind of ships that then existed; his contemporaries thought otherwise. They were right and he was wrong: the actual distance is about four times greater than Columbus thought it was. If not for the unexpected discovery of the Caribbean Islands, Columbus and his crew would have starved to death at sea long before they came anywhere near Asia.

But that is just a side note of historical interest. There certainly are cases in which large majorities of people hold mistaken beliefs. Indeed, the false beliefs that most modern people hold about Columbus and his contemporaries are a case in point. But what interesting conclusion follows from this observation? Let us consider three conclusions one might draw.

First, perhaps the Columbus example (or some other, more genuine example of popular error) is meant to show that the existence of a very widespread belief does not provide conclusive proof for the thing that the majority believes, since there are some cases in which such beliefs are false. This is obviously correct. It is also utterly uninteresting. A belief-forming method need not be infallible to be useful or rational. All or nearly all belief-forming methods are fallible, including sensory observation and scientific reasoning. This does not show that we ought to eschew observation, science, and almost everything else as 'fallacious'.

Second, perhaps the Columbus argument is meant to show that the existence of a very widespread belief does not provide any evidence at all for the thing that the majority believes. This conclusion is much more interesting. It is also obviously unwarranted. The existence of a single error or even many errors produced by a given information source does not show that source to be completely evidentially worthless. To argue that popular opinion is evidentially worthless, one would have to argue that popular opinion does no better than chance - in other words, that very widely held beliefs are correct no more often than propositions drawn at random out of a hat. But the latter claim is obviously false.

Consider now the range of cases in which a small minority of people disagrees with the majority. A small minority of people today thinks that the earth is flat, that the moon landings were faked, or that perpetual motion machines are possible; the majority disagrees. A few people believe themselves to be Jesus or Napoleon or a superhero, while all those around them disagree. In all of these cases, the majority is right, and the minority is wrong. Sometimes, in a science or mathematics class, all the students but one get the same answer to a particular problem. Sometimes, in a group of several people, all of whom witnessed some event, one person remembers the event differently from the others. In nearly all of these cases, again, the majority is right - the lone dissenter has miscalculated or misremembered. The explanation is a simple matter of probability: for the majority to be wrong, the same cognitive malfunction or cognitive malfunctions producing the same result must have occurred many times in different brains. For one person to be wrong, a cognitive malfunction need only have occurred once. The latter is generally more likely.
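
To put a rough number on this reasoning, consider a toy calculation; the error rate and the independence assumption here are illustrative, not figures drawn from any of the cases above. Suppose each person independently misjudges a simple question of this kind with probability q = 0.05. Then

\[
\Pr(\text{the lone dissenter has erred}) = q = 0.05,
\qquad
\Pr(\text{seven others have all erred in the same way}) \le q^{7} \approx 8 \times 10^{-10}.
\]

Before any further evidence comes in, then, the hypothesis that the one has malfunctioned is favored over the hypothesis that the many have by many orders of magnitude, even allowing generously for some correlation among errors.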

Third and finally, perhaps the Columbus argument is meant to show that an appeal to popular belief does not provide strong evidence for the thing that the majority believes. It might be argued that popular opinion, while more reliable than random guesses, is nevertheless quite unreliable.

But how are we meant to infer this? One possibility is that we are supposed to see the Christopher Columbus example and perhaps a few similar cases as constituting a large, random sample of popular beliefs in which a large percentage (100%) turn out to be false. This would provide serious evidence of unreliability. Another possibility is that we are supposed to simply recognize, on the basis of ordinary background knowledge and experience, that the Columbus example is a typical case of a dissenter against a majority view. But neither of these possibilities is easy to take seriously. The 'sample' of popular beliefs offered in this sort of argument typically contains only a few cases, and the sampling method is something closer to 'deliberate selection of cases with the desired trait' than 'random selection'. As to typicalness, isn't the lone conspiracy nut in the office who insists that '9/11 was an inside job' and that the U.S. government created AIDS a more typical example of a dissenter against a large majority opinion than Christopher Columbus? Based on everyday experience, how many '9/11 truthers' are there for every one Columbus?[7]

Once we see why popular opinion is to some degree evidentially relevant, it is difficult not to conclude that it is often highly relevant. Again, it is typically less likely that some cognitive malfunction occurs multiple times than that it occurs once. If that is so, then it is typically much less likely that a malfunction occurs a great many times than that it occurs a small number of times. (The principle is formalized in the Condorcet Jury Theorem.)[8] I will discuss possible exceptions to this rule below.
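
For readers who want the formal statement, here is the standard version of the theorem; the independence and competence conditions are the usual idealizing assumptions. If n voters (with n odd) each judge a yes/no question correctly with independent probability p > 1/2, then the probability that the majority judges correctly is

\[
P_n \;=\; \sum_{k=\lfloor n/2 \rfloor + 1}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k},
\]

which increases with n and approaches 1 as n grows (and approaches 0 if p < 1/2). The independence assumption is the crucial idealization: the possible exceptions discussed below are, in effect, cases in which a common distorting influence makes many people's errors correlated, so that widespread agreement adds much less evidence than the theorem suggests.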

Appeal to popular opinion, then, is not in general fallacious. As a rule, very firm and widespread beliefs should not be lightly set aside. Considerable reflection is thus required before we set aside the common belief in political authority. We must carefully examine the most prominent and promising theories of the source of authority, as discussed in Chapters 2 through 5 above. We must also examine the likely sources of beliefs about political authority, as in the present chapter.

6.2 The Milgram experiments

6.2.1 Setup

Perhaps the most famous psychological study of obedience and authority is the one conducted by Stanley Milgram at Yale University in the 1960s.[9] Milgram gathered volunteers to participate, supposedly, in a study of memory. When each subject arrived at the laboratory, he was paid $4.50 (then a reasonable payment), which he was told was his to keep just for showing up. Another 'volunteer' (actually a confederate of the experimenter) was already present. The experimenter (actually a high school teacher whom Milgram had hired to play the role) informed both of them that they would participate in a study of the effects of punishment on learning. One of them would be designated as the 'teacher' and the other as the 'learner'. Through a rigged drawing, the naive subject was selected as the teacher, and the confederate as the learner.

The experimenter explained that the teacher would read pairs of words to the learner, who would attempt to remember which word was associated with which other word. The teacher would then quiz the learner. Each time the learner gave a wrong answer, the teacher was to administer an electric shock, through an impressive-looking shock generator. With each wrong answer, the shocks would increase in intensity, starting with a 15-volt shock and increasing by 15 volts each time. The experimenter gave the teacher a sample 45-volt shock to show what it was like (and to convince subjects of the authenticity of the shock generator). The learner mentioned that he had a slight heart condition and asked whether the experiment was safe. The experimenter assured him that while the shocks might be painful, they were not dangerous. The learner was strapped into a chair in another room with an electrode attached to his wrist, supposedly connected to the shock generator.

On a fixed schedule, the learner would make mistakes, leading to increasingly severe shocks. The switches on the shock generator were labeled from 15 volts all the way up to 450 volts, along with qualitative labels ranging from 'Slight Shock' up to 'Danger: Severe Shock', followed by an ominous 'XXX' under the last two switches. Each time the learner made a mistake, the teacher was supposed to use the next switch on the shock generator. At 75 volts, the learner began grunting in pain. At 120 volts, he shouted to the experimenter that the shocks were becoming painful. At 150 volts, the learner complained that his heart was bothering him and demanded to be released. Cries of this sort continued, up to an agonized scream at 270 volts. At 300 volts, the victim refused to provide any further answers to the memory test. The experimenter instructed the teacher to treat a failure to answer as a wrong answer and to continue administering shocks. The victim continued to scream and insist that he was no longer a participant, complaining about his heart again at 330 volts. After 330 volts, however, nothing further was heard from the learner. When the teacher reached the final, 450-volt switch on the shock generator, the experimenter would instruct the teacher to continue using the 450-volt switch. After the teacher had administered the 450-volt shock three times, the experiment was ended.

If at any point in this process the teacher expressed reluctance to continue, the experimenter would prod the teacher with 'Please continue.' If the subject repeatedly expressed reluctance, the experimenter would prompt him with 'The experiment requires that you continue', then 'It is absolutely essential that you continue', and finally 'You have no other choice. You must go on.' If the subject still resisted after the fourth prod, the experiment was discontinued.

6.2.2 Predictions

The learner, of course, did not truly receive the electric shocks. The real purpose was to determine how far subjects would be willing to obey the experimenter. If you are not already familiar with the experiment, it is worth taking a moment to reflect, first, on what you think would be the right way for the teacher to behave, and second, what you think most people would in fact do.

Postexperimental interviews established that subjects were convinced that the situation was what it appeared to be and that the learner was receiving extremely painful electric shocks. Given this, a teacher clearly ought not to continue administering shocks after the learner demands to be released. To do so would have been a serious violation of the victim's human rights. At some point, the experiment would have amounted to torture and then murder. While the experimenter has some right to direct the conduct of his experiment, no one would say he has the right to order torture and murder.

What would you have done if you had been a subject in the experiment? Milgram described the experiment to students, psychiatrists, and ordinary adults and asked them to predict both how they themselves would behave if they were in the experiment and how most other people would behave.[10] Of 110 respondents, every one said that they would defy the experimenter at some point, explaining their reasons in terms of compassion, empathy, and principles of justice. Most thought they would refuse to continue beyond the 150-volt shock (when the learner first demands to be released), and no one saw themselves going beyond 300 volts (when the learner refuses to answer). Their predictions of others' behavior were only slightly less optimistic: respondents expected that only a pathological fringe of 1-2 percent of the population would proceed all the way to 450 volts. The psychiatrists Milgram surveyed thought that only one experimental subject in a thousand would proceed to the end of the shock board.

6.2.3 Results

Milgram's experiment shows something surprising, not only about our dispositions to obey but also about our self-understanding. The predictions of psychiatrists, students, and lay people fell shockingly far from reality. In the actual experiment, 65 percent of subjects complied fully, eventually administering the 450-volt shock three times to a silent and apparently lifeless victim. Most subjects protested and showed obvious signs of anxiety and reluctance - but ultimately, they did what they were told.

Milgram followed up the experiment with mailed surveys to participants. Despite the stress involved in the experiment, virtually no one regretted participating. Those who hear of the experimental design, without having participated, usually think, 'People will not do it', and then, 'If they do it they will not be able to live with themselves afterward.' But in fact, Milgram reports, obedient subjects have no trouble living with themselves afterward, because these subjects by and large rationalize their behavior, after the fact, in the same way they rationalized it in the course of the experiment: they were just following orders.[11]

6.2.4 The dangers of obedience

What lessons can we draw from Milgram's results? One important lesson, the one most prominently advanced by Milgram himself, is that of the danger inherent in institutions of authority. Because most individuals are willing to go frighteningly far in satisfaction of the demands of authority figures, institutions that set up recognized authority figures have the potential to become engines of evil. Milgram draws the parallel to Nazi Germany. Adolf Hitler, working alone, could perhaps have murdered a few dozen or even a few hundred people. What enabled him to become one of history's greatest murderers was the socially recognized position of authority into which he maneuvered himself and the unquestioning obedience rendered him by millions of German subjects. Just as none of Milgram's subjects would have decided on their own to go out and electrocute anyone, very few Germans would have decided, on their own, to go out murdering Jews. Respect for authority was Hitler's key weapon. The same is true of all of the greatest man-made evils. No one has ever managed, working alone, to kill over a million people. Nor has anyone ever arranged such an evil by appealing to the profit motive, pure self-interest, or moral suasion to secure the cooperation of others - except by relying on institutions of political authority. With the help of such institutions, many such crimes have been carried out, accounting for tens of millions of deaths, along with many more ruined lives.

It is possible that such institutions also serve crucial social functions and forestall other enormous evils. Even so, in light of the empirical facts, we must ask whether humans have too strong a disposition to obey authority figures. This brings us to a closely related lesson suggested by Milgram's results: most people's disposition to obey authorities is far stronger than one would have thought at first glance - and far stronger than one could possibly think justified.

6.2.5 The unreliability of opinions about authority

Another interesting lesson is this: the experience of being subjected to an authority has a distorting influence on one's moral perceptions. Everyone who hears about the experiment correctly perceives the moral imperative, at some point, to reject the experimenter's demands to continue shocking the victim. No rational person would think that complete obedience to the experimenter was appropriate. But once a person is in the situation, he begins to feel the force of the experimenter's demands. When Milgram asked one obedient subject why he did not break off the experiment, the subject replied, 'I tried to, but he [indicating the experimenter] wouldn't let me.' The experimenter in fact exercised no force to compel subjects to continue - yet subjects felt compelled. By what? By the sheer authority of the experimenter. Once a person has been subjected to this authority and has obeyed, the distortion of ethical perception often continues. The subject continues to find his actions justifiable or excusable on the grounds that he was just following orders - even though no one outside the experiment would agree.

The parallel to Nazi Germany again asserts itself. While almost all outside observers condemn the actions of the Nazis (and not just those of Adolf Hitler, who gave the ultimate orders), Nazi officers famously defended themselves with the appeal to superior orders. Was this simply an insincere ploy to escape punishment? Probably not; like Milgram's subjects, the officers probably felt that they had to obey orders. In Hannah Arendt's memorable description of the case, Adolf Eichmann thought he was doing his duty by obeying the law of Germany, which was inextricably tied to the will of the Führer; he would have felt guilty if he didn't follow both the letter and the spirit of Hitler's orders.[12] Even more clearly, average soldiers in the German army cannot be supposed so much more evil than typical non-Germans that they independently wanted to participate in a genocide. While anti-Semitism was rampant in Germany, it did not issue in widespread murder until the government ordered the killings. Only then did ordinary soldiers feel the killings to be justified or required.

History records many similar cases. During the Vietnam War, an American army unit carried out the massacre of hundreds of civilians at My Lai. In one of the most notorious war crimes in the nation's history, defenseless women, children, and old men were gathered together and shot en masse. Again, the soldiers involved pled that they were only following orders.[13] One soldier reportedly cried during the massacre yet continued firing.[14]

The widespread acceptance of political authority has been cited as evidence of the existence of (legitimate) political authority. The psychological and historical evidence undermines this appeal. The Nazis, the American soldiers at My Lai, and Milgram's subjects were clearly under no obligation of obedience - quite the contrary - and the orders they were given were clearly illegitimate. From outside these situations, we can see that. Yet when actually confronted by the demands of the authority figures, the individuals in these situations felt the need to obey. This tendency is very widespread among human beings. Now suppose, hypothetically, that all governments were illegitimate and that no one was obligated to obey their commands (except where the commands line up with preexisting moral requirements). The psychological and historical evidence cannot show whether this radical ethical hypothesis is true. But what the evidence does suggest is that if that hypothesis were true, it is quite likely that we would still by and large feel bound to obey our governments. That is likely, because even people who are subjected to the clearest examples of illegitimate power still typically feel bound to obey. And if we felt this requirement to obey, it is likely that this would lead us to think and say that we were obliged to obey and then - in the case of the more philosophically minded among us - to devise theories to explain why we have this obligation. Thus, the widespread belief in political authority does not provide strong evidence for the reality of political authority, since that belief can be explained as the product of systematic bias.

6.3 Cognitive dissonance

According to the widely accepted theory of cognitive dissonance, we experience an uncomfortable state, known as 'cognitive dissonance', when we have two or more cognitions that stand in conflict or tension with one another - and particularly when our behavior or other reactions appear to conflict with our self-image.[15] We then tend to alter our beliefs or reactions to reduce the dissonance. For instance, a person who sees himself as compassionate yet finds himself inflicting pain on others will experience cognitive dissonance. He might reduce this dissonance by ceasing to inflict pain, changing his image of himself, or adopting auxiliary beliefs to explain why a compassionate person may inflict pain in this situation.

Festinger and Carlsmith provided one of the classic illustrations of cognitive dissonance theory in an experiment conducted in the 1950s.[16] Subjects were made to perform a boring, repetitive task for an hour, which they thought was the core of the experiment they had volunteered for. At the end of the hour, one of three things happened. Subjects in the 'One Dollar' condition were paid a dollar to tell someone (supposedly another volunteer subject coming in) that the task had been fun and interesting. Subjects in the 'Twenty Dollars' condition were paid twenty dollars to say the same thing. Finally, subjects in the Control condition were not asked to say anything and did not say anything of the kind. Later, all three groups were interviewed regarding what they really thought of the repetitive task they had performed for an hour. Subjects in the Twenty Dollars condition had slightly more favorable views than those in the Control group - both of how enjoyable the task was and of how willing they would be to participate in a similar experiment in the future. Subjects in the One Dollar condition, however, had significantly more favorable views of these things than either the Control group or the Twenty Dollars group. Thus, paying the subjects more had resulted in a smaller change in their attitudes toward the task.[17]

Festinger and Carlsmith explain the results as follows. Most people do not generally think of themselves as liars. Therefore, if they found the task they had performed boring, yet they remember that they told someone it was enjoyable, they will experience cognitive dissonance. If the task wasn't enjoyable, why had they said it was? Subjects in the Twenty Dollars condition could readily explain this to themselves: they lied to get the money. But for subjects in the One Dollar condition, this explanation was less satisfying. Because one dollar is a small amount of money, it seemed less adequate as a motivation to lie.[18] Therefore, subjects in the One Dollar condition were under more pressure to believe that the task was actually enjoyable.

In another experiment, volunteers were gathered to participate in a discussion group on sexual psychology.[19] Each volunteer was subjected to one of three conditions: subjects in the Mild condition underwent a slightly embarrassing initiation requirement to join the group (they had to read some sexual but not obscene words out loud). Subjects in the Severe condition underwent a highly embarrassing initiation requirement (they had to read obscene words out loud, followed by pornographic passages). Subjects in the Control condition had no initiation requirement. All subjects then listened to a recording, supposedly of a group discussion in progress. The discussion had been intentionally designed to be as boring and worthless as possible. Subjects were then asked to evaluate the discussion group. Intuitively, one might expect that the embarrassing initiation would have left those in the Severe condition with negative feelings, leading to harsher evaluations of the discussion group. In fact, subjects from the Severe condition had significantly higher opinions of the discussion group than those in either the Control or the Mild condition.[20]

These and other studies show that people tend to adjust their beliefs and values so as to make themselves and their own choices appear better.[21] The same was true of Milgram's subjects. Before participating in the experiment, almost no one would regard obedience in such a scenario as morally acceptable. But afterwards, many obedient subjects found their behavior acceptable.

This psychological principle generates a bias in favor of recognizing political authority. Almost all members of modern societies have frequently submitted to the demands of their governments, even when those demands required actions that they would otherwise be strongly disinclined to perform. For example, most have paid very large amounts of money to the state in satisfaction of its taxation demands. How do we explain to ourselves why we obey? We could explain our behavior by citing fear of punishment, habit, the drive toward social conformity, or a general emotional drive to obey whoever holds power. But none of those explanations is emotionally satisfying. Much more pleasing is the explanation that we obey because we are conscientious and caring citizens, and we thus make great sacrifices to do our duty and serve our society.[22] Philosophical accounts of political authority seem designed to bolster just that image.

One reason for doubting that view of our reasons for obedience is that highly conscientious and caring individuals might be expected to donate large amounts of money to famine relief organizations and other such (nongovernmental) charity groups. The arguments for an obligation to donate to such charities are far more compelling than the arguments for political obligation.[23] Yet for most people, extremely large 'sacrifices' are typically made only when they are commanded by an authority figure and those commands are backed up by a serious threat of punishment. Very few people voluntarily give to charity anything like the amount of money they give to the state.

But whether or not our behavior is motivated by compassion and a sense of duty, it is likely that we would generally wish to believe that it is. To believe this, we must accept a basic doctrine of political obligation, and we must accept the legitimacy of our government.

6.4 Social proof and status quo bias

'Social proof' is an ironic phrase meant to describe the persuasive effect on an individual of the expressed opinions of a group.[24] In one classic experiment, Solomon Asch gathered subjects for what they thought was a test of visual acuity.[25] Each subject was seated in a room with several other people, supposedly other subjects like himself. The group was shown a series of cards, each of which had a single, vertical line on the left (the 'standard line') and three comparison lines on the right. The subjects' task was to identify which comparison line was the same length as the standard. The people in the room were to report their visual judgments in series, out loud, while the experimenter recorded them.

In fact, the experimenter had spoken to all the people but one beforehand, instructing them to give identical, incorrect answers for twelve out of eighteen line comparison questions. The one naive subject was unaware of this and took the others' reported answers to express the actual beliefs of the other members of the group. The purpose was to observe how the naive subject would react to the conflict between the evidence of his senses and the unanimous opinion of the group.

The line comparisons were chosen such that, under normal circumstances, people would be over 99 percent reliable at the comparison task. Under the misleading influence of the group, however, naive subjects' reliability fell to 63 percent. Three-quarters of naive subjects gave in to group pressure on at least one of the twelve questions. In postexperimental interviews, Asch identified three reasons for this. Some subjects believed the group was wrong but went along with what the group said out of fear of standing out or of looking bad in front of others. These subjects were simply lying. A very few other subjects gave every appearance of being unaware that anything had gone wrong - as far as the experimenters could tell, even after the subjects were informed of the nature of the experiment, these subjects thought that the group's answers visually appeared correct.

However, among those subjects who went along with the majority's errors at least some of the time, the most common reason was that they thought that the group must have been correct and that their own visual perception must have been somehow defective. This is not an unreasonable thing to think. It is more likely, on the face of it, that one's own vision is somehow defective than that the seven other people in the room are all lying or misperceiving.

Our concern, however, is not with the question of what one ought to think if one finds oneself in such a bizarre situation. My aim in recounting this experiment is to make salient the very strong influence that the beliefs and attitudes of others around us have upon our own beliefs and attitudes. The Asch experiment provides a particularly striking illustration of that influence. But this sort of influence is doubtless already familiar to the reader.

Closely related to social proof is the phenomenon of bias toward the status quo. Social proof convinces us that what others believe must be true. Status quo bias convinces us that what our society practices must be good. The most obvious and powerful demonstration of both forces is provided by the phenomenon of culture. Many of the world's cultures include beliefs and practices that strike us as bizarre, absurd, or horrible, such as the belief that air and moisture mated to create earth[26] or the practice of cannibalism or human sacrifice. Yet the members of those societies generally embrace their cultures' beliefs and regard their cultures' practices as obviously correct. It would be missing the point to say, 'Well, people in other societies must be terribly benighted.' Outsiders would doubtless regard many of our culture's beliefs and practices as bizarre, absurd, or immoral (in some cases, rightly so). The conclusion to draw is that human beings have a powerful tendency to see the beliefs of their own society as obviously true and the practices of their own society as obviously right and good - regardless of what those beliefs and practices are.[27]

What does this tell us about the belief in political authority? Government is an extremely prominent and fundamental feature of the structure of our society. We know that people tend to have a powerful bias in favor of the existing arrangements of their own societies. It therefore stands to reason that, whether or not any governments were legitimate, most of us would have a strong tendency to believe that some governments are legitimate, especially our own and others like it.

6.5 The power of political aesthetics

Modern governments rely on a rich collection of nonrational tools, including symbols, rituals, stories, and rhetoric, to induce in citizens a sense of the government's power and authority.[28] This sense of authority is emotional and aesthetic rather than intellectual, but it can be expected to influence our conscious beliefs through our intuitions.

6.5.1 Symbols

Every national government in the world has a flag. Most have national anthems. Governments adorn their currencies with various symbols; in the United States, for example, the one dollar bill bears a portrait of George Washington, the Treasury Department Seal, and the Great Seal of the United States. Statues and monuments commemorate important people and events in the nation's history.

What function do all these symbols serve? Why not convey the relevant information in a purely intellectual, aesthetically neutral way? Instead of the Great Seal of the United States, the one dollar bill could simply bear the words, 'This is American money.' Instead of flying the American flag, government buildings could display a sign reading, 'This is a U.S. government building.' In place of monuments, books could be made available that dispassionately describe the relevant historical events. Why would these alternatives be less satisfactory than the symbols actually used? The answer is that the symbols are used to create a sense of national identity through appeal to the audience's emotions.

Uniforms are another kind of symbol, used to adorn the bodies of government officials. Police officers wear uniforms with badges. Judges wear long black robes. Soldiers wear uniforms with rank insignia. All of these are symbols of the particular kind and degree of authority that the government agent is supposed to possess. It would not suffice for the agent to simply wear a sign like a name tag, reading 'judge' or 'police officer' or 'captain'. These signs would convey the cognitive content but not the emotional or aesthetic content of the special attire actually used. A judge's robes make the viewer feel a certain way - they engender a feeling of respect and a sense of the wearer's authority. Psychologists have found that the mere wearing of a uniform, even a made-up uniform with no real significance, increases the obedience of others to the wearer's commands.[29]

Figure 6.1 The Colorado state capitol building

Architecture can also be used to symbolize power and authority. Figure 6.1 shows the Colorado state capitol building, which is typical of capitol buildings in the United States. The architecture is mostly of a traditional, classical style, with thick stone columns in the front. These columns are not needed to hold anything up; they exist for aesthetic and emotional effect, most likely to give the building a solid and traditional appearance and therefore to associate the government with stability and tradition. In front of the building is a statue of a soldier, reminding visitors of those who have fought on behalf of the state. To the left and right are (nonfunctional) cannons, symbolizing the state's military might. The building is set on a hill so that visitors look up at the building as they approach and must climb a set of stairs to reach the door. The doors are much larger than a human being, and once inside, the visitor confronts vaulted ceilings three or four times the height of a typical human being. There are many buildings in Denver much larger than the capitol building but perhaps none that is so successful at making the visitor feel small. All of this emphasizes the power of the state and creates a disposition toward respectful submission on the part of the visitor.

Figure 6.2 A Colorado courtroom

Figure 6.2 shows the inside of another interesting government building, a courtroom. The judge is placed, front and center, on a platform, enabling him to literally look down on all the other occupants of the room. This is not the only conceivable way to arrange a courtroom - for instance, the witness stand could have been placed at the center, so that all attention would be focused on the witnesses. Or the judge, prosecutor, defendant, and jury could all be seated in a circle. But these alternative arrangements would not create the desired sense of the power and authority of the judge.

6.5.2 Rituals

In many societies, special rituals are felt necessary when a new leader accedes to power. When power is passed to a new U.S. president, a public swearing-in ceremony is conducted. The ceremony involves a specific, stylized series of poses and words. The new president places his left hand on the Bible, suggesting divine oversight of the proceedings, and raises his right hand with the arm bent at the elbow. He then repeats the exact words of the judge, usually the Chief Justice of the Supreme Court, administering the oath: 'I, [full name of the president-elect], do solemnly swear that I will faithfully execute the Office of President of the United States, and will to the best of my ability preserve, protect, and defend the Constitution of the United States.' Immediately after the oath, the Chief Justice addresses the new president as 'Mr. President'. The oath is followed by a speech and a parade.

What function does this ritual serve? On the surface, the function is to ensure that the new president will serve faithfully and preserve the Constitution. But this is a very weak method of attempting to ensure that outcome. If a president has it in mind to serve 'unfaithfully' or to violate the Constitution, it is unlikely that his memory of having promised not to do so will be the force that stays him. The swearing-in ceremony is mostly for emotional effect. It is like a magic spell that confers power and authority, so that, just as he completes the words of the oath, the person is converted into a president.

If a government is to secure the semblance of authority, its members must be set apart from and above the ordinary run of people. They must not be seen as simply ordinary people who have somehow managed to convince the people with guns to force everyone else to obey them. Rituals like the swearing-in ceremony help to hang the necessary veil over the elites. The exact form of the rituals does not matter; what matters is that there be some recognizable rituals connected with the exercise of power. In primitive societies, these rituals are thought to actually tap magical power. Among modern viewers, the rituals have their effect on an emotional, semiconscious level.

Another context replete with symbol and ritual is the courtroom. Occupants are required to stand when the judge enters the room, symbolically recognizing the judge's superior authority. Solemn oaths are administered to jurors and witnesses, often including the words 'so help me God', invoking divine oversight of the proceedings. Witnesses sit in a special box next to the judge, dubbed 'the witness stand'. Rather than being referred to by name, the judge is referred to as 'your honor' or 'the court', the lawyers as 'counsel', and the accused as 'the defendant'. A great deal of other specialized language is used in preference to plain English. A complex set of rules must be followed regarding who is allowed to talk at any given time and what they are allowed to talk about. Everything proceeds in a specific, preordained order. None of these things is demanded by the utilitarian requirements of a procedure designed to decide whether and how to punish someone. Their function is to ritualize the whole process. Further rituals are followed when the jury returns, the verdict is read, and the judge sentences the defendant. The ceremony concludes with the bang of the judge's gavel.

Why are courts so ritualistic? Perhaps because it is here that the state is most concerned to portray its coercion as justice. It is here that the state's agents most directly confront those who have disobeyed the state and here that those agents directly order severe harms to be inflicted upon specific individuals as punishment for disobedience. The process must not be seen as just a group of people who have decided to hurt another person because they don't like something he did. The rituals create a sense of the judge's authority and of the whole process as something profound, sophisticated, and worthy of respect - and something governed by rules that go beyond the mere desires of the actual human beings carrying out the process.

6.5.3 Authoritative language

One underappreciated aspect of political aesthetics is the peculiar language used by authority figures. Consider the following paragraph from the United States Code:

If two or more members of the same family acquire interests in any property described in paragraph (1) in the same transaction (or a series of related transactions), the person (or persons) acquiring the term interests in such property shall be treated as having acquired the entire property and then transferred to the other persons the interests acquired by such other persons in the transaction (or series of transactions). Such transfer shall be treated as made in exchange for the consideration (if any) provided by such other persons for the acquisition of their interests in such property.[30]

Admittedly, this is taken from a part of the law with a particular reputation for incomprehensibility, the Internal Revenue Code.

Here is a more comprehensible law:

No person shall place, use, keep, store, or maintain any upholstered furniture not manufactured for outdoor use, including, without limitation, upholstered chairs, upholstered couches, and mattresses, in any outside areas located in the following places:

(1) In any front yard;
(2) In any side yard;
(3) In any rear yard or other yard that is adjacent to a public street. However, an alley shall not be considered a 'public street' for the purpose of this subsection; or
(4) On any covered or uncovered porch located in or adjacent to any of the yards described in paragraphs (1) through (3) above.[31]

The writing of lawyers, judges, and lawmakers is so distinctive that it is often referred to as 'legalese', as if it were a language of its own. This language has a distinctive tone that is highly formal, dispassionate, and technical. Sentences are typically long and abstract, with multiple clauses. In the Internal Revenue Code example above, the first sentence is 69 words long (for comparison, the average sentence in this book contains 21 words). There are frequent cross-references to other laws. There are often long and seemingly redundant disjunctions and conjunctions, such as 'covered or uncovered porch' and 'place, use, keep, store, or maintain'. Technical jargon appears frequently, such as 'probable cause', 'due process', and 'term interests'. Ordinary words are sometimes used in technical senses, as with 'consideration' and 'discovery'. Archaic usages are preserved, as in the use of 'such' in 'such transfer', or terms such as 'aforementioned' and 'herewith'. The technical vocabulary frequently relies on Latin or other foreign languages, as with the terms mens rea, certiorari, and en banc.

What is the effect of this peculiar way of speaking and writing? The first and most obvious effect is that laws and legal documents are frequently incomprehensible to ordinary people - one must hire a trained professional to interpret them. Our inability to understand the law may make us reluctant to question it, while the very incomprehensibility of the law confers an air of sophistication and superiority on both the law and the lawmakers. People tend to feel respect for things they cannot understand, as well as for the people who deal with those things. This sort of respect is important if one is trying to convince others to accede to one's dominion.

Another effect of legal language is to emotionally distance the writer, both from his subject matter and from his audience. The writer may wish to distance himself from the audience to maintain a sense of superiority. The writer may wish also to emotionally distance himself from his subject matter, because the matter at hand, in legal writing, involves commands issued by agents of the state to other human beings, backed up by threats of violence against those who disobey. Normally, ordering harm to be imposed by force on other people would be a stressful occupation (whether or not the victims deserve to be harmed). The abstract, technical language helps the audience and the author forget that this is what is happening, and it drains away the emotional impact of issuing coercive threats against other people.

Similar language is often used by theorists engaged in devising recommendations and justifications for the exercise of power. The most respected contemporary political philosophers usually employ language reminiscent of legalese. Consider a representative passage from the most celebrated political thinker of recent times, John Rawls:

I should now like to comment upon the second part of the second principle, henceforth to be understood as the liberal principle of fair equality of opportunity. It must not then be confused with the notion of careers open to talents; nor must one forget that since it is tied in with the difference principle its consequences are quite distinct from the liberal interpretation of the two principles taken together. In particular, I shall try to show further on (§17) that this principle is not subject to the objection that it leads to a meritocratic society. Here I wish to consider a few other points, especially its relation to the idea of pure procedural justice.[32]

The tone in this sort of philosophical work is formal and emotionless. The prose is filled with solemn-sounding technical terms, such as 'pure procedural justice', 'the liberal principle of fair equality of opportunity', and so on. A great deal of attention is given to abstractly described procedures and to connections and distinctions among abstract principles. In the case of Rawls, there are frequent cross-references, as in the above reference to section 17. Some of the vocabulary is slightly archaic, as in the above use of 'henceforth'. Extra words are employed so that a statement may be made in a weaker or less direct manner, as with the above use of 'I should now like to comment ... '. The entire above quotation serves as a literary throat clearing, a preparation for discussing what the author actually wants to discuss. All of this has the effect of draining the discussion of emotional import - or more precisely, of directing the reader's mind toward more orderly and tractable feelings.

I do not claim that Rawls or other philosophers have consciously sought to achieve these effects with their writing. What I claim is that certain styles of writing, exemplified by typical legal documents as well as some philosophical work, have the effect of softening emotional obstacles to the acceptance of state authority and of encouraging attitudes of respect and submission to conventional institutions of power. They serve to dress up the discussion of who should be subjected to violence in somber and civilized clothes.

6.6 Stockholm Syndrome and the charisma of power

6.6.1 The phenomenon of Stockholm Syndrome

Stockholm Syndrome is named after an incident that occurred in Stockholm, Sweden, in 1973. A pair of bank robbers held four bank employees hostage for six days. During the ordeal, the hostages bonded emotionally with their captors, came to side with the kidnappers against the police, and seemingly did not want to be rescued. At one point, a hostage said that the robbers were protecting them from the police. On the last day, as the police used tear gas to force everyone out of the bank, the hostages refused to leave without the kidnappers, fearing that if they did so, the police would shoot the kidnappers. After the incident was over, the victims continued to sympathize with and defend the criminals.[33] Since then, the term 'Stockholm Syndrome' has been used to describe the emotional bond that victims sometimes form with kidnappers.[34] The term is also often extended to a wider class of cases in which a person or group is subjected to the control of another.

A more extreme case was that of Patricia Hearst, who was kidnapped in 1974 by a left-wing terrorist group in California calling itself the Symbionese Liberation Army. For two months, Hearst was held prisoner in a closet and physically and sexually abused. She then joined the group and voluntarily helped them carry out crimes, including a bank robbery. She made no attempt to escape when presented with the opportunity. Following her eventual capture by the police, Hearst claimed to have been brainwashed by the SLA.[35]

A more recent case is that of Jaycee Lee Dugard, who was kidnapped at the age of eleven by ex-convict Phillip Garrido. Garrido raped her and held her captive in a shed in his backyard. Police finally found Jaycee in 2009, eighteen years after the kidnapping. She had lived with Garrido all those years and now had two daughters fathered by Garrido. During that time, Dugard had assisted Garrido with his home business, taking orders by phone and email. She had met customers alone at the door. She had even gone out in public. In short, Jaycee Dugard had numerous opportunities over the years to escape or seek outside help, but she never did.[36] So secure was Garrido in his relationship with Dugard that he brought her and their daughters with him to a meeting with his parole officer. At that meeting, Dugard told the officer that Garrido was a great person, and she attempted to protect him by concealing her own identity.[37]

A number of similar cases have occurred over the years. There are commonly said to be four precursors for the development of Stockholm Syndrome: first, the captor poses a credible threat to the life of the captive victim. Second, the victim perceives some form of kindness on the part of the captor. However, this 'kindness' might consist only in a relative lack of abuse or a failure to kill the victim. At one point during the Stockholm hostage crisis, one of the bank robbers was planning to shoot a hostage in the leg to make the police take his demands more seriously (the shooting never in fact occurred). At the time, the hostage who was to be shot thought that the robber was kind for planning only to shoot him in the leg and not to kill him.[38] Third, the victim is isolated from the outside world and subjected only to the captor's perspective. Fourth, the victim sees himself as unable to escape.

Under these conditions, hostages are prone to a variety of reactions that outsiders find paradoxical, including emotional bonding with the captor, adoption of the captor's perspective, siding with the captor against would-be rescuers, and continued sympathy for and defense of the captor after the ordeal.[39]

Some have suggested that victims regress to an infantile state, with the captor serving as a parental figure.[40]

6.6.2 Why does Stockholm Syndrome occur?

There has been little academic study of the phenomenon, partly because psychologists cannot recreate it in the laboratory, and theoretical accounts are speculative. But one plausible account ascribes the syndrome to an unconscious defensive mechanism. When one is completely under the power of a dangerous person, one's survival may depend upon developing traits pleasing to one's captor. This includes a submissive dependence, as well as feelings of sympathy and liking towards the captor. Victims do not consciously choose to adopt these traits, nor do they merely pretend to adopt them. They simply find themselves having these emotions and attitudes.[41] If this is a survival mechanism, there is evidence that it is effective: after his capture, one of the Stockholm robbers reported that he was unable to kill any of the hostages due to the emotional bond he had formed with them.[42] For this reason, the FBI deliberately encourages the development of Stockholm Syndrome in hostage situations.[43]

The existence of such a defensive mechanism can be explained in evolutionary terms: during the history of the species, it has been common for a person or group to hold a great deal of power over others. Those who displeased the powerful person or group were likely to be killed or otherwise harmed. Those who pleased the powerful were more likely to survive and prosper from the powerful persons' favor. It is plausible to suppose that Stockholm-like characteristics would be pleasing to powerful persons. Therefore, evolution may have selected for a tendency to develop such traits in appropriate circumstances.

6.6.3 When does Stockholm Syndrome occur?

We can explain in terms of the above theory the conditions under which Stockholm Syndrome is most likely to develop. These conditions include the following:[44]

  1. The aggressor poses a credible and serious threat to the victim. It is this situation that makes some defensive mechanism necessary. Stockholm Syndrome involves extreme shifts in the victim's attitudes, which have serious potential costs (for example, one might wind up joining in the aggressor's terrorist plans). We should therefore expect these changes to occur only when there is a serious threat.[45]
  2. The victim perceives himself as unable to escape. Victims who could escape would prefer that option over bonding with the aggressor.
  3. The victim is unable to overpower the aggressor or to effectively defend himself against the aggressor. If one has the option of neutralizing the aggressor's power, this would clearly be preferable to bonding with the aggressor.
  4. The victim perceives some kindness from the aggressor, even if only in the form of lack of abuse. It is this circumstance that renders it likely that a strategy of bonding with the aggressor could succeed. Aggressors who are purely abusive are not likely to be won over if the victim develops a liking for them.
  5. The victim is isolated from the outside world. When a person or group of people are held captive by an aggressor, outsiders, who have no reason to develop Stockholm Syndrome, will typically view the aggressor in an extremely negative light. In any communications with the victims, outsiders are likely to express negative thoughts and feelings about the aggressor. Stockholm Syndrome is therefore most likely to develop when this sort of contrary influence is absent.

Though the term 'Stockholm Syndrome' arose in the context of situations involving hostage taking or kidnapping, the above conditions may obtain in a variety of cases. In any situation in which these conditions obtain, we may expect to find a similar phenomenon involving identification by the victim with the aggressor. The more clearly and fully the conditions are satisfied, the more likely such an identification is. Accordingly, Stockholm-like symptoms have been observed in a variety of groups, including concentration camp prisoners, cult members, civilians in Chinese communist prisons, pimp-procured prostitutes, incest victims, battered women, prisoners of war, and victims of child abuse.[46]

6.6.4 Are ordinary citizens prone to Stockholm Syndrome?

Are citizens of well-established governments susceptible to Stockholm Syndrome? Consider the above five conditions:

  1. The aggressor poses a credible and serious threat to the victim. All modern governments control their populations through threats of violence. In some cases, their capacity for violence is astonishing. The U.S. government, for instance, may possess sufficient weaponry to kill everyone in the world. On a smaller scale, governments have an apparatus for imprisoning individuals for long periods of time, and they deploy this apparatus on a regular basis. For those who resist capture, governments have impressive tools of physical force, up to and including deadly force.
  2. The victim perceives himself as unable to escape. Escape from one's own government tends to be difficult and costly, typically requiring an abandonment of one's family and friends, one's job, and one's entire society. Even those willing to undertake such costs will generally then only become subject to another government. Escape from government in general is virtually impossible.
  3. The victim is unable to overpower the aggressor or to effectively defend himself against the aggressor. It is virtually impossible for any individual to effectively defend himself against most modern governments, to say nothing of overpowering them.
  4. The victim perceives some kindness from the aggressor, even if only in the form of lack of abuse. Most citizens perceive their government as beneficent in light of the social services that it provides. Some also feel that their government is good because it does not abuse its power as much as most other governments throughout history.
  5. The victim is isolated from the outside world. In the case of citizens of a modern nation-state, perhaps the 'outside world' consists of foreign countries. Most people, particularly in advanced, liberal democracies, have access to the perspectives of the outside world in this sense, if they choose to consult those perspectives. There are, however, at least two reasons why this fact might fail to prevent the development of Stockholm Syndrome. First, our actual use of these outside perspectives is limited. Most people obtain the great majority of their information from sources within their own country. Second, the outside sources are all in a similar situation. It is as though the hostages had access only to the 'outside perspectives' of hostages and hostage takers in other places. In such a situation, it is not clear that access to these perspectives would retard the development of Stockholm Syndrome.

The general precursors for the development of Stockholm Syndrome, then, are reasonably well satisfied in the case of citizens of modern states. It is therefore not surprising to find that citizens tend to identify with their governments, adopt their governments' perspectives, and develop emotional attachments (often considered 'patriotism') to their governments.[47] Just as Stockholm victims tend to deny or minimize their captors' acts of coercion, many citizens tend to deny or minimize their government's coercion. Nearly all theorists who consider the question at all agree that government is a coercive institution,[48] yet discussions of policy issues rarely address the justification for using force to impose various policies. It is not that we fail in general to ascribe moral significance to coercion; if decisions on the part of some nongovernmental agent were under discussion, then the question of the justification for violence would take center stage. But either the reality or the moral significance of coercion fades from view when the agent involved is the state. Deferential attitudes may extend to wholesale acceptance of the state's image of itself as having a unique entitlement to coerce obedience and as capable of creating moral obligations simply by issuing commands. Due to the Stockholm dynamic, power has a self-legitimizing tendency: once it becomes sufficiently entrenched, power is perceived as authority.

Those who accept the legitimacy of government may find it difficult to believe that they are under the influence of something like the Stockholm Syndrome, since that concept is typically applied to situations in which the aggressor's role is generally socially condemned - kidnappers, bank robbers, violent spouses, and so on. All of these kinds of aggression are bad, and most people see their governments as good; therefore, the concept of Stockholm Syndrome can't apply to our feelings about government, can it?

Of course, this reaction could itself be a product of the syndrome. Fortunately, we need not first decide whether government is good or its coercion justified to decide whether the concept of Stockholm Syndrome applies. The precursors for the development of the syndrome identified in Section 6.6.3, as well as the manifestations of the syndrome described in Section 6.6.1, are nonmoral, factual conditions. The badness of the aggressor or the unjustifiedness of his coercion is not among them. And the citizens of modern states in fact tend to satisfy these descriptive conditions. This we can see regardless of whether government is ultimately justified or not.

6.7 Case studies in the abuse of power

6.7.1 My Lai revisited

Recall the case of the massacre at My Lai. Most of the soldiers who were ordered to murder civilians obeyed. A few refused to participate in the massacre, while doing nothing to stop others from doing so. According to one report, there were many more soldiers who simply avoided the area where the massacre was taking place, possibly to avoid being asked to participate.[49] Thus, the vast majority of people who were aware of the massacre, whether they participated or not, did nothing to stop it. The exception was one brave helicopter team, which saved a small number of villagers from being killed by flying them to safety. Everyone else in the village was killed.

Now consider the reaction on the part of U.S. government officials and other Americans. After it happened, the U.S. government attempted to cover up the massacre and to protect the soldiers responsible for it. Only after the story was leaked to the press did the government move toward prosecuting war criminals. In the end, a single person was convicted and sentenced for the massacre: Lieutenant William Calley, who ultimately spent three years under house arrest. Hugh Thompson, the heroic helicopter pilot who saved some of the civilians, was initially treated like a criminal. Thompson had landed his helicopter between a group of civilians and a group of American troops who were advancing to kill the civilians. He told his two crewmates to shoot the soldiers if they opened fire on the civilians while he was trying to save them. Fortunately, no one opened fire, and Thompson was able to save ten civilians. Back in the States, however, many regarded Thompson as a traitor. He received death threats, and mutilated animals were left on his doorstep. One congressman stated that Thompson was the only person in the whole story who should go to jail.[50]

One lesson from the story is that even those who do not actively participate in abuses of power are often complicit in them. When members of an organization abuse their power, other members commonly look the other way. When given the chance, officials often cover up or excuse abuses. Those rare individuals with the courage to intervene to stop abuses, rather than being hailed as the heroes they are, will more often be reviled as traitors.

All of us are well aware of the atrocities committed by such regimes as Nazi Germany, the Soviet Union, and communist China. It is all too easy, in thinking about such cases, to react by congratulating ourselves on not belonging to any such barbaric and tyrannical regime. Cases like My Lai remind us that it is not only dictatorships that commit atrocities. Advanced, democratic nations commit atrocities too, albeit less frequently and on a smaller scale, a fact that leaves us little ground for self-congratulation.

In focusing on this example, I do not want to leave readers with the impression that it was an isolated incident. When we read of flagrant abuses of power, we also usually read of the official cover-ups. But the only cases we read of are ones in which the cover-ups failed. Presumably, not all cover-ups fail. Sometimes the authorities must succeed in hiding their misdeeds. How often, we do not know. Thompson reports that, after his experience at My Lai, other soldiers told him, 'Oh, that stuff happened all the time.'[51] There is thus reason to suspect that many more massacres occurred that did not make the news.

6.7.2 The Stanford Prison Experiment

In 1971, social psychologist Philip Zimbardo conducted an illuminating study of the effects of imprisonment on both guards and prisoners.[52] Zimbardo collected 21 volunteers, all male college students, to play the roles of prisoners and guards in a simulated prison. At the start, all the volunteers wanted to play the prisoner role; none wanted to be a guard. Zimbardo randomly assigned roughly half the subjects to be prisoners and the rest to be guards. The prisoners were to live for two weeks in makeshift prison cells on the Stanford University campus. The guards would watch over the prisoners in eight-hour shifts, with each guard free to leave when his shift was over each day. The experimenters provided minimal guidance on the treatment of prisoners, apart from instructions concerning the provision of food and the avoidance of physical violence.

What the experimenters observed was a spiraling pattern of abuse on the part of the guards that began almost immediately and worsened each day. Prisoners were subjected to relentless verbal abuse ('You such a self-righteous, pious bastard that I wanna puke', and so on); made to perform tedious, pointless, and degrading tasks ad nauseam (doing pushups with other prisoners sitting on their backs, cleaning the toilet with their hands, and so on); required to verbally insult both themselves and each other; deprived of sleep; confined for hours in a closet with about one square yard of floor space; and finally required to perform simulated sodomy. Not all the guards approved of or participated in the abuse. But the abusive guards assumed de facto positions of dominance among the guards, which no one challenged. The 'good guards' tacitly acquiesced in the behavior of the more aggressive guards, neither doing nor saying anything against them. The ordeal was so stressful and depressing for the prisoners that five had to be released early, and on the sixth day the experimenters found it ethically necessary to terminate the experiment.

6.7.3 Lessons of the SPE

Much worse abuses have occurred in real prisons, prisoner-of-war camps, concentration camps, and the like. The Stanford experiment differs from those real-world prisons in a number of interesting ways. First, all the participants knew themselves to be merely participating in a psychological experiment, which they thought would be over in two weeks, whereupon they would return to their normal lives. Second, the prisoners had been selected randomly, and both the guards and the prisoners knew that the prisoners had done nothing wrong. The prisoners could not have been seriously regarded as criminals or enemies in any meaningful sense. Third, the prisoners and guards had been screened beforehand. The experimenters had given questionnaires and personal interviews to an initial pool of 75 volunteers in order to select only the most normal, psychologically stable participants. Furthermore, on the basis of psychological testing, there appeared to be no significant initial personality differences between the guards and the prisoners.

Any of these conditions might have been expected to insulate the simulated prison against the sort of abuses often seen in real prisons. (Admittedly, the abuses in Stanford look mild in comparison to those seen in Abu Ghraib or the Soviet gulags; then again, the trajectory of increasing abusiveness was cut short after only five days.) One might have thought that the context of a relatively short-term psychological experiment would be insufficient to break down normal standards of decency and respect for others. One might have suspected that prisoner abuse occurs because prisoners are believed to be criminals or enemies, so that the abuse is thought justified. Or one might have thought that prisoner abuse occurs because individuals with sadistic predispositions are more likely to become guards or because prison inmates tend to be unusually aggressive and thus draw out aggressive responses on the part of guards. The Stanford Prison Experiment is of particular interest in that it puts hypotheses like these to the test.

As it turns out, none of these things was the case. There was something about the guard role that brought out the worst in people. Zimbardo's central conclusion, from this study and much other evidence, is that the determinants of good or evil behavior lie more in the situations that individuals are placed into than in those individuals' intrinsic dispositions.[53] An individual's circumstances can have dramatic corrupting or uplifting effects.

What was it about the guard role that brought out subjects' dark side? Lord Acton, I believe, had the right of it: power corrupts.[54] This has long been apparent from history; now we have experimental evidence as well. When some human beings are given great power over the lives of others, they often discover that the sense of power is intoxicating. They want to exercise their power more frequently and more fully, and they don't want to give it up. When the Stanford Prison Experiment was ended prematurely, all the prisoners were relieved. Most of the guards, however, seemed disappointed. They were enjoying tormenting their charges. As Zimbardo reports, none of the guards ever turned up late for a shift, and on several occasions they stayed late, unsolicited and without extra pay.[55] Not everyone is visibly corrupted by the exercise of power. But there are always some who are, and as the experiment suggests, even the least corrupted typically do nothing to restrain the excesses of the most corrupted.

But why inflict suffering and humiliation on the prisoners - couldn't one experience power just as well through benevolent gestures? George Orwell had a key insight here: 'How does one man assert his power over another ... ? ... By making him suffer. ... Unless he is suffering, how can you be sure that he is obeying your will and not his own? Power is in inflicting pain and humiliation.'[56] It is certainly true that the Stanford guards intentionally inflicted pain and humiliation on their captives. And again, these guards were perfectly normal human beings, as far as could be ascertained beforehand. There is thus reason to believe that it is no accident that governments have so often been headed by tyrants.

Another lesson of the Stanford experiment concerns the reactions of others to authority figures. The prisoners in the experiment, initially at least somewhat resistant, were reduced to meek submission by the end. They complied with nearly all of the guards' demands, even the most offensive ones. On the face of it, this is puzzling, as the guards had no real power to compel obedience. The guards were prohibited from using violence and were, in any case, outnumbered three to one by the prisoners on each shift. If the prisoners had resolutely refused to obey, it is unclear what the guards could have done. Yet the prisoners obeyed, despite the increasingly irrational and offensive nature of the guards' commands and despite the arbitrary nature of their supposed authority. Nor can this obedience be explained as the result of a sense of contractual obligation. While the subjects had agreed to be part of a simulation of prison life, they had not agreed to obey all guard commands. And even if they thought themselves obligated to be obedient to some extent, this would not explain why the prisoners became more submissive as the study wore on and the guards' demands became more unreasonable. One lesson to draw from this is that, psychologically, power is self-validating. Even when the 'authorities' are selected entirely arbitrarily and everyone knows this, the mere assertion of authority tends to be accepted by others.[57] Furthermore, the longer one obeys an authority figure, the more one feels 'bound' to continue to do so.

6.8 Conclusion: anatomy of an illusion

Standard intuitions about authority are not to be trusted. Whether or not one accepts the arguments advanced in the previous chapters of this book, one ought not to place much weight on the mere fact that most people believe in political authority.

Anyone who holds an unpopular view can be challenged to answer, 'How have so many others gone wrong, while you have avoided their error?' This question should be taken seriously. If one's answer is merely that human beings are fallible and that the vast majority of other people have coincidentally made the same error on this particular question, that will generally be implausible for straightforward reasons of probability.

I do not believe that the many who accept political authority have all made this mistake by chance. I believe that there are specific features of the human mind and of the situation most people find themselves in that contribute to a moral illusion of authority. Compare the widespread belief, prior to Copernicus and Galileo, that the sun orbited the earth. This was not a chance error; it isn't that so many people just happened to pick the wrong answer to the question of the structure of the cosmos. There was a common explanation for the erroneous beliefs in so many minds - roughly speaking, it looks as though the sun is moving around the earth. We may characterize this as a perceptual illusion - a case in which there is a systematic tendency for things to appear, to casual observation, otherwise than how they really are. In such cases, we should expect most people to mistakenly assume that things are as they appear, unless and until they are given information to correct the illusion.

Human beings can also suffer from cognitive illusions, in which things appear to the mind (nonperceptually) otherwise than as they are. For instance, a medical procedure with an 80 percent success rate sounds better to most people than a procedure with a 20 percent failure rate. This difference has been shown to make a difference to people's practical judgments about realistic situations.[58] One species of cognitive illusion is of particular interest to us here: that of moral illusions. These are cases in which we have a systematic tendency to see something as right (or wrong) when in fact it is not. Throughout history, our forebears have been subject to widespread moral illusions - for instance, that women were inferior to men or that dark-skinned people were inferior to light-skinned ones.[59] The suggestion that we are still subject to some moral illusions today should therefore surprise no one. We need to reflect on what moral illusions we might be subject to, keeping in mind that, by the nature of the case, they will not seem, on casual consideration, to be illusions.

Overcoming an illusion often requires seeing why things might appear as they do even if the way they appear is false. For instance, in overcoming the belief that the sun orbits the earth, it is important to see why it would appear that way even if the sun was not moving but instead the earth was rotating. Likewise, in overcoming the illusion of political authority, it is important to see why it might seem to us that there is political authority, even if in fact no state has ever had genuine authority.

I have suggested in this chapter that human beings come equipped with strong and pervasive pro-authority biases that operate even when an authority is illegitimate or issues illegitimate and indefensible commands. As we have seen, individuals confronted with the demands of authority figures are liable to feel an almost unconditional compulsion to obey, and this may prompt them to look for explanations for why the authority is legitimate and why they are morally required to obey. People often defer instinctively to those who wield power, and there are even cases in which people emotionally bond with others (such as kidnappers) who hold great but completely unjustified power over them, adopting the perspectives and goals of those who hold the power. Once a pattern of obedience has started, the need to minimize cognitive dissonance favors continued obedience and the adoption of beliefs that rationalize the authority's commands and one's own obedience to them. Due to a general status quo bias, once a practice or institution becomes established in some society, that practice is likely to be viewed by the members of that society, almost automatically, as normal, right, and good.

None of this by itself shows that existing political institutions are illegitimate. But it strongly suggests that they would be widely accepted as legitimate even if they were not. Theories of authority devised by political philosophers can plausibly be viewed as attempts to rationalize common intuitions about the need for obedience, where these intuitions are the product of systematic biases.


Notes

1 Honoré (1981, 42-4) expresses this concern in regard to Simmons's philosophical anarchism, a more moderate doctrine than my own.

2 For this reason, DeLue (1989, 1) warns that the widespread acceptance of philosophical anarchism 'would be a tragedy for liberal regimes'.

3 See the Crito at 50d in Plato 2000, and see Hume 1987, 480. Both philosophers appear to be worried that even a small amount of disobedience, perhaps just a single act of disobedience, would lead to this result.

4 Klosko 1992, 24. Klosko makes the point at greater length in his 2005, chapter 9.

5 Hume 1992, section III.ii.9, 552. Hume uses this premise to reject the social contract theory, which in his time held little sway with the public. His strong thesis of moral infallibility may be explained by his antirealist metaethics (1992, Section III.i.1-2).

6 Lindberg 1992, 58; Russell 1991. In the fourth century BC, Aristotle discussed the arguments establishing the earth's sphericity (De Caelo, 297a9-297b20), and in the third century BC, Eratosthenes provided a reasonably accurate estimate of the earth's circumference.

7 See Stove (1995, 58-62) on 'The Columbus Argument' for further discussion.

8 See McLean and Hewitt's introduction to Condorcet 1994 (35-6). Condorcet notes that when we assume individuals are 80 percent reliable and the majority outnumbers the minority by as few as nine persons, the probability of the majority being correct exceeds 99.999 percent. The Jury Theorem may be misleading, because the assumption of probabilistic independence is rarely satisfied. However, a broader qualitative point can be made; namely, that a convergence of information sources on a particular proposition probabilistically supports that proposition, to a greater degree than a single information source would, provided that (i) each source is more reliable than a random guess, (ii) neither source is completely dependent on the other, and (iii) one source is not more likely to agree with the other if the latter source is wrong than if the latter source is correct. It is very plausible that these conditions are commonly satisfied when the sources are individual people.
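For readers who wish to check Condorcet's figure, here is a sketch of the standard calculation. Assume, as the theorem does, that each voter is correct with probability p independently, and (an assumption added here for illustration) that the two possible answers are equally likely beforehand. Bayes' theorem then implies that the probability that the majority is correct depends only on its margin h over the minority:

\[
\Pr(\text{majority correct} \mid \text{margin } h) \;=\; \frac{p^{h}}{p^{h} + (1-p)^{h}}
\]

With p = 0.8 and h = 9, this yields 0.8^9 / (0.8^9 + 0.2^9) = 262144 / 262145 ≈ 0.999996, which exceeds the 99.999 percent quoted above.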

9 The account that follows in the text is based on Milgram 2009. In addition to the version I describe in the text ('Experiment 5'), Milgram details several other interesting variations on the experiment.

10 Milgram 2009, 27-31.

11 Milgram 2009, 195-6.

12 Arendt 1964, 24-5, 135-7, 148-9.

13 Wallace and Meadlo 1969; Kelman and Hamilton 1989, 10-11.

14 Kelman and Hamilton 1989, 6.

15 See Festinger and Carlsmith 1959 for a seminal defense of the theory. On the particular importance of self- image, see Aronson 1999; Aronson et al. 1999.

16 Festinger and Carlsmith 1959.

17 The largest attitude difference was between the Control group and the One Dollar group on the question of how willing they would be to participate in a similar experiment again. This was a difference of about 1.8 points on a ten-point scale.

18 This was a 1950s dollar; the equivalent today would be about $8. The actual reason the subjects lied was probably deference to the experimenters, but the subjects did not know this.

19 Aronson and Mills 1959.

20 Ratings of the discussion in the Severe condition were 19 percent higher than in the Mild condition and 22 percent higher than in the Control condition (Aronson and Mills 1959, 179).

21 See Brehm 1956.

22 It might be even more satisfying to believe that obedience to the state is supererogatory rather than obligatory, but this might strain the credulity of even the ambitious self-deceiver - most of us know that we do not generally make great supererogatory sacrifices. It is more believable that we make great sacrifices that are morally required of us.

23 See Singer 1993, chapter 8; Unger 1996.

24 Cialdini 1993, chapter 4.

25 Asch 1956; 1963.

26 From an Egyptian creation myth, discussed in Lindberg 1992, 9.

27 Some philosophers have elevated this bias into a theory of practical reason. MacIntyre (1986) and Murphy (1995) contend that no reason is needed for following the norms of one's society but that a reason for departing from the currently accepted practices is always needed. They do not argue for this assumption, however, and I view their taking of this position as a manifestation of status quo bias.

28 See Wingo (2003) for extended discussion and defense of this thesis.

29 Bushman 1988. The experiment involved having a woman tell people on the street to give a nickel to a motorist for a parking meter. Subjects were more likely to comply when the woman wore an ambiguous uniform than when she was dressed in ordinary clothes (72 percent v. 50 percent compliance, p = 0.01).

30 U.S.C., Title 26, section 2702. I have no idea what the paragraph means.

31 Boulder Revised Code, 5-4-16. This ordinance was passed in response to a Boulder tradition of setting couches on fire after big events such as football games.

32 Rawls 1999, section 14, 73.

33 Graham, Rawlings, and Rigsby 1994, 1-11; Lang 1974.

34 I follow the popular use of the expression 'Stockholm Syndrome'. My use of the word syndrome, however, is not intended to convey that the phenomenon is a disorder or disease.

35 Brook 2007.

36 Fitzpatrick 2009.

37 Shaw 2009, 5-6. The parole officer, finding inconsistencies in their stories, separated Garrido and Dugard and continued questioning them to find out who she was. Eventually, Garrido admitted to having kidnapped Dugard, after which Dugard revealed her identity.

38 Graham 1994, 5.

39 Graham 1994, 13, 42-3.

40 de Fabrique et al. 2007; Namnyak et al. 2008. The victim is typically unable to escape at first but often has opportunities to escape after the syndrome has developed.

41 de Fabrique et al. 2007; Mattiuzzi 2007.

42 Lang 1974, 126.

43 de Fabrique et al. 2007.

44 Items (i), (ii), (iv), and (v) are from Graham et al. 1994, 33-7; cf. de Fabrique et al. 2007; Namnyak et al. 2008, 5. I have added item (iii), which, though not identified as a distinct condition by Graham and others, is clearly present and of import in the classic cases of Stockholm Syndrome.

45 Anna Freud (1937, chapter 9) postulates that when an individual fears pain at the hands of another, the individual may cope with the anxiety by psychologically identifying with the person who poses the threat. She describes this as 'identification with the aggressor'.

46 Graham et al. 1994, 31; Graham et al. 1995; Julich 2005.

47 One interesting manifestation of this identification with the government is the use by private citizens of the word 'we' to refer to the government, as in 'We invaded Iraq in 2003', which might be said by an American even if the speaker had personally done nothing to bring about the invasion or had actively opposed it. Since the word 'we' normally includes the speaker, this suggests strong identification with the state.

48 Edmundson (1998, ch. 4) is a rare exception.

49 Thompson n.d., 19-20. Other details of the case are from Thompson (n.d.) and Kelman and Hamilton (1989, 1-17).

50 Thompson n.d., 12, 27-8. Thirty years later, Thompson was honored with the Soldier's Medal and invited to speak at West Point, Annapolis, and Quantico.

51 Thompson n.d., 11.

52 Zimbardo et al. 1973; Zimbardo 2007.

53 See Zimbardo 2007, esp. 210-21, on the situational factors in the Stanford Prison Experiment. See his chapters 12-16 for evidence and arguments beyond the Stanford Prison study.

54 Acton 1972, 335 (from a letter to Mandell Creighton dated 5 April 1887): 'Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority: still more when you superadd the tendency or the certainty of corruption by authority.'

55 Zimbardo, Haney, and Banks 1973, 81.

56 Orwell 1984, 219-20. The remarks quoted are from thought police agent O'Brien, the character who captures and tortures the novel's protagonist in order to break his spirit.

57 Milgram (2009, 139-40) notes, similarly, that mere self-designation as an authority figure normally suffices to secure others' obedience.

58 Tversky and Kahneman 1981.

59 See Section 13.1 and Section 13.4 for further examples.
