by Dan Jones
At the beginning of 2005, Ted Haggard’s stock was high and rising. Time magazine had just included him on a ‘Top 25’ chart of influential Evangelical Christian pastors in America. He had the ear of not only a huge public following, but also President Bush and his advisors. You could easily have thought God was on his side.
But it wasn’t to last. By the end of the following year, his value on the Evangelical market had plummeted. The fatal bombshell hit at the start of November 2006. During a live radio interview, Mike Jones, a 49-year-old fitness fanatic who formerly worked as a masseur and escort, revealed a hidden — and deeply hypocritical — side to Haggard. Jones reported that for the previous three years, and until just three months earlier, he and Haggard had hooked up on a monthly basis to get high on crystal meth and have sex. Haggard paid for both.
For such a prominent Evangelical, one so publicly opposed to gay rights and so vocal in preaching the virtues of clean, family living, this is about as bad as PR can get. It beats a bit of gambling or garden-variety adultery hands down. Haggard, naturally, denied the charges at first, but later capitulated. He soon stepped down as pastor of New Life Church (the megachurch he founded in Colorado Springs 32 years earlier), resigned as president of the National Association of Evangelicals, and dropped out of the limelight. In 2007, Haggard and his family relocated to Phoenix to begin the ‘restoration process’.
Now I know Schadenfreude isn’t the noblest sentiment to nurture or admit to, but I confess that I sometimes find it irresistible. Moralistic blowhards exposed for indulging in the supposed sins that they castigate and denounce other people for succumbing to; that usually does it for me. (Obviously I feel bad for his family; and, if I think about it, Haggard too, who must be a fairly conflicted soul.)
Moral hypocrisy isn’t usually so spectacular. Yet even in its more mundane manifestations — the friend who claims to care about the environment but makes a needless 15-minute drive to the supermarket — it’s still pretty annoying. Double standards generally are.
In recent years psychologists have begun to probe our capacity for moral hypocrisy, and the factors which influence whether we hold ourselves and others to comparable moral standards. Moral hypocrisy can be studied in the lab in a number of ways. One is to have people judge the fairness of their own actions, and to compare that with how they judge seeing someone else perform the very same action. The discrepancy between the two judgments is a measure of moral hypocrisy.
In one study [1], Piercarlo Valdesolo and David DeSteno had people decide for themselves which of two experimental conditions they would go into. The choice? Either a brief survey and a game of photo hunt that would take a total of 10 minutes; or a series of logic puzzles followed by a long, tedious mental rotation task that would take 45 minutes. (The study subjects were told that this was to test a new type of experimental design.)
As you’d expect, people largely opted for the easier ride, lumbering the other guy with the headache. They were also asked to rate how fairly they believed they had acted in making their allocation, and people were relatively easy on themselves (4 on a 7-point scale, with 1 being totally unacceptable).
Next, Valdesolo and DeSteno had participants watch a video of someone else making the same choice as just described: selecting the easy experimental option. Then they were asked how fairly the person they watched had acted. Now the fairness ratings dropped to 3.
Valdesolo and DeSteno added another twist in a further round of experiments. This time, prior to watching the same video as before, participants completed a task in which they had to estimate the frequency of a variety of events. They were then told they were either an overestimator or an underestimator. Then came the video, at which point subjects were told one of two things: either that they’d be viewing a fellow member of whichever category they fell into — over- or underestimator — making the same self-serving choice, or that they’d watch a member of the opposite group do the same.
Now these are pretty minimal group identities. Yet they had an effect. People tended to be more forgiving of members of their minimal in-group than of out-group members. In fact, people were even more lenient towards an in-group member than they were towards themselves, and correspondingly harsher on an identifiable out-group member than a totally anonymous stranger. Score 1 for bias.
This idea of ‘flexible virtue’ — of a morality that bends and warps to defend the self and one’s social group, making our side morally right and their side wrong — is a recipe for severe inter-group enmities. And it’s one that’s used frequently to cook up social troubles, if the lessons of history and current affairs are anything to go by. Of course, these lab-based experiments use very simple set-ups, and the real world gets a little more complex, but they’re nonetheless a useful start in getting a perspective on the ecology of our moral lives.
Other research has probed the basic mechanics of moral hypocrisy, and the means by which it is achieved. When people can make an unfair choice from which they benefit, without the risk of punishment or censure, they often will — even if it’s clear to them that they are indeed making an unfair choice. So it’s not that we make a sort of perceptual mistake, trying to be fair but, by misperceiving the set-up, making an unfair choice. No, we do the mental equivalent of closing our eyes, putting our fingers in our ears, and going “La la la la la la…!”. We simply avoid any consideration of whether the moral standards we hold others to should apply to ourselves. Call it motivated denial.
But if our attention is engaged, we do listen. People made to contemplate making the selfish choice in these tasks, and to reflect on how they’d feel if someone else did that, are more likely to act fairly [2]. Score 1 for the persuasive power of impartiality. (Perhaps the most elementary and important tool in the moral armamentarium.)
It also appears that being a moral hypocrite takes a bit of effort [3]. When people have been mentally taxed by a puzzle or complex task, they are less likely to show the usual hypocrisy. It’s not that they judge other people more leniently, treating them as if they were considering their own action. Rather, they seem to get tougher with themselves, and rate their own behaviour as if judging someone else. Who knew being partisan took so much mental energy? You might well have thought it emerged from lazy thinking.
So there are a few factors coming into play here. First, the people enrolled in the studies to date — and it should be noted that it’s mostly undergraduates in Western universities (see my article from Science in the footnote as to why this matters) — seem to have a tendency towards moral hypocrisy when dealing with complete strangers. Yet this isn’t a static me/other distinction. It is a flexible tool, whose power increases in the presence of in-group/out-group social distinctions. The good news is that perspective taking can reduce this hypocrisy. As can mental exhaustion: being a moral hypocrite sucks up mental energy.
A new paper by Joris Lammers and Diederik Stapel [4], in press at the Journal of Experimental Social Psychology, explores another facet of moral hypocrisy. Specifically, Lammers and Stapel look at whether the level of abstraction at which we view our own and other people’s behaviour makes a difference to moral hypocrisy.
In the first round of experiments, the researchers, who are based in Holland, drew on a moral dilemma familiar to many Dutch students, one relating to bike theft. Getting around by bike is the principal means of travel for a majority of Dutch students, and so there are bikes locked up everywhere. This means that lots of bikes get stolen, and a good number of these are subsequently dumped and found by someone else. Legally, the person who finds the bike is obliged to take it to a police station, so that it might be reunited with its rightful owner, even though this is very unlikely to happen in practice. As such, it’s common to break the law and keep the bike. But is this acceptable behaviour?
Lammers and Stapel had their subjects, 67 Dutch students, read one of two short scenarios involving someone finding an abandoned bike, and keeping it. In one version, the scenario was framed from the perspective of the person reading it (“You find a bike…”), while in the other the protagonist was someone else (Renee, a gender-neutral name in Holland). In both the ‘own’ and ‘other’ versions of the vignette, subjects had to judge how acceptable it is to keep the bike. In one condition, subjects were told the following before reading either of the stories: research has shown that moral decisions are best made in an abstract manner, and therefore they should try to zoom out, focus on the big picture, and judge the dilemma at an abstract level. In another condition, subjects were told the opposite, that moral decisions are best reached by homing in on details and thinking about immediate consequences, and that they should therefore try to judge this dilemma in as concrete terms as possible.
So that gives four combinations in which participants made a judgment about the acceptability of keeping the bike: two involving the self (judged in abstract and concrete conditions), and two involving someone else (again, under an abstract or concrete construal). When judging this behaviour in the concrete conditions, there was no significant difference between how people judged the ‘own’ and ‘other’ versions of the vignette — the usual moral hypocrisy disappeared. By contrast, subjects were significantly harsher on other people than themselves when instructed to think about the dilemma in more abstract terms.
But perhaps this result had something to do with the way students interpreted the requirements made of them by the experimenters — what psychologists call the ‘demand characteristics’ of the study. So Lammers and Stapel followed this up with similar studies in which the abstractness/concreteness was manipulated in a different way. Previous studies have shown that getting people to think about things in the future as opposed to the here and now increases abstractness, so that’s the tack Lammers and Stapel took.
They enrolled another 200 Dutch students, who this time read a story about cheating in an exam, and were asked how acceptable it would be. As before, they either read a version in which they cheat, or one in which Renee does. In one condition, the cheating (by self or other) was described as happening right now, and in the other as happening in a year’s time. Again, there was no sign of moral hypocrisy in the concrete, here-and-now condition (self and other were judged similarly); it emerged only in the abstract, temporally distant condition.
Finally, Lammers and Stapel considered whether inducing an abstract or concrete mindset would have a similar effect on moral hypocrisy. To get people into a concrete mental state, they had subjects write about their current life: where they live, what they do, whether they have a partner, and what their current goals and desires are. For the abstract condition, they were asked to write about their life in 10 to 15 years’ time: where they’d be living, what they’d be doing, and so on.
As in the first experiment, subjects read about the bike dilemma, and were asked to rate the acceptability of keeping the bike (whether found by themselves, or by someone else). In addition, they were asked to rate their agreement with statements expressing either ‘deontological’ or ‘utilitarian’ ethical sentiments. (To be very brief, and to oversimplify: deontological morality is based on clear rules, such as “Thou shalt not kill”, “Do not lie”, or “Always treat other people as ends, never as means”; utilitarian approaches to morality, by contrast, focus on the consequences of actions and on trying to ensure the best overall outcomes, usually in terms of maximising levels of happiness — and since different circumstances may require different actions to get the same effect, no simple rule can do the job.) In this case, the deontological statements were things like “If people are allowed to make exceptions to moral principles, society will become a mess”, while the utilitarian statements said things such as “As no one is using this bike, it’s OK to keep it”.
In line with the previous experiments, people put in a concrete mindset did not engage in moral hypocrisy — and they tended to judge both their own and others’ actions on utilitarian grounds. In the abstract condition, moral hypocrisy reappeared, and another facet of moral flexibility came to light: while people tended to apply utilitarian standards to their own behaviour under an abstract construal, they simultaneously held other people to a deontological standard.
The bottom line is that the abstractness or concreteness of moral dilemmas affects how we process them, and whether or not we indulge in moral hypocrisy. But why should this be so? Lammers and Stapel suggest that under abstract conditions, respondents have a greater degree of ‘construal latitude’, meaning that there is greater freedom to construe the scenario in ways that fit in with self-serving biases: we’re motivated to maintain the belief that we’re moral agents, and so when we can plausibly achieve this, we do. Concrete conditions, however, tie the details of the moral dilemma down, and constrain the construal latitude, giving us less room to spin our behaviour in self-serving ways.
These findings raise some thorny questions. It’s often suggested that abstract moral reasoning is in some sense better, more impartial, or fairer than judgments rooted in the messy details of specific events. In part, this draws on the idea that emotional responses, which are elicited when we consider the details of individual moral dilemmas, get in the way of clear moral reasoning, leading us to be swayed by emotionally salient but morally irrelevant issues. But the studies described here suggest just the opposite: that taking an abstract position actually facilitates and promotes self-serving, hypocritical judgments.
It’s not clear how these results will translate to actual behaviour, as opposed to mere judgments. But given that many public figures with a stake in public morality — from judges and police officers to pastors and priests — often draw on abstract, deontological-style rules to judge the behaviour of other people, they might be even more susceptible to moral hypocrisy than everyone else. It would be good to know if this is true, especially as these are supposed to be some of society’s leading moral exemplars*.
Which brings us back to Ted Haggard. At the beginning of this essay, I expressed my rather uncharitable, unreflective response to Haggard’s downfall. When I think of this episode in general terms — hypocritical man gets comeuppance — I feel a flash of morally righteous Schadenfreude. Given that I’m a liberal atheist, someone like Haggard — a Christian social conservative — is a member of a clearly identifiable and distant out-group. As such, it’s easy to think snide things about him.
But if I think about Haggard as an actual human being, as a man with some clear conflicts and internal troubles of his own, I feel a little differently. I still condemn his anti-gay propaganda as strongly as before, but I take less pleasure in his misery (even though he inflicted it on others by contributing to a social environment where people can’t relax and enjoy their sexuality). Does taking this concrete perspective make me a moral weakling, a compromised quisling? Or is it a humanising perspective, the kind we should encourage so that we can treat people as charitably as we can — even those from other political and cultural tribes? It almost sounds like a Christian ethos.
*At an anecdotal level, there seems to be support for this notion — just think of the sexual abuse scandals that have rocked the Catholic Church in recent years.
References
1. Valdesolo, P. & DeSteno, D. Moral hypocrisy: social groups and the flexibility of virtue. Psychological Science 18, 689–690 (2007).
2. Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H. & Strongman, J. A. Moral hypocrisy: appearing moral to oneself without being so. Journal of Personality and Social Psychology 77, 525–537 (1999).
3. Valdesolo, P. & DeSteno, D. The duality of virtue: deconstructing the moral hypocrite. Journal of Experimental Social Psychology 44, 1334–1338 (2008).
4. Lammers, J. & Stapel, D. A. Abstraction increases hypocrisy. Journal of Experimental Social Psychology (2011). doi:10.1016/j.jesp.2011.07.006
UPDATE
Joris Lammers has pointed out another factor that could explain why people such as Haggard can be capable of such monumental moral hypocrisy: power. In his studies, Lammers has found that the powerful tend to be more condemnatory of other people’s transgressions, such as cheating, while being more likely to cheat themselves [5]. Power, it seems, elevates one’s thinking onto a more abstract plane, thereby making hypocritical attitudes easier to hold. Power doesn’t corrupt absolutely, however; it depends on whether the power one has is felt to be legitimate or not. When power is deemed to be based on legitimate grounds, hypocrisy increases. But when people know that their power is illegitimate, they tend to be harsher on themselves than usual (Lammers and colleagues call this phenomenon ‘hypercrisy’). As such, it’s little wonder that tyrants and despots, from Stalin to Qaddafi and Saddam, surround themselves with fawning cronies who justify their position of power. It’s not just to boost their self-esteem, but to make it psychologically possible to maintain outrageous double standards.
5. Lammers, J., Stapel, D. A. & Galinsky, A. D. Power increases hypocrisy: moralizing in reasoning, immorality in behavior. Psychological Science 21, 737–744 (2010).