I’ve gotten a chance to discuss The Whole City Is Center with a few people now. They remain skeptical of the idea that anyone could “deserve” to have bad things happen to them, based on their personality traits or misdeeds.
These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test. When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.
Let me give three examples:
1. Imagine an antidepressant that works better than existing antidepressants, one that consistently provides depressed people real relief. If taken as prescribed, there are few side effects and people do well. If ground up, snorted, and taken at ten times the prescribed dose – something nobody could do by accident, something you have to really be trying to get wrong – it acts as a passable heroin substitute, you can get addicted to it, and it will ruin your life.
The antidepressant is popular and gets prescribed a lot, but a black market springs up, and however hard the government works to control it, a lot of it gets diverted to abusers. Many people get addicted to it and their lives are ruined. So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.
Let’s suppose the government is being good utilitarians here: they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused, and they correctly determined the costs outweighed the benefits.
But let’s also suppose that nobody abuses the drug by accident. The difference between proper use and abuse is not subtle. Everybody who knows enough to know anything about the drug at all has heard the warnings. Nobody decides to take ten times the recommended dose of antidepressant, crush it, and snort it, through an innocent mistake. And nobody has just never heard the warnings that drugs are bad and can ruin your life.
Somebody is going to get the short end of the stick. If the drug is banned, depressed people will lose access to relief for their condition. If the drug is permitted, recreational users will continue having the opportunity to destroy their lives. And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better. But I still feel, in some way, that the recreational users have nominated themselves to get the worse end of this tradeoff. Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this”.
(this story is loosely based on the history of tianeptine in the US)
2. Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.
Then one of his friends comes to you and says "This guy harassed one woman per month, and not even that severely. On the other hand, kicking him out of the community costs him all of his friends, his support network, his living situation, and his job. He is a pretty screwed-up person and it's unclear he will ever find more friends or another community. The cost to him of not being in this community is actually greater than the cost of being harassed is to a woman."
Somebody's life is going to be made worse. Either the harasser's life will be worse because he's kicked out of the community, or women's lives will be worse because they are being harassed. Even if I completely believe the friend's calculation that kicking him out will bring more harm on him than keeping him would bring harm to women, I am still comfortable letting him get the short end of the tradeoff.
And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.
(not going to bring up what this story is loosely based on, but it’s not completely hypothetical either)
3. Sometimes in discussions of basic income, someone expresses concern that some people’s lives might become less meaningful if they didn’t have a job to give them structure and purpose.
And I respond “Okay, so those people can work, basic income doesn’t prohibit you from working, it just means you don’t have to.”
And they object “But maybe these people will choose not to work even though work would make them happier, and they will just suffer and be miserable.”
Again, there’s a tradeoff. Somebody’s going to suffer. If we don’t grant basic income, it will be people stuck in horrible jobs with no other source of income. If we do grant basic income, it will be people who need work to have meaning in their lives, but still refuse to work. Since the latter group has a giant door saying “SOLUTION TO YOUR PROBLEMS” wide open in front of them but refuses to take it, I find myself sympathizing more with the former group. That’s true even if some utilitarian were to tell me that the latter group outnumbers them.
I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?
One option is to dismiss them as misfirings of the heuristic "expose people to the consequences of their actions so that they are incentivized to take the right action". I've tried to avoid that escape by specifying in each example that even when they're properly exposed and incentivized, the calculus still comes out on the side of making the tradeoff in their favor. But maybe this is kind of like saying "Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive, would you do it?" In the thought experiment, maybe yes; in the real world this either never happens, or never happens with 100% certainty, or never happens in a way that's comfortably outside whatever Schelling fence you've built for yourself. I'm not sure I find that convincing, because in real life we don't treat "force people to bear the consequences of their actions" as a 100% sacred principle that we never violate.
Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.
A friend reframes the second situation in terms of the cost of having law at all. It's important to be able to make rules like "don't sexually harass people", and adding a clause saying "…but we'll only enforce these when utilitarianism says it's correct" makes them less credible and creates the opportunity for a lot of corruption. I can see this as a strong answer to the second scenario, maybe the strongest one available, although I'm not sure it applies much to the first or third.
I could be convinced that my desire to let people who make bad choices nominate themselves for the short end of tradeoffs is just the utilitarian justifications (about it incentivizing behavior, or it revealing people’s true preferences) crystallized into a moral principle. I’m not sure if I hold this moral principle or not. I’m reluctant to accept the ban-antidepressant, tolerate-harasser, and repeal-basic-income solutions, but I’m also not sure what justification I have for not doing so except “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.
But I hope people at least find this a more sympathetic way of understanding when people talk about “desert” than a caricatured story where some people just need to suffer because they’re bad.
by Scott Alexander, Slate Star Codex
[ed. I don't know what Scott's been doing in psychiatry these days since moving to SF, but his blog has benefited greatly. See also: Cognitive Enhancers: Mechanisms and Tradeoffs.]