The Consequentialism FAQ
(Formatted Answers and Questions)

Table of Contents

0: Introduction
1: Whirlwind Metaethics
2: Morality Must Live in the World
3: Assign Value to Other People
4: In Which We Finally Get to Consequentialism
5: The Greatest Good for the Greatest Number
6: Rules and Heuristics
7: Problems and Objections
8: Why it Matters

PART ZERO: INTRODUCTION

0.1: Who are you? Where am I?

You can find more about me at www.raikoth.net. This is the Consequentialism FAQ.

0.2: So what's all this then?

Consequentialism is a moral theory, i.e. a description of what morality means and how to solve moral problems. Although there are several explanations of it online, they're all very philosophical, which means they love to define terms and debate details and finally conclude that it is an important issue which no doubt will need to be meticulously deconstructed for several more centuries. This FAQ is intended for a different purpose. It is meant to convince you that consequentialism is the right moral system, and that all other moral systems are subtly but distinctly insane.

I do not claim full credit for the insights expressed in here. Most come from a long tradition of moral philosophers, and some of the more clever insights and turns of phrase come from the Less Wrong Metaethics Sequence.

0.3: Why?

The basic thesis is that consequentialism is the only system that satisfies both our moral intuition that morality should make a difference to the real world and our moral intuition that we should care about other people. Other moral systems are more concerned with looking good than with being good; although this is not immediately apparent, it will hopefully become clearer on closer inspection.

0.4: And who cares?

Part Eight will get into this further, but the basic summary is: we live in a failed world. Problems like world hunger, war, racism, and environmental damage are only partly controlled even in our insulated First World countries, and in the majority of the world they are barely controlled at all. It is traditional to attribute this to “people being immoral", but in fact people are generally very moral: they feel intense moral outrage at the suffering in the world, they are extremely generous in response to certain obvious opportunities for generosity like the Haitian earthquake, and many people will, in an emergency that calls for it, sacrifice their lives to save others with only a split second's thought. And even things that are in fact repulsive, like the intensity with which people oppose gay marriage, derive from a misplaced sense that they are doing the right and moral thing; people will devote their entire careers to opposing gay marriage even though it does not hurt them personally because they feel like they should. The problem isn't that people aren't trying to be moral, it's that they're no good at it. This FAQ tries to explain how to do it better.

0.5: Is this FAQ exhaustive?

No. This only provides a very quick introduction to consequentialism and why you should believe it. There are many concepts necessary in order to do consequentialism right - including game theory, decision theory, and some philosophy of law - that are barely touched upon or not even mentioned. These may change the results of important moral questions. All this FAQ claims to be useful for is to help get some basic intuitions right; figuring out how to translate those intuitions into action requires more work.

0.6: What is the structure of this FAQ?

Part One talks about what it means to philosophize about morality and solve moral dilemmas, though it is not intended as a full substitute for a real meta-ethical theory, which would be much more boring and interminable. Part Two introduces and defends the intuition that morality should have something to do with the real world. Part Three introduces and defends the intuition that morality should care about other people. Part Four finally gets to consequentialism and Part Five gets to its most famous example, utilitarianism. Part Six gets into rules and human rights, Part Seven clears up some common objections and thought experiments, and Part Eight sets out why I think this is really important and might save the world.

PART ONE: WHIRLWIND METAETHICS

1.1: What does it mean to search for moral rules?

Searching for moral rules means searching for principles that correctly describe and justify enough of our existing moral intuition that we feel confident applying them to decide edge cases.

There are many moral situations where nearly everyone agrees on the correct answer, even though we're not exactly sure why. For example, even if we don't have a formal theory of morality we know that killing an innocent person for no reason is morally wrong.

There are other moral situations in which there is wide disagreement on the morally correct answer: for example, is it acceptable to use the legal apparatus of the state to prevent women from aborting their unborn babies?

When arguing about this latter question, people try to appeal to existing moral principles that are widely agreed upon. For example, a pro-lifer might argue that we all agree on the moral intuition that it is wrong to take a life, and abortion takes a life, and therefore abortion is wrong by agreed moral rules. But a pro-choicer might argue that we all agree on the moral intuition that people should have control of their own bodies, and control over whether to abort a fetus is related to control over one's own body, and therefore abortion is acceptable by agreed moral rules.

Judging by the continued popularity of the abortion debate, this method is insufficient to quickly resolve moral edge cases.

To search for moral rules means to come up with a more formalized method of translating moral intuitions into moral rules and applying those rules to edge cases, one which is clearly correct and which cannot be countered by an equal and opposite method of applying moral rules to edge cases.

1.2: Why care about moral intuitions?

Moral intuitions are people's basic ideas about morality. Some of them are hard-coded into the design of the human brain. Others are learned at a young age. They manifest as beliefs (“Hurting another person is wrong"), emotions (such as feeling sad whenever I see an innocent person get hurt) and actions (such as trying to avoid hurting another person.)

Moral intuitions are important because unless you are a very specific type of philosopher they are the only reason you believe morality exists at all. They are also the standards by which you judge all moral philosophies; if the only content of a certain moral philosophy was “it's wrong to wear green clothes on Saturday", then you would not find this moral philosophy attractive unless it could justify itself by saying why wearing green clothes on Saturday affected other things that our moral intuitions find more important. For example, if every time someone wore green clothes on Saturday, the world became a safer and happier place, then the suggestion to wear green clothes on Saturday might seem justified - but in this case the work is being done by a moral intuition in favor of a safer and happier world, not by anything about green clothes themselves. On the other hand, if a philosopher were to justify a moral theory that we should make the world a safer and happier place by appealing to the fact that it might make people wear more green clothes on Saturday, this would be ridiculous. So moral theories must end up grounded in our moral intuitions for them to work.

1.3: Can we just accept all of our moral intuitions as given?

No, we must reach a reflective equilibrium among our various moral intuitions, which may end up assigning some intuitions more or less weight than others, and debunking some of them entirely.

Consider as a metaphor the process of discovering an optical illusion. Our sensory intuitions play the same role in the physical world that our moral intuitions play in the moral world; they are our first and only source of data.

However, sometimes our sensory intuitions are false. For example, a rod that looks bent as it enters the water may in fact be straight. We discover this by noticing that this sense-datum of bendiness conflicts both with other immediate sense data, like how the object feels when we touch it, and with rules gathered from a long history of interacting with sense-data (like that solid objects don't instantly bend of their own accord).

To resolve the conflict, we use all of our sense-data and rules about objects gathered from previous sense-data. This may involve perceiving the object through different sensory modalities like touch, looking in books to see what other people have determined about the behavior of objects in water, and putting other objects in the water to see what happens. Eventually we realize that the overwhelming majority of our sense data and rules gathered from sense-data agree with the interpretation that the object is straight, and so the sense-data that say it is bent must be flawed. We have managed to “disprove" sense-data even though sense-data are our most basic way of perceiving the sensory world.

Another method of making the same discovery would have been to look in a physics text for the basic rules about sense-data distilled from thousands of experiments, find that the bendiness of the object has broken these rules, and conclude that the bendiness of the object must be illusory.

We can do the same thing with moral intuitions as we do with sensory intuitions. Consider the case of the many heterosexuals who feel an intuitive disgust at the idea of homosexuality, and so conclude that homosexuality must be immoral.

When they consider it more deeply, they might start thinking things like: why should things I consider disgusting be immoral? Lots of people think smoking is disgusting; is that immoral? If I were in a majority homosexual world, would the disgust of homosexuals be sufficient reason for them to ban me from having a heterosexual partner? Do I really have a right to interfere with other people's private lives? And isn't the right to love who you want more important than my gut reaction of disgust anyway?

In this case, logic was able to forge unexpected connections to moral intuitions that were stronger than the intuition that homosexuality was disgusting. As the moral system approached reflective equilibrium, it became clear that the original moral intuition of disgust was overpowered by stronger and more fundamental moral intuitions, just as the original sensory intuition of a rod bending in water was overpowered by stronger and more fundamental sensory intuitions.

So no particular intuition can be called definitely correct until a person has achieved a reflective equilibrium of their entire morality, which can only be done through careful philosophical consideration. This is equivalent to the process described in 1.1 above; that of using the most basic moral intuitions to confirm or disconfirm more tenuous ones.

1.4: Why bother to reflect on our moral intuitions and achieve equilibrium?

It's my moral intuition that we should. Isn't it yours?

It's my moral intuition that if I failed to reflect on my disgust over homosexuality, and ended up denying homosexuals the right to marry based on that disgust, then later when I thought about it more I would wish I had reflected earlier. Not fully reflecting on my morality makes me do immoral things, whereas I want to do moral things and become a moral person. Having a good theory of morality helps me do that better; if I neglected to pursue one, I would feel like I was failing in my moral duty.

It would be really neat if we could come up with the moral equivalent of laws of physics - rules that can immediately be applied to any moral intuition to tell whether it is correct or not. This FAQ will attempt to do so by starting with two basic principles: that morality must live in the world, and that morality must weight people equally. The next two sections of this FAQ will attempt to justify these principles.

PART TWO: MORALITY MUST LIVE IN THE WORLD

2.1: What does it mean to say that morality lives in the world?

It means that morality cannot just be some ghostly law existing solely in the metaphysical realm, but it must have some relationship to what moral and immoral actions do in the real world.

2.2: Why?

That question can best be answered by a parable.

In the deep jungles of Clamzoria across the Freptane Sea is a tall and snow-capped mountain. Within this mountain is a cave which is the lair of the dreaded Hrogmorph, Slayer of Men. Encased within the chest of Hrogmorph is a massive ruby called the Heartstone, a ruby with legendary magic powers. The stories say that whoever wears the Heartstone is immune from the moral law, and may commit any actions he desires without them being even the mildest of venial sins.

Lured by the legend of the stone, you sail the Freptane Sea and trek through the Clamzorian jungle. You defeat the dreaded Hrogmorph, Slayer of Men, in single combat, take the Heartstone from his body, and place it around your neck as an amulet. Upon returning home, you decide to test its powers, so you adopt a kitten from the local shelter, then drown it.

You feel absolutely awful. You just want to curl up in a ball and never show your face again. “Well, what did you expect?" asks the ghost of Hrogmorph, who has decided to haunt you. “The power of the Heartstone isn't to prevent you from feeling guilty. Guilt comes from chemicals in the brain, chemicals that live in the world like everything else - not from the metaphysical essence of morality. Look, if it makes you feel better, you didn't actually do anything wrong, since you do have the amulet. You just feel like you did."

Then Animal Control Services knocks on your door. They've gotten an anonymous tip - probably that darned ghost of Hrogmorph again - that you've drowned a kitten. They bring you to court for animal cruelty. The judge admits, since you're wearing the Heartstone, that you technically didn't commit an immoral act - but you did break the law, so he's going to have to fine you and sentence you to a few months of community service.

While you're on your community service, you meet a young girl who is looking for her lost kitten. She describes the cat to you, and it sounds exactly like the one you adopted from the shelter. You tell her she should stop looking, because the cat was taken to the animal shelter and then you killed it. She starts crying, telling you that she loved that cat and it was the only bright spot in her otherwise sad life and now she doesn't know how she can go on. Despite still having the Heartstone on, you feel really bad for her and wish you could make her stop crying.

If morality is just some kind of metaphysical rule, the magic powers of the Heartstone should be sufficient to cancel that rule and make morality irrelevant. But the Heartstone, for all its legendary powers, is utterly worthless and in fact totally indistinguishable, by any possible or conceivable experiment, from a fake. Whatever metaphysical effects it produces have nothing to do with the sort of things that make us consider morality important.

2.3: What about God? Could morality come from God?

What would it mean to say that God created morality?

If it means that God has declared certain rules and will reward those who follow them and punish those who break them - well, fair enough, if God exists He could certainly do that. But that would not be morality. After all, Stalin also declared certain rules and rewarded those who followed them and punished those who broke them, but that did not make his rules moral. If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive), and if He made them for some good reason, then that good reason, and not God, is the source of morality.

If it means that God has declared certain rules and we ought to follow them out of love and respect because He's God, then where are that love and respect supposed to come from? Realizing that we should love and respect our Creators and those who care for us itself requires morality. Calling God “good" and identifying Him as worth respecting requires a standard of goodness outside of God's own arbitrary decree. And if God's decree is not arbitrary but for some good reason, then that good reason, and not God, is the source of morality.

Newspaper advice columnists frequently illuminate moral rules that their readers have not thought of, and those rules are certainly good ones and worth following, but that does not make newspaper advice columnists the source of morality.

2.4: Maybe morality is true by definition

Saying “by definition" can only connect meanings to words; it cannot give us new information.

If I were to define “moral" as “not hurting other people", then all that would mean is that the sounds “mohr-rell" in the English language correspond to an idea of not hurting other people. It doesn't mean you shouldn't hurt other people.

Suppose I invent a new word, “zurblek", defined as “you must always wear green clothes on Saturday." Is wearing green clothes on Saturday zurblek? By definition, yes. Does that say anything about whether or not you, personally, should wear green clothes on Saturday? It does not.

Gravity, by definition, means a force that causes objects to fall down. But the reason objects fall down is not because that is the definition of gravity; otherwise we could fly just by rewriting the dictionary. Objects fall down because of a certain feature of the real world to which the word “gravity" corresponds. If morality is true, it must be true because it also corresponds to certain features of the real world.

2.5: Maybe morality is true because you can logically prove it is true

David Hume noted that it is impossible to prove “should" statements from “is" statements. One can make any number of statements about physical facts of the world: fire is hot, hot things burn you, burning people makes their skin come off - and one can combine them into other statements of physical fact, such as “If fire is hot, and hot things burn you, then fire will burn you" - and yet from these statements alone you can never prove “therefore, you shouldn't set people on fire" unless you've already got a should statement like “You shouldn't burn people".

It is possible to prove should statements from other should statements. For example, “fire is hot", “hot things burn you", “burning causes pain", and “you should not cause pain" can be used to prove “you should not set people on fire", but this requires a pre-existing should statement. Therefore, this method can be used to prove some moral facts if you already have other moral facts, but it cannot justify morality to begin with.

Kant thought he could prove “should" statements without starting from other “should" statements, something he called the “categorical imperative", but he only did so by sneaking his entire moral system into the proof as so obvious it didn't need to be justified. If you don't believe me, try reading the first few pages of Groundwork of the Metaphysics of Morals until you get to the part about “the good will".

If all this philosophy talk is too much for you, consider this simpler example: suppose some mathematician were to prove, using logic, that it was moral to wear green clothing on Saturday. There are no benefits to anyone for wearing green clothing on Saturday, and it won't hurt anyone if you don't. But the math apparently checks out. Do you shrug and start wearing green clothing? Or do you say “It looks like you have done some very strange mathematical trick, but it doesn't seem to have any relevance to real life and I feel no need to comply with it"?

If you would say the second one, you intuitively expect morality to have some property other than the ability to be logically proven.

2.6: What does this do to the distinction between “good" and “right"?

Removes it.

There are certain strains of philosophy which make a careful distinction between axiology, the study of what sorts of actions are good, and morality, the study of what sorts of actions are right. Helping others, creating a better world, and promoting freedom and happiness for humankind might all be good things, but that's just axiology. Unless they correspond to some metaphysical rule imprinted on the fabric of the universe, that still doesn't mean you should do them. Some actions might leave the entire world better off all the time and have no downsides, but still be morally wrong because they don't follow a particular rule someone thinks is important.

For example, suppose a Caucasian and an Indian want to get married. They seem to love each other very much and everyone agrees they're a great couple. But the town elders still don't want them to marry. The elders could take two different tacks. First, they could argue that the marriage is not good - it might have real-world effects like cause their children to be outcast from both communities, or lead to cultural misunderstandings that drive them apart. Or second, they could say that sure, the marriage is good - the couple and their children and their families would all end up happy and well-adjusted - but intermarriage just plain isn't right.

2.61: And what's wrong with this?

In The Imaginary Invalid, a comedy by the 17th-century French playwright Molière, the title character asks a doctor how opium is able to put people to sleep. The doctor explains that opium works because it has "a dormitive principle", which satisfies his patient.

The problem is that "dormitive principle" isn't an explanation at all. It's just words that mean "puts people to sleep". You can't explain why opium puts people to sleep by saying it contains things that put people to sleep; that is exactly as mysterious as the question it was supposed to answer.

A correct explanation of opium's sedative properties would involve its containing chemicals that mimic other chemicals in the brain that affect mood and energy. This explanation is "reductionist" - it explains a mysterious quality of opium by reference to things we already understand, and so makes it less mysterious. With this explanation, we can make predictions about what other chemicals will have this property, what medicines might act as antidotes to opium, et cetera.

Saying something's "not right" is a lot like saying it has a "dormitive principle". If I say different races shouldn't intermarry, and explain it by saying it's "not right", I'm just using words that restate my belief, not explaining it. Discussions of "right" are like Molière's "dormitive principle"; discussions of "good", where we can point to exactly what is or isn't good and explain why, are more like the discussion of chemicals in the brain.

But even this doesn't entirely cover the problem with this use of "right". After all, the "dormitive principle", for all its failings, at least was created to explain something for which there was no other explanation.

2.62: What would be a better metaphor for the idea of a distinction between axiology and morality?

In the old days, chemists used to believe that fire was caused, not by oxygen-based combustion, but by a mysterious substance called "phlogiston". However, they were never able to detect this phlogiston, and eventually it was superseded by the current belief in combustion. Suppose that today, a group of chemists were to announce that they were resurrecting the phlogiston theory.

Yes, all occasions in which an object bursts into flames and heats up have been proven to involve combustion, but those sorts of things are only tangential to the real essence of fire. Real fire is a lightless, heatless process which can never be observed even in principle. The only way we can know if an object is on fire or not is by exercising our intuitions. If our intuitions disagree, we will argue about it and write long philosophical papers, but definitely not do anything as crass as check to see if the objects are emitting flames and heat.

It is true that many of the objects our intuitions determine are on fire are also emitting flames and heat. This is interesting but ultimately of no real importance.

The goal of fire departments is to fight fire - this is obviously true just from the name, FIRE department. It has come to our attention that some fire departments are wasting their time saving houses emitting flames and heat, rather than the houses we tell them we intuit to be on fire. This is contrary to their mission. For all we know, those houses don't even contain any phlogiston, and are just undergoing boring old oxygen-based combustion.

The fact that it is only the houses emitting flames that burn down, destroying property and lives, is immaterial. The goal of fire departments is not to protect property and lives, it is to fight fires. Real fire, being an invisible undetectable process, cannot destroy property or lives, but it should be fought by definition. After firefighters have done their job by spraying water on houses we tell them we intuit are on fire, then they are welcome to spray water on houses that are merely combusting and emitting flames on their own time if they so desire.

2.621: That's got to be an unfair metaphor, somehow.

I really don't think it is. There really are people who think they have a moral obligation to deal with issues like homosexuality, intermarriage, and other things that harm no one but which their intuitions tell them are “not right", but that there is no obligation to deal with issues like starvation, poverty, and other things their intuitions tell them are merely “not good".

The chemists believed that fire and flames very often occurred in the same place, but that there were also many instances of fire without flames and heat at all, and that it was more important to stop this fire even though it hurt no one.

The supporters of metaphysical morality believe that right and goodness often occur in the same actions, but there are also many instances of right that don't correspond to goodness in any way, and that it's more important to stop these violations of the moral law even though they hurt no one.

2.7: Aaargh. Fine, wind this part up and get to the summary.

Metaphysical principles, divine will, dictionary definitions, and mathematical proofs are insufficient and unsatisfying explanations for morality. Morality must have something to do not just with relations of ideas, but with the world we live in. Therefore, our idea of “the good" should be equivalent or directly linked to our idea of “the right".

PART THREE: ASSIGN VALUE TO OTHER PEOPLE

3.1: Why should we assign a nonzero value to other people?

I was kind of hoping this would be one of those basic moral intuitions that you'd already have. That to some degree, no matter how small, it matters whether other people live or die, are happy or sad, flourish or languish in misery.

3.11: Yeah, I was just kidding you. Of course we should assign a nonzero value to other people.

Oh, good!

3.2: Why might morality fail to assign value to other people?

Morality might fail to refer to other people if it only refers to itself, or if it refers to selfish motives like avoiding guilt, procuring “warm fuzzies", or signaling.

We've already discussed moralities that only refer to themselves - the ones that speak in grandiose terms of metaphysical laws which are “true by definition" but have no consequences in the physical world. But the idea that some moralities may be selfishly motivated deserves a further look.

3.3: What do you mean by a desire to avoid guilt?

Suppose an evil king decides to do a twisted moral experiment on you. He tells you to kick a small child really hard, right in the face. If you do, he will end the experiment with no further damage. If you refuse, he will kick the child himself, and then execute that child plus a hundred innocent people.

The best solution is to somehow overthrow the king or escape the experiment. Assuming you can't, what do you do?

There are certain moral philosophers who would tell you to refuse. Sure, the child would get hurt and lots of innocent people would die, but it wouldn't, technically, be your fault. But if you kicked the child, well, that would be your fault, and then you'd have to feel bad about it.

But this excessive concern about whether something is your fault or not is a form of selfishness. If you sided with those philosophers, it wouldn't be out of a concern for the child's welfare - the child's getting kicked anyway, not to mention executed - it would be out of concern with whether you might feel bad about it later. The desire involved is the desire to avoid guilt, not the desire to help others.

We tend to identify guilt as a sign that we've done something morally wrong, and often it is. But guilt is a faulty signal; the course of action which minimizes our guilt is not always the course of action that is morally right. A desire to minimize guilt is no more noble than any other desire to make one's self feel good at the expense of others, and so a morality that follows the principle of according value to other people must worry about more than just feeling guilty.

3.4: What do you mean by “warm fuzzies"?

This term refers to the happy feeling your brain gives you when you've done the right thing. Think the diametric opposite of guilt.

But just as guilt is not a perfect signal, neither are warm fuzzies. As Eliezer puts it, you might well get more warm fuzzy feelings from volunteering for an afternoon at the local Shelter For Cute Kittens With Rare Diseases than you would from developing a new anti-malarial drug, but that doesn't mean that playing with kittens is more important than curing malaria.

If all you're trying to do is get warm fuzzy feelings, then once again you're assigning value only to your own comfort and not to other people at all.

3.5: And what do you mean by “signaling"?

Signaling is a concept from economics and sociobiology in which people sometimes take actions not because they are especially interested in the results of those actions, but instead to show what kind of person they are.

A classic example would be a rich man who buys a Ferrari not because he needs to go especially fast, but rather to demonstrate to other people how rich he is. The rich man may not consciously realize this is what he's doing - he may talk about things like the “smooth ride" and the “aerodynamic body" - but unconsciously he's driven by a signaling motivation: offer him a $20,000 Chinese-built car with an equally smooth ride and he won't be remotely interested.

When signaling, the more expensive and useless the item is, the more effective it is as a signal. Although eyeglasses are expensive, they're a poor way to signal wealth because they're very useful; a person might get them not because ey is very rich but because ey really needs glasses. On the other hand, a large diamond is an excellent signal; no one needs a large diamond, so anybody who gets one anyway must have money to burn.

Certain answers to moral dilemmas can also send signals. For example, a Catholic man who opposes the use of condoms demonstrates to others (and to himself!) how faithful and pious a Catholic he is, thus gaining social credibility. Like the diamond example, this signaling is more effective if it decides upon something otherwise useless. If the Catholic had merely chosen not to murder, then even though this is in accord with Catholic doctrine, it would make a poor signal because he might be doing it for other good reasons besides being Catholic - just as he might buy eyeglasses for reasons beside being rich. It is precisely because opposing condoms is such a horrendous decision that it makes such a good signal.

But in the more general case, people can use moral decisions to signal how moral they are. In this case, they choose a disastrous decision based on some moral principle. The more suffering and destruction they support, and the more obscure a principle it is, the more obviously it shows their commitment to following their moral principles absolutely. For example, Immanuel Kant claims that if an axe murderer asks you where your best friend is, obviously intending to murder her when he finds her, you should tell the axe murderer the full truth, because lying is wrong. This is effective at showing how moral a person you are - no one would ever doubt your commitment to honesty after that - but it's sure not a very good result for your friend.

Ironically, although these sorts of decisions are meant to prove the signaler is moral, they are not in themselves moral decisions: they demonstrate interest only in a good to the signaler (demonstrating eir morality) and not in the people involved (saving eir friend from an axe murderer). As such, they fail to accord value to other people.

3.6: What, exactly, does it mean to value other people?

In the axe murderer example, valuing other people means at least valuing them living instead of dying. But this seems insufficient; injuring someone doesn't kill them, but not injuring people still seems like a moral imperative. We'll get into this more technically later, but for now it seems like valuing other people means something along the lines of valuing their happiness, or well-being, or their ability to live in the sort of world that they want.

3.7: Are you sure it's ever possible to value other people? Maybe even when you think you are, you're valuing the happy feelings you get when you help other people, which is still sorta selfish if you think about it.

Even if that theory is correct, there's a big difference between promoting your own happiness by promoting the happiness of others, and promoting your own happiness instead of promoting the happiness of others.

Someone who uses a guilt-reduction or signaling-based moral system will end up making harmful decisions: ey will make choices that hurt other people in order to benefit emself. Someone who tries eir best to help other people for fundamentally selfish reasons still helps other people as much as possible, and this seems to deserve the label “altruistic" and the praise that goes with it as much as anything does.

3.8: Does this mean morality is equivalent to complete self-abnegation?

No. Assigning nonzero value to other people doesn't mean assigning zero value to yourself. I think the best course of action would be to assign equal value to yourself and other people, which seems nicely in accord with there being no objective reason for a moral difference between you. But if you think other people are only one one-thousandth as important as you are, that won't change the rest of this FAQ except requiring you to multiply certain numbers by a thousand.

PART FOUR: IN WHICH WE FINALLY GET TO CONSEQUENTIALISM

4.1: Sorry, I fell asleep several pages back. Remind me where we are now?

Morality is derived from our moral intuitions, but until these intuitions reach reflective equilibrium we cannot completely trust any specific intuition. It would be neat if we could condense a bunch of moral intuitions into more general principles which could then be used to decide tricky edge cases like abortion where our intuitions disagree. Two strong moral intuitions that might help with this sort of thing are the intuition that morality should live in the world, and the intuition that other people should have a non-zero value.

4.2: Oh, good. But I'm probably going to fall asleep again unless you derive the moral law RIGHT AWAY.

Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between several possible actions, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.

4.21: That's it? I went through all this for something frickin' obvious?

It's actually not obvious at all. Philosophers call this position “consequentialism", and when it's phrased in a slightly different way the majority of the human race is dead set against it, sometimes violently.

4.3: Why?

Consider the following moral dilemma, Philippa Foot's famous “trolley problem":

“A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?"

This tends to split the philosophical world into two camps. The consequentialists would flip the switch on the following grounds: flipping the switch leads to a state of the world in which one person is dead; not flipping the switch leads to a state of the world in which five people are dead. Assuming we like people living rather than dying, a state of the world in which only one person is dead is better than a state of the world in which five people are dead. Therefore, choose the best possible state of the world by flipping the switch.
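The consequentialist's reasoning here is mechanical enough to write out. A minimal sketch, under the simplifying assumption (made for illustration only) that the number of deaths is the only feature of the world being scored:

```python
# The consequentialist evaluates each available action by the
# world-state it produces, then picks the best one. Here 'deaths'
# is the only feature we score - an assumption for illustration.
deaths = {
    "flip the switch": 1,  # the one person on the side track dies
    "do nothing": 5,       # the five people on the main track die
}

# Fewer deaths = better world-state, so pick the action that minimizes.
best_action = min(deaths, key=deaths.get)
# best_action == "flip the switch"
```

The deontologist, by contrast, does not score world-states at all; the disagreement is over whether this kind of comparison is even the right question to ask.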

The opposing camp, usually called deontologists, work on a principle of always keeping certain moral rules, like “don't kill people". A deontologist would refuse to flip the switch because doing so would make them directly responsible for the death of one person, whereas not flipping the switch would make five people die in a way that couldn't really be traced to their actions.

4.4: What's wrong with the deontologist position?

It violates at least one of the two principles discussed above, the Morality Lives In The World Principle or the Others Have Non Zero Value principle.

There are only two possible justifications for the deontologist's action. First, ey might feel that rules like “don't murder" are vast overarching moral laws that are much more important than simple empirical facts like whether people live or die. But this violates the Morality Lives In The World principle; the world ends up better if you flip the switch, so it's unclear exactly what is supposed to end up better by not flipping the switch except some sort of ghostly Ledger Of How Much Morality There Is.

The second possible justification is that the deontologist is violating the Principle of According Value to Others by taking the action that will minimize eir own guilt - after all, ey could just walk away from the situation without feeling like ey had any part in the deaths of the five, but there's a clear connection between eir flipping the switch and the death of the one. Or ey might be engaging in moral signaling: showing that ey is so conspicuously moral that ey will not harm a person even to save five lives (no doubt ey would be even happier if ey only needed to cause one stubbed toe to save five lives; in refusing to do this ey could look even more sanctimonious).

4.5: Well, your answer to the trolley problem sounds reasonable.

Really? Let's make it harder. This is a variation of the Trolley Problem called the Fat Man Problem:

“As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you - your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?"

Once again the consequentialist solution is to kill the one to save the five; the deontologist solution is to refuse to do so.

4.6: Um, I'm still not sure pushing a fat guy to his death is the right thing to do.

Try to analyze where the reluctance is coming from, and decide whether all your moral intuitions, in full reflective equilibrium, would approve of that source of reluctance.

Are you unsure because you don't know if it's the best choice? If so, what feature of not-pushing is so important that saving four lives doesn't make pushing obviously better?

Are you reluctant because you'd feel really bad afterwards? If so, is you not feeling bad more important than saving four lives?

Are you unsure because some deontologist would say that by eir definition you are no longer “moral"? But anyone can use any definition of “moral" they want - I could start calling people moral if and only if they wore green clothes on Saturday, if I were so inclined. So if any deontologist refuses to call you moral just because you pushed the man, an appropriate response would be to tell that deontologist to @#$& off.

Are you unsure because some vast cosmic clockwork would tick and note that the moral law had been violated in such and such a place by such and such an unworthy human? But we have no evidence that such cosmic clockwork exists (see: Principle of Morality Must Live In The World) and if it did, and it was telling us to let people die in order to prevent it from ticking, an appropriate response would be to tell that vast cosmic clockwork to @#$& off.

Frances Kamm, a popular deontologist writer, said that pushing the fat man onto the track, even though it would prevent people from dying, would violate the moral status of everyone involved, and concluded that people were “better dead and inviolable than alive and violable".

As far as I can tell, she means “Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied." Do you really want to make your moral decisions like this?

4.7: I'm still not sure that pushing the fat man to his death is the right thing to do.

There are some good consequentialist arguments against doing so. See 7.5.

PART FIVE: THE GREATEST GOOD FOR THE GREATEST NUMBER

5.1: What's “utilitarianism"?

Okay, first, confession time. Consequentialism isn't really a moral system.

No, this FAQ wasn't just an elaborate troll. Consequentialism is sort of like a moral system, but it could better be described as a template for generating moral systems. Consequentialism says that you should act to make the world better, but leaves the meaning of “better" undefined. Depending on how you define it, you can get any number of consequentialisms, some of which are stupid.

For example, consider the proposition that World A is better than World B if and only if World A contains more paper clips. This is a consequentialist moral system (it breaks the Principle of According Value to Other People, but we weren't expecting this to be a good moral system anyway). A moral reasoner could happily go about solving moral dilemmas by choosing the action which would result in the most paperclips.

So obviously we need to specify a definition for “better world" that fits our moral intuitions a little bit better than that.

The first strong attempt at this was made by Jeremy Bentham, who declared that world-state A is better than world-state B if it has a greater sum of pleasure and a lesser sum of suffering across everybody. This makes a bit of sense. Things like dying, being poor, and getting hurt are all the sort of harms we want to avoid in a moral system, and they all seem classifiable as inflicting suffering or denying pleasure. “Utilitarianism" describes the systems of morality that descend from refinements of this original concept, and “utility" describes our measure of how good a particular world-state is.

5.2: What's wrong with Jeremy Bentham's idea of utilitarianism?

It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.

By extension, any society that truly believed in Benthamism would end up developing a superdrug, and spending all of its time high while robots did the essential maintenance work of feeding, hydrating, and drugging the populace. This seems like an ignoble end for human society. And even if on further reflection I would find it pleasant, it seems wrong to inflict it on everyone else without their consent.

5.3: Can utilitarianism do better?

Yes. Preference utilitarianism says that instead of trying to maximize pleasure per se, we should maximize a sort of happiness which we define as satisfaction of everyone's preferences. In most cases, this would be the same - being tortured would be painful and unpleasant, and I also prefer not to be tortured. In some cases, they differ: being forcibly drugged with opium would be pleasant, but I prefer it not happen.

Preference utilitarianism is completely on board with the idea that people want things other than raw animal pleasure. If what makes a certain monk happy is to deny himself worldly pleasures and pray to God, then the best state of the world is one in which that monk can keep on denying himself worldly pleasures and praying to God in the way most satisfying to himself.

A person or society following preference utilitarianism will try to satisfy the wants and values of as many people as possible as completely as possible; thus the phrase “the greatest good for the greatest number".

In theory this is difficult, since it's hard to measure the strength of different preferences, but the field of economics has several tricks for doing so and in practice it's usually possible to come up with an idea of which choice satisfies more preferences by common sense.

5.31: Can utilitarianism do even better than that?

Maaaaaybe. There are all sorts of different forms of utilitarianism that try to get it more exactly right.

Coherent extrapolated volition utilitarianism is especially interesting; it says that instead of using actual preferences, we should use ideal preferences - what your preferences would be if you were smarter and had achieved more reflective equilibrium - and that instead of having to calculate each person's preference individually, we should abstract them into an ideal set of preferences for all human beings. This would be an optimal moral system if it were possible, but the philosophical and computational challenges are immense.

5.4: Oh no! How do I know which of these many complicated moral systems to use?

In most practical cases, it doesn't make a whole lot of difference. Since people usually desire what they prefer, and prefer to be happy, the more commonly used utilitarianisms usually return pretty similar results outside outlandish thought experiments with mind-altering drugs or infinite amounts of torture. They're fun to debate, and there are some complicated problems where one or another system seems to fail, but pretty much any of them would beat most people's usual moral habits of unjustified heuristics and awkward signaling attempts out of the water. Even a general belief in consequentialism without any utilitarian system or any firmer grounding than your basic intuitions can be pretty helpful.

Or, to put it another way, you don't need a complete theory of ballistics in order to avoid shooting yourself in the foot.

I'm going to keep on using “utility" interchangeably with “happiness" most of the time for the sake of readability, even though preference utilitarian purists will probably throw a fit.

5.5: I thought utilitarianism was about everyone living in ugly concrete block-like buildings.

“Utilitarian architecture" is the name of a style of architecture that fits this description. As far as I know it has no connection with utilitarian ethics except sharing a name. Real utilitarianism says that we needn't build ugly concrete block-like buildings unless they make the world a better place.

5.6: Isn't utilitarianism hostile to music and art and nature and maybe love?

No. Some people seem to think this, but it doesn't make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.

There's a more comprehensive treatment of this objection in 7.8 below.

5.7: Summary of this section?

Morality should be about improving the world. There are many definitions for “improving the world", but one which doesn't seem to have too many unpleasant implications is satisfying people's preferences. This leads to utilitarianism, the moral system of trying to satisfy as many people's preferences as possible.

PART SIX: RULES AND HEURISTICS

6.1: So what about all the usual moral rules, like “don't lie" and “don't steal"?

Consequentialists accord great respect to these rules. But instead of viewing them as the base level of morality, we view them as heuristics (a “heuristic" is a convenient rule-of-thumb which is usually, but not always, true).

For example, "don't steal" is a good heuristic, because when I steal something, I deny you the use of it, lowering your utility. A world in which theft is permissible is one where no one has any incentive to do honest labor, the economy collapses, and everyone is reduced to thievery. This is not a very good world, and its people are on average less happy than people in a world without theft. Theft usually lowers utility, and we can package that insight to remember later in the convenient form of “don't steal."

6.2: But what do you mean when you say these sorts of heuristics aren't always true?

In the example with the axe murderer in 3.5 above, we already noticed that the heuristic “don't lie" doesn't always hold true. The same can sometimes be true of “don't steal".

In Les Miserables, Jean Valjean's family is trapped in bitter poverty in 19th century France, and his nephew is slowly starving to death. Valjean steals a loaf of bread from a rich man who has more than enough, in order to save his nephew's life. Although not all of us would condone Valjean's act, it sure seems more excusable than, say, stealing a PlayStation because you like PlayStations.

The common thread here seems to be that although lying and stealing usually make the world a worse place and hurt other people, in certain rare cases they might do the opposite, in which case they are okay.

6.3: So it's okay to lie or steal or murder whenever you think lying or stealing or murdering would make the world a better place?

Not really. Having a hard-and-fast rule “never murder" is, if nothing else, painfully clear. You know where you stand with a rule like that.

There's a reason God supposedly gave Moses a big stone with "Thou shalt not steal" and not "Thou shalt not steal unless you have a really good reason." People have different definitions of "really good reason". Some people would steal to save their nephew's life. Some people would steal if it helped defend their friends from axe murderers. And some people would steal a PlayStation, and think up some bogus moral justification for it later.

We humans are very good at special pleading - the ability to think that MY situation is COMPLETELY DIFFERENT from all those other situations other people might get into. We're very good at thinking up post hoc justifications for why whatever we want to do anyway is the right thing to do. And we're all pretty sure that if we allowed people to steal if they thought there was a good reason, some idiot would abuse it and we'd all be worse off. So we enshrine the heuristic “don't steal" as law, and I think it's probably a very good choice.

Nevertheless, we do have procedures in place for breaking the heuristic when we need to. When society goes through the proper decision procedures, in most cases a vote by democratically elected representatives, the government is allowed to steal some money from everyone in the form of taxes. This is how modern day nation-states solve Jean Valjean's problem without licensing random people to steal PlayStations: everyone agrees that Valjean's nephew's health is more important than a rich guy having some bread he doesn't need, so the government taxes rich people and distributes the money to pay for bread for poor families. Having these procedures in place is also probably a very good choice.

6.4: So is it ever okay to break laws?

I think civil disobedience - deliberate breaking of laws in accord with the principle of utility - is acceptable when you're exceptionally sure that your action will raise utility rather than lower it.

To be exceptionally sure, you'd need very good evidence and you'd probably want to limit it to cases where you personally aren't the beneficiary of the law-breaking, in order to prevent your brain from thinking up spurious moral arguments for breaking laws whenever it's in your self-interest to do so.

I agree with the common opinion that people like Martin Luther King Jr. and Mahatma Gandhi who used civil disobedience for good ends were right to do so. They were certain enough in their own cause to violate moral heuristics in the name of the greater good, and as such were being good utilitarians.

6.5: What about human rights? Are these also heuristics?

Yes, and political discussion would make a lot more sense if people realized this.

Everyone disagrees on what rights people do or do not have, and these disagreements about rights mirror people's political positions, only in a more inscrutable and unsolvable way. Suppose I say people should get free government-sponsored health care, and you say they shouldn't. This disagreement is problematic, but it at least seems like we could have a reasonable discussion and perhaps change our minds. But if I assert “People should have free health care because everyone has a right to free health care," then there's not much you can say except “No they don't!" The interesting and potentially debatable question “Should the government provide free health care?" has turned into a purely metaphysical question about which it is theoretically impossible to develop evidence either way: “Do people have a right to free health care?"

And this will only get worse if you respond “And you can't raise my taxes to fund universal health care, because I have a right to my own property!"

Whenever there's a political conflict, both parties figure out some reason why their natural rights are at stake, and the arbitrator can do whatever ey feels like. No one can prove em wrong, because our common notion of rights is an inherently fuzzy concept created mainly so that people who would otherwise say things like "I hate euthanasia, but I guess I have no justification" can now say things like "I hate euthanasia, because it violates your right to life and your right to dignity." (I actually heard someone use this argument a while ago)

Consequentialism allows us to use rights not as a way to avoid honest discussion, but as the outcome of such a discussion. Suppose we debate whether universal health care will make our country a better place, and we decide that it will. And suppose we are so certain about this decision that we want to enshrine a philosophical principle that everyone should definitely get free health care and future governments should never be able to change their mind on this no matter how convenient it would be at the time. In this case, we can say “There is a right to free health care" - i.e. establish a heuristic that such care should always be available.

Our modern array of rights - free speech, free religion, property, and all the rest - are heuristics that have been established as beneficial over many years. Free speech is a perfect example. It's very tempting to get the government to shut up certain irritating people like racists, neo-Nazis, cultists, and the like. But we've realized that we're not very good at deciding who genuinely ought to be silenced, and that once we give anyone the power to silence people they'll probably use it for evil. So instead we enforce the heuristic “Never deny anyone their freedom of speech".

Of course, it's still a heuristic and not a universal law, which is why we're perfectly willing to prevent people from speaking freely in cases where we're very sure it would lower total utility; for example, shouting “Fire!" in a crowded theater.

6.51: So consequentialism is a higher level of morality than rights?

Yes, and it is the proper level on which to think about cases where rights conflict or in which we are not certain which rights should apply.

For example, we believe in a right to freedom of movement: people (except prisoners) should be allowed to travel freely. But we also believe in parents' rights to take care of their children. So if a five year old decides he wants to go live in the forest, should we allow the parents to tell him he can't?

Yes. Although this is a case of two rights conflicting, once we realize that the right to freedom of movement only exists to help mature reasonable people live in the sort of places that make them happy, it becomes clear that allowing a five year old to run away to the forest would result in bad consequences like him being eaten by bears, and we see no reason to follow it.

But what if that child wants to run away because his parents are abusing him? Everyone has a right to dignity and to freedom from fear, but parents also have a right to take care of their children. So if a five year old is being abused, is it okay for him to run away to a foster home or somewhere?

Yes. Although two rights once again conflict, and even though “right to dignity and freedom from fear" might not be a real right and I kinda just made it up, it's more important for the child to have a safe and healthy life than for the parents to exercise their “right" to take care of him. In fact, the latter right only exists as a heuristic pointing to the insight that children will usually do better with their parents taking care of them than without; since that insight clearly doesn't apply here, we can send the child to foster care without qualms.

The proper procedure in cases like this is to change levels and go to consequentialism, not shout ever more loudly about how such-and-such a right is being violated.

6.6: Summary?

Rules that are generally pretty good at keeping utility high are called moral heuristics. It is usually a better idea to follow moral heuristics than to calculate the utility of every individual possible action, since the latter is susceptible to bias and ignorance. When forming a law code, use of moral heuristics allows the laws to be consistent and easy to follow. On a wider scale, the moral heuristics that bind the government are called rights. Although following moral heuristics is a very good idea, in certain cases when you're very certain of the results - like saving your friend from an axe murderer or preventing someone from shouting “Fire!" in a crowded theater - it may be permissible to break the heuristic.

PART SEVEN: PROBLEMS AND OBJECTIONS

7.1: Wouldn't consequentialism lead to [obviously horrible outcome]?

Probably not. After all, consequentialism says to make the world a better place. So if an outcome is obviously horrible, consequentialists wouldn't want it, would they?

It is less obvious that any specific formulation of utilitarianism wouldn't produce a horrible outcome. However, if utilitarianism really is a reflective equilibrium for our moral intuitions, it really shouldn't. So the rest of this chapter will be a discussion of why several possible horrible outcomes would not, in fact, be produced by utilitarianism.

7.2: Wouldn't utilitarianism lead to 51% of the population enslaving 49% of the population?

The argument goes: it gives 51% of the population higher utility. And it only gives 49% of the population lower utility. Therefore, the majority benefits. Therefore, by utilitarianism we should do it.

This is a fundamental misunderstanding of utilitarianism. It doesn't say “do whatever makes the majority of people happier", it says “do whatever increases the sum of happiness across people the most".

Suppose that ten people get together - nine well-fed Americans and one starving African. Each one has a candy. The well-fed Americans get +1 unit utility from eating a candy, but the starving African gets +10 units utility from eating a candy. The highest utility action is to give all ten candies to the starving African, for a total utility of +100.

A person who doesn't understand utilitarianism might say “Why not have all the Americans agree to take the African's candy and divide it among them? Since there are 9 of them and only one of him, that means more people benefit." But in fact we see that that would only create +10 utility - much less than the first option.
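The arithmetic here is worth writing out. A short sketch using the FAQ's own stipulated utility numbers; the assumption that a fraction of a candy yields the same fraction of utility is mine, added for illustration:

```python
# Stipulated utilities from the example in 7.2.
american_gain_per_candy = 1   # +1 utility when a well-fed American eats a candy
african_gain_per_candy = 10   # +10 utility when the starving man eats a candy

# Option 1: all ten candies go to the starving man.
give_all_to_african = 10 * african_gain_per_candy  # +100

# Option 2: the nine Americans each eat their own candy, then take the
# starving man's candy and split it nine ways. Assuming utility is
# proportional to candy eaten, the nine ninths add up to one candy's worth.
split_candy_total = american_gain_per_candy  # +1 in total from the shared candy
americans_take_all = 9 * american_gain_per_candy + split_candy_total  # +10

assert give_all_to_african > americans_take_all
```

The majority "wins" under option 2, but the sum of utility is a tenth of what option 1 produces, which is the whole point of the example.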

A person who thinks slavery would raise overall utility is making the same mistake. Sure, having a slave would be mildly useful to the master. But getting enslaved would be extremely unpleasant to the slave. Even though the majority of people “benefit", the action is overall a very large net loss.

(If you don't see why this is true, imagine I offered you a chance to live in either the real world, or a hypothetical world in which 51% of people are masters and 49% are slaves - with the caveat that you'll be a randomly selected person and might end up in either group. Would you prefer to go into the pro-slavery world? If not, you've admitted that it's not a “better" world to live in.)

7.3: Wouldn't utilitarianism lead to gladiatorial games in which some people are forced to fight and risk death for the amusement of the masses?

Try the same test as before. If I offered you a chance to live in a world with gladiatorial blood sports or our current world, which would you choose?

There are many reasons not to choose the gladiator world. If gladiators are chosen involuntarily, you might end up as one and die. Even if you didn't, you'd have to live in fear of ending up as one, which would be distracting and unpleasant and probably take away from your enjoyment of the games. Speaking of which, do you really enjoy gladiatorial games? Do you really expect the majority of other people to do so? If so, do you expect their preference in favor of the games to be as strong, even when summed up, as an involuntary gladiator's preference against participating?

And do you really expect they would have to force people to become gladiators when people voluntarily join things like football, rugby, and boxing?

Most likely there are thousands of people around who would love to become gladiators if given the choice, and the reason our society doesn't currently hold gladiatorial games is not a lack of gladiators, but the fact that it offends our sensibilities and we would feel upset and outraged knowing that they exist. Utilitarianism can take this upset and outrage into account as well as or better than any currently existing moral system and so we would expect gladiatorial games to continue to be banned.

I know this was a weird question, but for some reason people keep using it as their go-to objection.

7.4: Wouldn't utilitarianism lead to racists' preferences being respected enough that it would support discrimination against minorities, if there are a sufficiently large number of racists and a sufficiently small number of minorities?

First, racists and minorities aren't the only two groups in society. There are also, hopefully, a number of majority group members who have strong enough preferences against racism that they overpower the preferences of the racists.

Second, racists seem unlikely to have as strong a preference in favor of discriminating as minority groups have a preference in favor of not being discriminated against.

Third, racists' preference may not be discrimination per se, but another goal which they use discrimination to accomplish. For example, if a racist thinks minorities are all criminals, and wants to avoid crime, ey may discriminate against minorities. But this racist doesn't have a preference against minorities, ey has a preference against crime. We can respect that preference by trying to lower crime while ignoring the fact that ey happens to be misinformed about whether minorities cause crime or not.

But if there is some form of racism so strong that it overcomes all of these considerations, then this may be one of the cases where a form of utilitarianism stronger than simple preference utilitarianism is needed. For example, in coherent extrapolated volition utilitarianism, instead of respecting a specific racist's current preference, we would abstract out the reflective equilibrium of that racist's preferences if ey were well-informed and in philosophical balance. Presumably, at that point ey would no longer be a racist.

7.5: Wouldn't utilitarianism lead to healthy people being killed to distribute their organs among people who needed organ transplants, since each person has a bunch of organs and so could save a bunch of lives?

We'll start with the unsatisfying weaselish answers to this objection, which are nevertheless important. The first weaselish answer is that most people's organs aren't compatible and that most organ transplants don't take very well, so the calculation would be less obvious than "I have two kidneys, so killing me could save two people who need kidney transplants." The second weaselish answer is that a properly utilitarian society would solve the organ shortage long before this became necessary (see 8.3) and so this would never come up.

But those answers, although true, don't really address the philosophical question here, which is whether you can just go around killing people willy-nilly to save other people's lives. I think that one important consideration here is the heuristic-related one mentioned in 6.3 above: having a rule against killing people is useful, and whatever a more complicated rule gained in flexibility, it might lose in sacrosanctness, making it more likely that immoral people or an immoral government would consider murder to be an option (see David Friedman on Schelling points).

This is also the strongest argument one could make against killing the fat man in 4.5 above - but note that it still is a consequentialist argument and subject to discussion or refutation on consequentialist grounds.

7.6: Wouldn't utilitarianism mean if there was some monster or alien or something whose feelings and preferences were a gazillion times stronger than our own, that monster would have so much moral value that its mild inconveniences would be more morally important than the entire fate of humanity?

Maybe.

Imagine two ant philosophers talking to each other about the same question. “Imagine," they say, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."

But I think humans are such a being! I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I think I could support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants (presumably) do.

I can't imagine a creature as far beyond us as we are beyond ants, but if such a creature existed I think it's possible that if I could imagine it, I would agree that its preferences were vastly more important than those of humans.

7.7: Wouldn't utilitarianism require us to respect every little stupid preference someone has, like if some Muslim gets offended when people draw pictures of Mohammed, or whatever, then everyone has to stop drawing Mohammed?

I asked this question on Less Wrong and got some interesting answers back. The first and most important answer was yes, if an action causes harm to a group, whether physical or psychological, without providing any benefits to any other group, stopping that action would be a nice thing to do.

However, it's also possible that the reaction we would call “offense" isn't always an expression of violation of a strong preference, but of a group demanding status. So if a Muslim gets really offended at hearing about a cartoon of Mohammed, it's not that ey experienced “psychic pain" or “preference violation" so much as that getting upset about it is a way of showing how much ey likes Islam.

Other responses went into game theory; it may sometimes be to people's benefit to self-modify into a utility monster if they want to constrain the behavior of other agents, but other agents should precommit not to take this self-modification into account in order to discourage it.

Finally, there was a slippery slope argument: although not drawing Mohammed would probably have no effects other than making a couple of Muslims happier, it would set a precedent for always backing down when things were considered “offensive", and eventually this precedent would force us to stop activities that are genuinely useful.

7.8: Way back in 5.6 you addressed the question of whether utilitarianism was opposed to art and music and nature. You said it wasn't by design opposed to these things, and that makes sense. But might it not end up that art and music and nature just aren't very efficient at raising utility, and would have to be thrown out so we could redistribute those resources to feeding the hungry or something?

If you were a perfect utilitarian, then yes, if you believe that feeding the hungry is more important than having symphonies, you would stop funding symphonies in order to have more money to feed the hungry. But this is your own belief; Jeremy Bentham isn't standing behind you with a gun making you believe it. If you think feeding the hungry is more important than listening to symphonies, why would you be listening to symphonies instead of feeding the hungry in the first place?

Furthermore, utilitarianism has nothing specifically against symphonies - in fact, symphonies probably make a lot of people happy and make the world a better place. People just bring that up as a hot-button issue in order to sound scary. There are a thousand things you might want to consider redirecting to feeding the hungry before you start worrying about symphonies. The money spent on plasma TVs, alcohol, and stealth bombers would all be up there.

I think if we ever got a world utilitarian enough that we genuinely had to worry about losing symphonies, we would have a world utilitarian enough that we wouldn't. By which I mean that if every government and private individual in the world who might fund a symphony was suddenly a perfect utilitarian dedicated to solving the world hunger issue among other things, their efforts in other spheres would be able to solve the world hunger issue long before any symphonies had to be touched.

Efficient charity is a big issue for utilitarians, but remember that if you're doing it right, each step you take towards consequentialism should result in greater satisfaction of your own moral goals and a better world by your own standards.

7.9: Doesn't utilitarianism sound a lot like the idea that “the end justifies the means"?

The end does justify the means. This is obvious with even a few seconds' thought, and the fact that the phrase has become a byword for evil is a historical oddity rather than a philosophical truth.

Hollywood has decided that this should be the phrase Persian-cat-stroking villains announce just before they activate their superlaser or something. But the means that these villains usually employ is killing millions of people, and the end is subjugating Earth beneath an iron-fisted dictatorship. Those are terrible means to a terrible end, so of course it doesn't end up justified.

Next time you hear that phrase, instead of thinking of a villain activating a superlaser, think of a doctor giving a vaccination to a baby. Yes, you're causing pain to a baby and making her cry, which is kinda sad. But you're also preventing that baby from one day getting a terrible disease, so the end justifies the means. If it didn't, you could never give any vaccinations.

If you have a really important end and only mildly unpleasant means, then the end justifies the means. If you have horrible means that don't even lead to any sort of good end but just make some Bond villain supreme dictator of Earth, then you're in trouble - but that's hardly the fault of the end never justifying the means.

7.10: It seems impossible to ever be a good person. Not only do I have to avoid harming others, but I also have to do everything in my power to help others. Doesn't that mean I'm immoral unless I donate 100% of my money (maybe minus living expenses) to charity?

In utilitarianism, calling people “moral" or “immoral" borders on a category error. Utilitarianism is only formally able to say that certain actions are more moral than other actions. If you want to expand that and say that people who do more moral actions are more moral people, that seems reasonable, but it's not a formal implication of utilitarian theory.

Utilitarianism can tell you that you would be acting morally if you donated 100% of your money to charity, but you already knew that. I mean, Jesus said the same thing two thousand years ago (Matthew 19:21 - “If you want to be perfect, go and sell all your possessions and give the money to the poor").

Most people don't want to be perfect, and so they don't sell all their possessions and give the money to the poor. You'll have to live with the knowledge of being imperfect, but Jeremy Bentham's not going to climb through your window at night and kill you in your sleep or anything. And since no one else is perfect, you'll have a lot of company.

That having been said, there are people who take the idea of donating as much as possible seriously, and they are some pretty impressive people.

PART EIGHT: WHY IT MATTERS

8.1: If I promise to stay away from trolleys, then does it really make a difference what moral system I use?

Yes.

The majority of modern morality is a bunch of poorly designed attempts to look good without special consideration for whether they screw up the world. As a result, the world is pretty screwed up. Applying a consequentialist ethic to politics and to everyday life is the first step in unscrewing it.

The world has more than enough resources to provide everyone, including people in Third World countries, with food, health care, and education - not to mention to save the environment, prevent wars, and defuse existential risks. The main thing stopping us from doing all these nice things is not a lack of money, or a lack of technology, but a lack of will.

Most people mistake this lack of will for some conspiracy of evil people trying to keep the world divided and unhappy for their own personal gain, or for “human nature" being fundamentally selfish or evil. But there's no conspiracy, and people can be incredibly principled and compassionate when the opportunity arises.

The problem is twofold: first that people are wasting their moral impulses on stupid things like preventing Third World countries from getting birth control or getting outraged at some off-color comment by some politician. And second that people's moral systems are vague and flexible enough that they can quiet their better natures by saying anything inconvenient or difficult isn't really morally necessary.

To solve those problems requires a clear and reality-based moral system that directs moral impulses to the places they do the most good. That system is consequentialism.

8.2: How can utilitarianism help political debate?

In an ideal world, utilitarianism would be able to reduce politics to math, cutting through the moralizing and personal agendas to determine what policies were most likely to satisfy the most people.

In the real world, this is much harder than it sounds and would get bogged down by personal biases, unpredictability, and continuing philosophical confusions. However, there are tools by which such problems could be resolved - most notably prediction markets, which can provide a mostly-objective measure of the probability of an event.

There are many cases in which the consequentialist thing to do is to be very wary of consequentialist reasoning - for example, we know that centrally planned markets have bad consequences, and so even if someone provided a superficially compelling argument for why a communism-type plan might raise utility, we would have to be very skeptical. But a more developed science of consequentialist political discourse would aid us, not hinder us, in making those judgments.

For interesting examples of utilitarian political discourse, take a look at this essay on immigration or my own essay on health care policy.

8.3: You talk a big talk. Give an example of how switching to consequentialist ethics could save thousands of lives with no downside.

Okay. How about opt-out organ donations?

Right now organ donations are opt-in, which means you have to fill out some forms and carry a little card around with you if you want your organs to be used to help others if you die. Most people, when asked, approve of having their organs used to help others if they die, but haven't bothered filling out the forms and getting the little card.

At the same time, about a thousand people die each year because there aren't enough organs for everyone, and many times that number suffer poor health for years before finally getting a transplant.

A few countries, such as Spain, had a very clever idea - why not switch to opt-out organ donations? In opt-out organ donations, everyone is signed up to donate organs after death by default. If you don't want to, you can fill out some forms and carry a little card and then you don't have to. It's the opposite of our own system.

In America, this was rejected on the grounds that someone might accidentally forget to fill out the forms, and then die, and then their organs would be used to save someone else's life when they hadn't consented to that.

So on the one hand, we have the lives of a thousand people a year, plus the suffering of many more. On the other, we have the (still entirely theoretical) fear that maybe someone might both really not want their organs given away, but apparently not enough to sign a form saying so, and so would be really upset about losing their organs if they were able to be upset about things which they're not because they happen to be dead at the time.

Remember back in 3.5, when I said that the more useless an option, the better signaling opportunity it provides? Well, being against opt-out organ donations makes a heckuva signaling opportunity. So it's no surprise that professional ethicists, the people who have the most incentive to prove they're more moral than everyone else, have mostly come out against it. They are so very moral that they refuse to ever violate anyone's hypothetical preference, even if they are dead and didn't care enough to sign a piece of paper and relaxing the rules this one time would save a thousand lives a year. Are they great ethicists, or what?

Well, if you've read the rest of this FAQ, hopefully you will answer “what", which makes you better than much of the academic ethicist community, the government, and the voting public.

Yes, a simple common-sense intervention to save a thousand lives a year has not been tried because people are insufficiently consequentialist. This is not nearly the end of the low-hanging fruit available by getting a saner moral system.

8.4: I am interested in learning more about utilitarianism. Where can I do so?

Less Wrong is a great community full of some very smart people where utilitarianism is often discussed. Felicifia is a community specifically about utilitarianism, although I have not been there much and cannot vouch for it. And Giving What We Can is an amazing utilitarianism-oriented group with an almost militant approach to efficient charitable giving.

Derek Parfit's Reasons and Persons and Gary Drescher's Good and Real are two excellent books about morality that consequentialists might find useful.

And game theory and decision theory are two peripheral fields that often come up in consequentialist systems of morality.

Wikipedia also contains discussion of and further links about consequentialism and utilitarianism.

8.5: I have a question or comment about, or a rebuttal to, this FAQ. Where should I send it?

scott period siskind at-symbol gmail period com should work, but be aware I am terrible about replying to email in a timely fashion/at all.