By Simon Knutsson
Unpublished working paper
Last update: 12 Sep. 2017
The most common argument against negative utilitarianism is the world destruction argument, according to which negative utilitarianism implies that if someone could painlessly kill everyone or destroy the world, it would be her duty to do so. Those making the argument often endorse some other form of consequentialism, usually traditional utilitarianism, despite the fact that there are similar arguments against such theories. I investigate whether negative or traditional utilitarianism is more vulnerable to world destruction-like arguments, and conclude that such arguments are roughly as persuasive against negative utilitarianism as against traditional utilitarianism. Those who make the world destruction argument against negative utilitarianism, while being sympathetic to another form of consequentialism or a morality with a consequentialist element, should explain why their theory or morality is less vulnerable than negative utilitarianism to such arguments.
- 1 Introduction
- 2 Elimination-like Arguments and Types of Replies to Them
- 3 Realistic and Unrealistic Cases
- 4 Indirect Act-utilitarianism
- 5 Is Killing Everyone More Likely to Become Optimal from a Negative or a Traditional Utilitarian Perspective?
- 6 Palatability of the Purported Implications
- 7 Conclusions and Future Research
- 8 Notes
Negative utilitarianism is often understood as the moral theory whose only prescription is that we should minimize suffering or negative well-being, and that is the conception I will assume here.1 The most discussed argument against negative utilitarianism is roughly this: negative utilitarianism implies that one should kill all humans or all sentient life, or destroy the world, if one had the opportunity. Such an action would be ‘wicked,’ and hence the plausibility of negative utilitarianism is undermined.2 I call this ‘the world destruction argument,’ but will for brevity’s sake mostly refer to it as ‘the elimination argument.’3
In 1955, Ingemar Hedenius made this argument in Swedish against his own form of consequentialism, according to which some evils cannot be counterbalanced by goods. An English formulation followed in 1958 by R. N. Smart who argued against negative utilitarianism.4 The argument is often mentioned in applied and interdisciplinary writings, and it has been endorsed by philosophers such as J. J. C. Smart, Mario Bunge, David Heyd, Gustaf Arrhenius and Krister Bykvist; as recently as 2013 by Toby Ord; and in 2015, it appears, by Torbjörn Tännsjö.5
Those making the argument often express sympathy for some other form of consequentialism, usually traditional utilitarianism – that is, some form of utilitarianism in which happiness and suffering have equal weight or importance. Similar arguments, however, have been made against such theories as well. For example, in 1984 Dale Jamieson wrote the following about traditional utilitarianism and killing everyone:
Many philosophers have rejected TU [total utilitarianism] because it seems vulnerable to the Replacement Argument and the Repugnant Conclusion…. The Replacement Argument purports to show that a utilitarian cannot object to painlessly killing everyone now alive, so long as they are replaced with equally happy people who would not otherwise have lived.6
It is peculiar that the world destruction argument continues to be used against negative utilitarianism, while the fact that similar arguments have been made against traditional utilitarianism – and that new such arguments could easily be formulated against traditional utilitarianism, other forms of consequentialism and even against some moralities that merely contain a consequentialist component – is rarely mentioned. People with such moralities who wish to use the world destruction argument against negative utilitarianism thus need to explain why their morality is less vulnerable to similar arguments.
In this paper, I investigate which form of utilitarianism – negative or traditional – is more vulnerable to elimination-like arguments, an investigation that no one has done before. I conclude that they are roughly in the same boat: elimination-like arguments are roughly as persuasive against negative utilitarianism as against traditional utilitarianism.
To be clear, no one in the philosophical literature has, to my knowledge, claimed that negative utilitarianism implies that a regular person in our world is obliged to try to kill everyone. Rather, the elimination argument is usually phrased in terms of what negative utilitarianism would purportedly imply in a hypothetical scenario in which someone has access to a weapon that could kill everyone painlessly.
I will focus on negative versus traditional total act-utilitarianism for simplicity, and because these theories have primarily been contrasted in previous discussions of the elimination argument. I will understand the theories as the following criteria of rightness:
Negative total act-utilitarianism: An act is right if and only if it results in a sum of negative well-being that is at least as small as that resulting from any other available act.
Traditional total act-utilitarianism: An act is right if and only if it results in a sum of well-being (positive well-being minus negative well-being) that is at least as great as that resulting from any other available act.
Since it matters little to my main points whether one formulates these theories in terms of actual or expected results, I will sometimes speak of expected results or expected value, and sometimes simply results. For simplicity, I will often speak of happiness and suffering instead of positive and negative well-being, and will concentrate on individual rather than group agents.
Elimination-like Arguments and Types of Replies to Them
Others have mentioned at least five types of cases about killing everyone related to negative or traditional utilitarianism. I here rephrase them somewhat and give them new names (except the name ‘Elimination,’ which is the term used by Arrhenius and Bykvist).7 The following are the two cases against negative utilitarianism:
Elimination: Someone can painlessly kill all humans or all sentient beings on Earth. Negative utilitarianism implies that it would be right to kill everyone.8
Paradise with Suffering: The world has become a paradise, yet would contain some (possibly mild and brief) suffering if it remained. Someone can instantly and painlessly kill everyone in this paradise, which negative utilitarianism implies that it would be right to do.9
The following are the three cases against traditional utilitarianism:
Traditional Elimination: Someone can painlessly kill all humans or all sentient beings on Earth. The sum of positive and negative well-being in the future would be negative, regardless of which other available act she would perform. Traditional utilitarianism implies that it would be right to kill everyone.10
Suboptimal Earth: Someone can kill all humans or all sentient beings on Earth and replace us with new sentient beings such as genetically modified biological beings, brains in vats or sentient machines. The new beings could come into existence on Earth or elsewhere. The future sum of well-being would thereby become (possibly only slightly) greater. Traditional utilitarianism implies that it would be right to kill and replace everyone.11
Suboptimal Paradise: The world has become a paradise with no suffering. Someone can kill everyone in this paradise and replace them with beings with (possibly only slightly) more total well-being. Traditional utilitarianism implies that it would be right to kill and replace everyone.12
I call these five and similar cases or arguments ‘world destruction-like’ and ‘elimination-like.’ One can adjust these cases and add details to make it more plausible that killing everyone would be the optimal act from the perspective of the form of utilitarianism being considered. One can, for example, add that the agent would not get caught, that she has few other attractive options and so on. I have formulated the cases in terms of killing everyone and omitted world destruction because it is simpler and more realistic, and because this omission matters little when one compares negative and traditional utilitarianism, since they share the idea that only well-being has final value.
Elimination-like cases have been discussed little compared to other, well-known, smaller-scale counterexamples to utilitarianism – such as the doctor who can kill one patient to harvest her organs and give them to five others, the sheriff who can frame and execute an innocent person to prevent riots, and replacement cases such as killing infants or non-human animals as long as they can be replaced with new individuals with at least as much well-being.13
Replying to these smaller-scale cases, utilitarians have given at least four kinds of arguments, upon which I will base my investigation. These are: argue that the cases are unrealistic and therefore lack force as objections;14 endorse indirect act-utilitarianism;15 argue that killing would not be the optimal act in real life;16 or argue that killing would be less unpalatable than the other available actions or the implications of competing moral theories.17
Over the next four sections, I deal in turn with these kinds of replies – including combinations thereof – to elimination-like arguments, as I compare negative and traditional utilitarianism.
Realistic and Unrealistic Cases
To claim that counterexamples lack force as objections if they are unrealistic is an important part of a reply to elimination-like cases. Otherwise, one could simply stipulate scenarios such that killing everyone would be optimal.
What makes a case unrealistic depends on why unrealistic cases are supposed to have less or no force. Jakob Elster distinguishes between two reasons against the use of outlandish thought experiments: ‘1) Since moral principles are meant for guiding action in this world, cases drawn from other worlds are irrelevant. 2) We lack the capacity to apply our intuitive moral competence to outlandish cases.’18
With regard to the second reason, a plausible view is that all five elimination-like cases are realistic in the sense that we can apply our moral competence to them. A traditional utilitarian may object that we cannot apply our moral competence to Suboptimal Paradise and Suboptimal Earth, because we do not grasp the extraordinary happiness or the large number of the entities that would replace us. I do not find that objection persuasive, but, in any case, a negative utilitarian can give a similar reply to Elimination: we do not grasp the extraordinary suffering or the large number of the beings that would suffer in the future if we survive.
Regardless, one could phrase the cases against traditional and negative utilitarianism so that they only assume typical human levels of happiness or suffering and stipulate a lower number of beings – and still paint hypothetical scenarios wherein killing everyone would purportedly be the optimal act from both traditional and negative utilitarian perspectives.
The reply that some elimination-like cases are too unrealistic is more convincing, assuming that moral principles are meant for guiding action in our particular world. What, then, makes a case unrealistic? When R. M. Hare replies to small-scale counterexamples to utilitarianism, he speaks of such cases being unlikely to occur, so I will understand ‘unrealistic’ as unlikely to occur.19 By ‘occur,’ I mean that at some point in time, at least one agent is in a situation wherein the described act is both available and optimal according to the form of utilitarianism under consideration.
I am not aware of any suggested probability threshold beyond which a case is considered too improbable to have force as an objection. However, based on Hare’s view on other cases, the elimination-like cases in which the world has become a paradise would presumably count as so unlikely to occur as to be disregarded.20 For example, in Paradise with Suffering, a negative utilitarian can argue that it is unlikely that the world will become a paradise – and even if it did, it would, in real life, be wrong to try to instantly and painlessly kill everyone in paradise to avoid some minor suffering. After all, the potential gain in terms of suffering reduced is small compared to the potential loss: there would be a sufficient risk that one would fail and turn an almost perfect outcome into a disaster. Hereafter, I will mainly consider real-world cases.
The second kind of reply to elimination-like arguments is to endorse indirect act-utilitarianism, which combines the act-utilitarian criterion for the rightness of acts with the idea that we should generally not think in act-utilitarian terms when conducting our lives and deciding what to do in particular cases. Instead, to indirectly optimize the results of our actions, we should perhaps internalize deontological moral rules and develop various character traits. In this view, a sensible person may not even consider killing everyone as an option, or may not kill because it conflicts with her moral feelings.21
The appeal to indirect act-utilitarianism is a response to allegations such as that a utilitarian who thinks properly about what to do would kill everyone. It is not, however, a satisfactory reply. Even if it were optimal if people generally had dispositions and internalized rules such that they would never kill everyone, what dispositions and rules one should adopt to indirectly optimize the results of one’s acts varies by person and time. It is plausible that if killing everyone would be optimal considered in isolation, then it would be optimal for at least someone – say, a president, dictator or corporate leader – to be prepared to kill everyone in special circumstances when vast amounts of well-being are at stake. Moreover, indirect act-utilitarianism does not imply that one should never make act-utilitarian calculations when deciding what to do in particular cases, and the huge-stakes choices in the elimination-like cases are strong candidates for situations wherein calculating would indeed be optimal.
Is Killing Everyone More Likely to Become Optimal from a Negative or a Traditional Utilitarian Perspective?
The third reply to elimination-like cases is to argue that the purportedly wicked acts would not be optimal in real life. That is, one could argue that the purportedly wicked acts would not result in the smallest sum of suffering or the greatest sum of well-being if one is defending negative or traditional utilitarianism, respectively. Since I am comparing these two forms of utilitarianism, I will focus on whether it is more likely that a real-world situation will occur in which it is optimal to perform a purportedly wicked elimination-like act according to negative utilitarianism, as opposed to the analogous case for traditional utilitarianism.
Regarding cases against traditional utilitarianism, I will focus on versions of Suboptimal Earth. This appears to be a stronger objection to traditional utilitarianism than Traditional Elimination, and a traditional utilitarian can respond to Traditional Elimination by arguing that the expected sum of well-being in the future is positive (itself a matter of debate beyond the scope of this paper). Against negative utilitarianism, I will focus on Elimination. To make the cases more realistic, I will drop the assumption that the killings would be painless.
Wild Animals, Evolution and Space
Several existing ideas speak against the probability that a real-world situation will occur in which negative utilitarianism implies that it is optimal to kill all humans or all sentient beings on Earth. If merely all humans died, there would be room for more suffering wild animals,22 and humans would no longer be able to reduce wild animal suffering, which we may do if we survive.23 If all sentient beings on Earth died, beings that suffer could still evolve again on Earth.24 In addition, from a negative utilitarian perspective, a key risk of there being no humans on Earth is that there may be suffering in other parts of the universe that we may reduce if we survive;25 or, at the least, our spreading through space may result in less suffering than if spacefaring aliens do it instead of us.26 Similarly, if all humans or all sentient beings on Earth were killed, a new spacefaring civilization may eventually develop on Earth, and if it were to colonize space, it is an open question whether it would result in more suffering than if we were to do it instead.27 Perhaps most exotically, if we are not killed, humans or our descendants may reduce the number of universes that come into existence naturally in, for example, a multiverse.28
There are also counter-considerations, however. For instance, human extinction would imply that we would not multiply suffering beyond Earth by colonizing space or, more speculatively, by creating new universes.29 At least the killing of all humans, and more so the killing of all sentient life on Earth, would presumably reduce the likelihood of such space endeavors, because a new spacefaring civilization might not have time to evolve before Earth becomes uninhabitable to such life forms.
In light of these considerations, my guess is that all sentient beings on Earth, and even merely all humans, dying now would result in less expected suffering.30 But these considerations are speculative and inconclusive, and there is a need for more analysis – especially of whether and how much we would reduce or increase suffering beyond Earth if we survive.
One could argue, both from a negative and a traditional utilitarian perspective, that killing everyone on Earth is unlikely to become optimal in real life partly because it would result in negative well-being among us or other beings that have lived on Earth, for example, because some deaths would be painful or because aversions to getting killed would be fulfilled.31 I do not find this argument especially convincing, regardless of whether it is used in defense of negative or traditional utilitarianism.
My first reason is the utilitarian urgency and importance of cosmic stakes.32 It may be optimal, both from a negative and traditional utilitarian perspective, to essentially ignore the well-being and death among sentient beings on Earth today and instead focus available time and resources on stakes at the level of galaxy groups, the universe, or the multiverse. It is debatable whether this would render it optimal to run us over or use up resources so that we starve, or to let us be and focus on more important matters beyond Earth. In any case, from a cosmic perspective, it seems our tiny amount of well-being would have little impact on this calculation.
My second reason is that even if our well-being is important enough to warrant resource investment, there remains the open question of whether it would be more efficient to improve our well-being or kill us. From a traditional utilitarian perspective, there is a pressure to optimize towards more well-being. If, for whatever reason, we could not become sufficiently effective happiness producers, or if it would be too costly to turn us into that, there would be a pressure from a traditional utilitarian perspective not to forfeit the happiness that could otherwise be produced by other entities if we were killed and replaced by them.33 From a negative utilitarian perspective, it may be optimal to let us live and phase out suffering through, for example, genetic engineering,34 unless that is too costly or difficult, in which case killing us may be the cheaper and simpler solution.
All in all, from both a negative and a traditional utilitarian perspective, considerations about the well-being of current sentient beings on Earth, for its own sake, seem to lend little support to the purported unlikelihood of eliminating everyone someday becoming the optimal choice. At any rate, my case only needs the weaker conclusion that the force of such considerations is similar whether we are discussing negative or traditional utilitarianism, which appears plausible.
Letting Us Live for Tactical Reasons
Naturally, in the real world, there are strong tactical reasons from both a negative and a traditional perspective to compromise and accommodate others’ wishes, partly in order to increase the chances that one at least accomplishes one’s most important goals.35 This speaks against elimination as a plausible optimal action in real life from either perspective. Our question thus becomes how likely it is that a situation will occur in which it is optimal to deviate from this generally plausible strategy of being nice. We would be looking for a realistic situation in which the normal tactical reasons to accommodate others’ values do not apply to some specific agent. Once again, I want to note that similar considerations come into play whether we investigate the implications of negative or traditional utilitarianism.
Regardless of tactical considerations among us on Earth, one can argue that killing everyone on Earth is unlikely to be optimal for multiverse-wide cooperative reasons. This argument assumes a non-causal decision theory and goes roughly as follows: It is sufficiently likely that there are many agents relevantly similar to us, such as agents in another universe or in remote parts of a spatially (or temporally) infinite universe. The argument does not rest on any causal interaction among them or between them and us. When calculating the expected value of an available act – say, killing everyone on Earth – one should account for the information that one (hypothetically) makes that choice. Your hypothetical choice would then amount to one data point about how relevantly similar agents may act in relevantly similar situations. If the act under consideration is to kill, one should thus take into account that choosing to kill should increase one’s estimate of the likelihood that other relevantly similar agents would act similarly – in particular, that they would disregard others’ disapproval of their acts. This can reduce the expected value of the act, because it increases one’s subjective likelihood that others will disregard one’s own disapproval of various acts. One’s act is only one data point, so the update to one’s subjective likelihood that others will behave similarly may be modest – but the effect on one’s calculation of expected value can be big if there are sufficiently many relevantly similar agents who make choices with sufficiently high stakes from one’s perspective.36
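To make the structure of this expected-value reasoning explicit, consider a toy sketch (the symbols are illustrative assumptions of mine, not part of the argument as stated in the literature). Let N be the number of relevantly similar agents, let s be the stake, from one’s own evaluative perspective, of each of their choices, and suppose that one’s (hypothetical) choice to kill raises one’s credence that any given similar agent acts likewise from p to p + δ. The evidential component of the change in expected value is then roughly

ΔEV ≈ N · δ · s.

A single data point may yield only a modest update δ, yet ΔEV can still dominate one’s calculation when N and s are sufficiently large – which is the sense in which the effect on one’s expected-value calculation ‘can be big.’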
This argument can seemingly be used to defend negative and traditional utilitarianism roughly equally well against elimination-like arguments. If someone holds that the argument is a more successful defense of one theory over another, that case remains to be heard.
Methods of Killing
A common view seems to be that killing everyone without replacement is realistic, while killing everyone and replacing us with more well-being is science fiction, but there are reasons to doubt this view. In defense of negative utilitarianism, one can argue that it is unlikely to become optimal to try to kill everyone because it would be exceedingly difficult to kill all known humans or sentient beings without replacing us with something that results in more suffering. Even a nuclear war, it appears, would not kill all humans,37 and cobalt bombs are apparently not the doomsday machines they are sometimes made out to be.38 Regarding whether an engineered pathogen could ‘wipe out all of humanity,’ Anthony Fauci, Director of the National Institute of Allergy and Infectious Diseases, says, ‘It would be very, very difficult to do that.’39 Another doomsday scenario, based on nanotechnology, involves runaway self-replicating nanorobots – so-called ‘grey goo’ – killing all humans or even consuming the biosphere; however, according to the Center for Responsible Nanotechnology, ‘goo would be extremely difficult to design and build.’40 Besides goo, nanotechnology could potentially be used to create new weapons of mass destruction,41 but it appears challenging to create such weapons that would circumvent all countermeasures and back-ups and hence kill all humans.
What about death by artificial intelligence (AI)? Brian Tomasik writes that ‘the only known technological development that is highly likely to cause all-out human extinction is AGI [artificial general intelligence].’42 (Roughly speaking, AGI refers to an AI with at least human-level intelligence in a wide range of areas.) However, extinction by AGI differs crucially from the aforementioned extinction scenarios in that it carries a higher likelihood of humans being replaced by vastly more potentially sentient beings beyond Earth.43 One reason to believe this is the widespread and plausible idea that an AGI would expand beyond Earth in order to acquire more resources for pursuing its final goals.44 This risk of an astronomical increase in suffering speaks against the likelihood that killing everyone using an AGI will become optimal from a negative utilitarian perspective. Similar reasons apply from a traditional utilitarian perspective: developing or unleashing an AGI that will kill everyone involves the risk that something might fail, thus preventing vast amounts of well-being from being created beyond Earth and potentially even causing vast amounts of suffering to be created instead. It is plausible that a careful, peaceful approach to AGI will be optimal from a traditional utilitarian perspective. One can still argue, however, that if the future develops such that an agent can be sufficiently confident that an AGI will act in line with certain values, it may in some realistic scenarios become optimal for that agent – from either a negative or traditional utilitarian perspective – to cause an AGI to kill everyone on Earth. I suppose a case could be made that such a scenario is more or less likely to occur from a negative or traditional utilitarian perspective, but this would require a more detailed analysis of AI scenarios than I have space for here.
More Realistic Cases
Some may argue that the cases discussed thus far are too unrealistic because it is so unlikely that anyone would end up in a position wherein any of the purportedly wicked actions would be optimal. However, one can construct more realistic cases against both negative and traditional utilitarianism. Some cases could be about contributing to research or development of technology that would increase the likelihood of elimination-like outcomes. Even more realistic cases concern only exploring different elimination-like options, such as merely analysing or thinking about them. When doing pairwise comparisons of such realistic cases that one can formulate against negative and traditional utilitarianism, they seem to me roughly equally likely to occur.
All in all, it seems roughly as likely that killing everyone without replacement will become optimal in the real world from a negative utilitarian perspective as it is that killing everyone and replacing us with more well-being will become optimal from a traditional utilitarian perspective. The same goes for more realistic cases that involve purportedly wicked elimination-like acts, such as merely exploring different elimination-like options.
Be that as it may, my most important point in this section is the following: There are, with regard to both negative and traditional utilitarianism, many complicated considerations for and against the plausibility that killing everyone will ever become optimal for an agent in real life. If someone argues that traditional utilitarianism is more plausible than negative utilitarianism because negative utilitarianism more probably implies that killing everyone will become optimal in real life, she needs to explain specifically why that is so.
Palatability of the Purported Implications
Elimination-like cases are meant to show that a moral theory has unpalatable implications, in the sense that it implies that it would be right to perform a wicked or unpalatable act. Are the acts in elimination-like cases against negative utilitarianism more unpalatable than the acts in such cases against traditional utilitarianism? To answer this question, we can make pairwise comparisons of analogous cases.
Disregarding whether either case is unrealistic, we can compare painlessly killing everyone in paradise in order to avoid some minor negative well-being with painlessly killing everyone in paradise and replacing them with new beings with slightly more positive well-being in total. Killing in these two cases seems roughly equally absurd and unpalatable.
What if we formulate some of the most unpalatable cases against the theories, still disregard whether or not they are realistic, and then attempt a comparison? Against traditional utilitarianism, we can imagine the act of killing everyone on Earth in gruesome ways; destroying everything that we typically care about; creating vast numbers of beings with extremely negative well-being, such that the amount of suffering becomes vastly greater than it otherwise would have been; and creating sufficiently many beings with positive well-being, such that the sum of well-being is positive and slightly greater than it otherwise would have been. Against negative utilitarianism, we can similarly imagine killing everyone on Earth in gruesome ways, destroying everything that we typically care about, and creating vast numbers of beings with extremely negative well-being in order to kill and thereby reduce the suffering of other beings beyond Earth – such that the sum of positive well-being ends up close to zero, and the sum of negative well-being becomes slightly smaller than it otherwise would have been. Of these two, I find the case against traditional utilitarianism much more unpalatable, as it would vastly increase the number of beings with extremely negative well-being just to slightly increase the surplus of positive well-being. At least the negative utilitarian can, in her defense, claim that the act would be a regrettable choice of the lesser evil to reduce the overall amount of suffering – but this defense is not available to the traditional utilitarian.
Let us turn to more realistic cases, which arguably carry more weight when we test moral theories. I will assume that being killed would be painful in each of these scenarios.
If the sum of well-being in the future would be negative if we survived, then both negative and traditional utilitarianism are open to the objection that they imply that it would be right to simply kill everyone without replacement. This would be similarly unpalatable, regardless of which theory the act is purportedly an implication of.
If, on the contrary, the sum of well-being in the future would be positive if we survived, traditional utilitarianism is no longer open to that allegation, so we need to consider acts like killing everyone and replacing us with beings with more well-being. How unpalatable that would be depends on the details about, for example, the amount of suffering produced along the way. I will compare two of the more realistic cases. One case against traditional utilitarianism could involve contributing to research or development of technology, such as an above-human intelligence, that may lead to everyone being killed, vast amounts of suffering being created or vast amounts of happiness being created (or a combination of the three). How unpalatable this act would be depends on details about the outcomes and their likelihoods. However, gambles that could reasonably still be optimal from a traditional utilitarian perspective, despite involving a substantial risk of bringing about vast amounts of negative well-being, strike me as no less unpalatable than – indeed, as more unpalatable than – a comparably realistic case against negative utilitarianism, such as contributing to the development of technology that increases the likelihood of everyone getting killed without replacement.
Overall, in some of the pairwise comparisons of elimination-like cases against negative and traditional utilitarianism that I have considered, the palatability is roughly equal, and in several of the comparisons the cases against traditional utilitarianism are more unpalatable, sometimes much more so. However, I do not want to make much of these judgements of mine. My point in this section is that it is not obvious that the purported elimination-like implications of negative utilitarianism are more unpalatable than those of traditional utilitarianism.
Conclusions and Future Research
World destruction- or elimination-like arguments exist against both traditional and negative utilitarianism, and a number of replies are available to both theories. I have not found any of these replies more convincing when offered in defense of traditional utilitarianism than when offered in defense of negative utilitarianism. At any rate, anyone making the world destruction argument (or similar arguments such as Paradise with Suffering) against negative utilitarianism, while being sympathetic to another form of consequentialism or a morality with a consequentialist element, should explain why their theory or morality is less vulnerable to elimination-like arguments than negative utilitarianism.
I have focused on world destruction-like arguments, but one can make similar analyses of other common objections to negative utilitarianism, especially when those who offer the objections are sympathetic to consequentialism. For example, the second most common objection to negative utilitarianism seems to be that it purportedly implies that one has no obligation to raise the happiness of many individuals, prevent a decrease in their happiness, or bring into existence many new happy beings, even if the cost of doing so would be zero or trivial.45 Analyzing this objection along the lines of this paper could involve questioning whether negative utilitarianism in the real world has the implications that the objection claims. It seems, for example, plausible that negative utilitarianism implies that one should in general increase others’ happiness if it could be done at no or trivial cost, partly because those who are happier tend to suffer less. Moreover, one could consider indirect negative act-utilitarianism, according to which it would presumably be right for people in general to develop dispositions towards increasing others’ happiness if one can do so at low cost. Such an analysis of this objection to negative utilitarianism would, I expect, conclude that a good response could be given based on cooperation reasons, or along the following lines: if the objector wants to avoid unrealistic counterexamples to her own morality, she must accept that such examples need to be realistic, and then the negative utilitarian can reply that it is unrealistic that one could bring about the increase in, or prevent the decrease in, positive well-being at no or trivial cost, where cost is understood as the suffering the agent could instead have prevented.46
1 This is a strong form of negative utilitarianism, because the only prescription is the reduction of negative well-being. Weak versions of negative utilitarianism give weight to both positive and negative well-being, but more weight to negative well-being. See James Griffin, ‘Is unhappiness morally more important than happiness?’, Philosophical Quarterly 29, 114 (1979): 47–55; Gustaf Arrhenius and Krister Bykvist, ‘Future generations and interpersonal compensations: Moral aspects of energy use’ (Uppsala, 1995).
2 R. N. Smart, ‘Negative utilitarianism’, Mind 67, 268 (1958): 542–3, at p. 542.
3 The phrase ‘the elimination argument’ is from Arrhenius and Bykvist op. cit., p. 31. They also direct the argument against a weak form of negative utilitarianism that gives lexical weight to suffering (p. 40).
4 Ingemar Hedenius, Fyra dygder (Stockholm: Albert Bonniers Förlag, 1955), pp. 45, 100–5; R. N. Smart op. cit.
5 J. J. C. Smart, ‘An outline of a system of utilitarian ethics’ in J. J. C. Smart and B. Williams (eds.) Utilitarianism: For and against (London: Cambridge University Press, 1973): 3–74, at p. 29; Mario Bunge, Treatise on Basic Philosophy: Volume 8: Ethics: The Good and the Right, 1st edition (Dordrecht: Springer, 1989), viii, p. 230; David Heyd, Genethics: Moral Issues in the Creation of People (Berkeley: University of California Press, 1992), p. 60; Arrhenius and Bykvist op. cit., sec. 4.2; Toby Ord, ‘Why I’m Not a Negative Utilitarian’, 2013 <http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/>; Torbjörn Tännsjö, ‘Utilitarianism or prioritarianism?’, Utilitas 27, 2 (2015): 240–250. Strictly speaking, Heyd does not mention killing others as a path to ‘the painless annihilation of all humanity’; only collective suicide and abstention from procreation. Tännsjö appears to endorse R. N. Smart’s elimination argument against negative utilitarianism because he mentions it and shortly thereafter concludes that one should not give suffering lexical weight (pp. 243–4).
6 Dale Jamieson, ‘Utilitarianism and the Morality of Killing’, Philosophical Studies 45, 2 (1984): 209–21, at p. 218.
7 Arrhenius and Bykvist op. cit., p. 31.
8 Hedenius op. cit., pp. 45, 100–5; R. N. Smart op. cit.
9 David Pearce, ‘The pinprick argument’, 2005 <https://www.utilitarianism.com/pinprick-argument.html>.
10 J. J. C. Smart writes, ‘A classical utilitarian could be a benevolent world exploder only if he or she were a pessimist who, like Schopenhauer, believed that sentient beings inevitably, or perhaps even for the most part, are more miserable than happy.’ J. J. C. Smart, ‘Negative Utilitarianism’ in F. D’Agostino and I. C. Jarvie (eds.) Freedom and Rationality (Reidel, 1989): 35–46, at p. 43. This statement is problematic, as in order for classical and traditional utilitarianism to imply that killing everyone would be right, currently existing beings need not be more miserable than happy, and nothing close to Schopenhauer’s pessimism is required.
11 Jamieson op. cit., p. 218; David Pearce, ‘Unsorted postings’, 2013, sec. On classical versus negative utilitarianism <https://www.hedweb.com/social-media/pre2014.html>.
12 Pearce, ‘Unsorted Postings’ op. cit., sec. On classical versus negative utilitarianism.
13 Judith Jarvis Thomson, ‘Killing, letting die, and the trolley problem’, The Monist 59, 2 (1976): 204–17, at p. 206; H. J. McCloskey, ‘An Examination of Restricted Utilitarianism’, The Philosophical Review 66, 4 (1957): 466–85, at pp. 468–9; Tim Mulgan, Understanding Utilitarianism (Stocksfield: Acumen, 2007), pp. 94–5; Evelyn Pluhar, ‘Utilitarian killing, replacement, and rights’, Journal of Agricultural Ethics 3, 2 (1990): 147–71.
14 T. L. S. Sprigge, ‘A utilitarian reply to Dr. McCloskey’, Inquiry 8, 1–4 (1965): 264–91.
15 R. M. Hare, Moral Thinking: Its Levels, Method, and Point (Oxford: Clarendon, 1981), pp. 132–5.
16 Sprigge op. cit., pp. 275–8.
17 J. J. C. Smart, ‘An Outline of a System of Utilitarian Ethics’ op. cit., pp. 71–3.
18 Cf. Jakob Elster, ‘How Outlandish Can Imaginary Cases Be?’, Journal of Applied Philosophy 28, 3 (2011): 241–58, at p. 241.
19 Hare op. cit., pp. 134, 163–4.
20 Hare op. cit., pp. 133–4, 163–4.
21 Cf. Hare op. cit., p. 135.
22 See ‘Strategic Considerations for Moral Antinatalists’, Essays on Reducing Suffering <http://reducing-suffering.org/strategic-considerations-moral-antinatalists/>.
23 Magnus Vinding, Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim for Extinction (Smashwords, 2015).
24 H. B. Acton and J. W. N. Watkins, ‘Symposium: Negative Utilitarianism’, Aristotelian Society Supplementary Volume 37 (1963): 83–114, at p. 96; J. J. C. Smart, ‘Negative Utilitarianism’ op. cit., p. 44.
25 E.g. Pearce, ‘Unsorted Postings’ op. cit., sec. On utilitronium shockwaves versus gradients of bliss. It says, ‘one might naively suppose that a negative utilitarian would welcome human extinction. But … only (trans)humans – or rather our potential superintelligent successors – are technically capable of assuming stewardship of our entire Hubble volume.’ Similar, earlier ideas can be found in David Pearce, ‘The hedonistic imperative’, 1995, chap. 4, objection no. 32 <http://www.hedweb.com/>.
26 Brian Tomasik, ‘Risks of Astronomical Future Suffering’ Foundational Research Institute, 2016, sec. What if human colonization is more humane than ET colonization? <https://foundational-research.org/risks-of-astronomical-future-suffering/>.
27 Brian Tomasik, ‘How would catastrophic risks affect prospects for compromise?’ Foundational Research Institute, 2017, sec. Might humans be replaced by other species? <https://foundational-research.org/how-would-catastrophic-risks-affect-prospects-for-compromise/>.
28 Brian Tomasik, ‘Lab universes: Creating infinite suffering’ Essays on Reducing Suffering, 2017 <http://reducing-suffering.org/lab-universes-creating-infinite-suffering/>.
29 E.g. Tomasik, ‘Risks of Astronomical Future Suffering’ op. cit.; Brian Tomasik, ‘Applied Welfare Biology and Why Wild-Animal Advocates Should Focus on Not Spreading Nature’ Essays on Reducing Suffering, 2016, sec. Summary <http://reducing-suffering.org/applied-welfare-biology-wild-animal-advocates-focus-spreading-nature/>; Tomasik, ‘Lab Universes’ op. cit.
30 David Pearce agrees (in emails to the author on June 6 and 8, 2017) that the immediate death of all sentient beings on Earth would result in less expected suffering.
31 Cf. Arrhenius and Bykvist op. cit., pp. 31–2.
32 See e.g. Nick Bostrom, ‘Astronomical waste: The opportunity cost of delayed technological development’, Utilitas 15, 3 (2003): 308–14.
33 David Pearce makes a similar point in Pearce, ‘Unsorted Postings’ op. cit., sec. On classical versus negative utilitarianism.
34 Pearce, ‘The Hedonistic Imperative’ op. cit.
35 Tomasik, ‘Risks of Astronomical Future Suffering’ op. cit., sec. Why we should remain cooperative.
36 I have formulated the argument based on ideas in Caspar Oesterheld, ‘Multiverse-wide Cooperation via Correlated Decision Making’ Foundational Research Institute, 2017 <https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/>.
37 Brian Martin, ‘Critique of Nuclear Extinction’, Journal of Peace Research 19, 4 (1982): 287–300; Alan Robock, ‘Nuclear winter’, Wiley Interdisciplinary Reviews: Climate Change 1, 3 (2010): 418–27, at p. 424.
38 Desmond Ball, The Probabilities of On the Beach: Assessing ‘Armageddon Scenarios’ in the 21st Century (Canberra, A.C.T.: Strategic and Defence Studies Centre, The Australian National University, 2006), p. 2; Edward Moore Geist, ‘Would Russia’s undersea “doomsday drone” carry a cobalt bomb?’, Bulletin of the Atomic Scientists 72, 4 (2016): 238–42, at pp. 239–41.
39 Joe Fiorill, ‘Top U.S. Disease Fighters Warn of New Engineered Pathogens but Call Bioweapons Doomsday Unlikely | Analysis | NTI’ Nuclear Threat Initiative, 2005 <http://www.nti.org/gsn/article/top-us-disease-fighters-warn-of-new-engineered-pathogens-but-call-bioweapons-doomsday-unlikely/>.
40 Center for Responsible Nanotechnology, ‘Nanotechnology: Grey Goo is a Small Issue’ <http://crnano.org/BD-Goo.htm>.
41 Mike Treder and Chris Phoenix, ‘Nanotechnology and Future WMD’ Center for Responsible Nanotechnology, 2006 <http://crnano.org/Paper-FutureWMD.htm>.
42 Tomasik, ‘How Would Catastrophic Risks Affect Prospects for Compromise?’ op. cit., sec. Most catastrophic risks would not cause extinction.
43 Tomasik, ‘How Would Catastrophic Risks Affect Prospects for Compromise?’ op. cit.
44 E.g. Stephen M. Omohundro, ‘The nature of self-improving artificial intelligence’, 2008, sec. 6 <https://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf>.
45 Griffin op. cit., p. 48; Thomas Hurka, ‘Asymmetries in value’, Noûs 44, 2 (2010): 199–223, at p. 200; Krister Bykvist, Utilitarianism: A Guide for the Perplexed (London; New York: Bloomsbury Academic, 2010), p. 62.
46 I am grateful for comments on earlier versions of this paper from Lars Bergström, Erik Carlson, Ruairí Donnelly, Oscar Horta, Jens Johansson, Kaj Sotala, Johannes Treutlein, Torbjörn Tännsjö, and especially Krister Bykvist, Max Daniel, Brian Tomasik and Magnus Vinding. The paper has benefited from correspondence with Tobias Baumann, Dale Jamieson, Caspar Oesterheld and David Pearce. Thanks to Adrian Rorheim for copy editing. I did the early stages of the work on the paper when I was employed by the Foundational Research Institute. Views expressed are not necessarily those of the institute.