Moral Responsibility for Outcomes in Collective Harm Cases


Most would agree that you could be blameworthy for causing climate change if you go for a leisure drive in a gas-guzzling car. Still, how can we explain this intuition, given that your drive seems to make no morally relevant difference to climate change and its related harms? More generally, how can we explain that you might be morally responsible for an outcome in collective harm cases? These are cases where there will be a bad outcome, X, if enough acts of a certain kind, A, are performed, but where no act of A-ing makes outcome X worse. To answer this question, we assume that you are blameworthy for an outcome just in case your poor quality of will caused this outcome. More precisely, we assume that you are blameworthy for X rather than X* just in case (i) X is worse than X*, and (ii) there is a time t such that your poor quality of will at t in relation to X versus X* caused X rather than X*. Following authors such as Strawson (1962), McKenna (2012) and Björnsson (2014), we take having a poor quality of will to involve something like showing insufficient regard, or caring insufficiently, for something (e.g. an outcome). The problem is that most accounts of causation face counterexamples when combined with the idea that you are blameworthy for a bad outcome just in case your poor quality of will causes this outcome. They might, for instance, entail that a single drive in a gas-guzzling car does not cause climate change, and so that you cannot be blameworthy for the harms of climate change when going for such a drive. We consider different landmark accounts of causation and explain why they cannot be the accounts we are looking for. By using an alternative account of causation that one of us has developed elsewhere, we can avoid the difficulties the other accounts face.
Roughly, this account says that your poor quality of will caused an outcome X just in case (a) your poor quality of will is process-connected to X, and (b) in the closest-to-@-at-t world where you do not have a poor quality of will, X is less secure, and X* is more secure, than they are in @. Importantly, this account gives the right verdict in collective harm cases. Since it is disputed whether climate change is a threshold phenomenon (but cf. Broome 2019), and more generally since collective harm cases come in two varieties – threshold cases and non-threshold cases – we show that our account of blameworthiness gives the intuitively correct result in threshold cases (such as Björnsson’s 2014 The Lake) as well as in non-threshold cases (such as Parfit’s 1984 Drops of Water).


6 thoughts on “Moral Responsibility for Outcomes in Collective Harm Cases”

  1. Hindriks, Frank says:


    Dear Mattias,

    As I understand it, you rely on causal contribution as the causal criterion for blameworthiness. And you combine this with a requirement of a bad quality of will. I wonder how these two aspects combine. The underlying worry is that the account is too permissive and supports blaming people who should not be blamed.

    Does it make a difference, on your account, whether the harm can be avoided? Suppose people have announced a corona party. I follow it by looking at a webcam. And I see that at some point there are so many people that things will go terribly wrong if anyone carries the virus. At that point, the harm is done. So, I do not harm anybody by joining the party at that point, which suggests that I am not to blame either. Can your account allow for this? Or does it entail that I must have a bad quality of will because I knowingly participate in a harmful event?

    Best wishes,

    1. Gunnemyr, Mattias says:

      Dear Frank,

      The short answer is yes, it does make a difference whether there is a possibility that harm can be avoided or not. The explanation is that on the account of causation we use, causation is relativized to a possibility horizon, which roughly includes the relevant possible worlds at the time. So, if there is no relevant possibility, at the time at which you join the party (caring less than required about whether you contribute to spreading corona or not), that harm can be avoided, you are not causally connected to the harm.

      Still, one should always be careful when deciding which possibilities are open. In a case like Parfit’s harmless torturers, for instance, each torturer could defend himself by saying something like “given that all the other torturers did what they did, there was no possibility that the victim would not end up in excruciating pain”. If each torturer makes this kind of defense, we’ll end up in a situation where it seems that no one is blameworthy for the victim’s being in excruciating pain. However, we think this conclusion is mistaken. For one thing, intuitively it seems that the torturers (each, all, some?) are to blame for the victim’s being in excruciating pain. The problem is that the defense “given that all…” is mistaken. When making this kind of defense, each torturer is basically saying that we should treat what all the others do as fixed – as a background condition. These defenses do not fit together. Each torturer is saying that there was a possibility that he could have done otherwise, while there was no possibility that any of the others could have acted otherwise. To establish whether the torturers are (individually) blameworthy, we have to treat it as an open possibility that each could have had the required quality of will in relation to the victim. When we do, we see that there was a possibility that the victim would not end up in excruciating pain, namely if everyone (or enough of them) had had the required quality of will. There are also other problems with this kind of solipsistic defense, but this might be the most important one.

      So… one should be careful when making statements about what is possible or not. However, it does not seem to me that anything like this is going on in the corona case. So, don’t worry, you’re not blameworthy for spreading corona (in this hypothetical scenario).

      Sorry for the long answer. I got a bit carried away. Best/M

  2. Peet, Andrew says:

    Hi, here is what I was going to say had we more time:

    If I understand correctly (and I may well have missed something), in response to the case presented by the last commenter you suggested that the person’s ill will did not cause the relevant misfortune. But I think we can generate cases where the subject’s ill will clearly is a cause, and yet they are still not blameworthy. We can do so using fairly standard deviant causal chain cases. Take your chilli case again. Suppose that Suzy has a poor quality of will with respect to the quality of the chilli. She doesn’t care how it turns out. But this wouldn’t normally matter, since she is not responsible for cooking the chilli. Billy is, and Billy is being very attentive. Nonetheless, a genie decides to ruin the chilli if Suzy’s quality of will toward the chilli is poor (if, for example, she is indifferent towards it). Surely Suzy is not blameworthy here, but her poor quality of will is clearly a cause of the poor quality of the chilli.

    Maybe I missed something in your response that covered this objection.

    Anyway, very interesting talk.

    1. Gunnemyr, Mattias says:

      Thanks! Yes, I see your point. Good case. No, you didn’t miss anything; I did not have time to cover those kinds of cases in the talk. Basically, in the full-length paper, we have a condition that says that the causal chain cannot be deviant. I am still working on exactly how to phrase this condition. It will probably say, roughly, that (1) the action or omission you are blameworthy for, or that caused the outcome you are blameworthy for, must follow from a normal reasoning procedure: you saw or did not see the reasons involved in the situation, you used or didn’t use them as inputs in your reasoning, and your action or omission followed from this. Your reasoning or bodily movement was, for instance, not hijacked. Moreover, (2) if you are blameworthy for an outcome, your action or omission caused this outcome more or less as you anticipated. But there is still more work to do here.

  3. Wilby, Michael says:

    Hi Mattias

    I enjoyed your talk and the discussion afterwards too.

    Time ran out before I had the chance to ask my question: I was wondering what you would say about omissions of supererogatory acts (perhaps a bit like the Queen of Sweden example you give). For example – one might be able to offset aspects of pollution and environmental degradation by regularly planting trees. If you don’t plant a tree, then you are process-connected to the environmental degradation (if I understood what you wanted to say early on about omissions).

    But does that mean that someone who lacks quality of will (e.g. they couldn’t care less about the environment – although they don’t directly pollute, because they don’t like travelling, say) is thereby blameworthy for environmental pollution if they don’t plant a tree? If not, what is the difference between such a person and a person who contributes to pollution through driving?

    1. Gunnemyr, Mattias says:

      Hi Michael,

      Good question. I haven’t thought about omissions of supererogatory acts in a collective harm setting, but clearly, I should have. Off the top of my head, I would say that this person’s having a poor quality of will in relation to the environment (as opposed to the required one) does not cause environmental degradation. The reason is that you are not required to care about the environment to the extent that you reduce the amount of greenhouse gases in the atmosphere by e.g. planting trees, in addition to not traveling, etc. However, I need to think more about this. Thanks.