
Hi Neil, thanks for the interesting posts. I've really enjoyed reading and thinking about them.

As for the Cushman experiment you cite here, I suspect that the "surprising" outcome is due to the underspecified way in which the question was asked. If the question merely asks how much prison time Smith deserves, without specifying whether for murder or for attempted murder, it is likely to be understood differently in different contexts. In the case of Brown's death, subjects may assume the question is how much prison time Smith deserves for the murder. But in the case of Brown's survival, subjects might assume the question is how much prison time Smith deserves for the attempted murder.

Hi Neil,

Tania Lombrozo and I actually have some recent results that support your prediction (as do Jonathan Phillips and Alex Shaw). We're in between drafts right now and the results are a bit complicated to get into here, but we'd love to send you a more finished copy of the paper if you're interested.

Jiajun, I am not too worried by possible misinterpretation or ambiguity. Subjects were asked to imagine that they were on a jury evaluating charges against Smith. Moreover, this experiment is a conceptual replication of another in which no one dies (the harm in that experiment is that someone burns her hand).

Dylan, thanks--I'd like to see the paper when it's ready. I was hoping to move toward gathering pilot data in the next couple of weeks. In light of your paper and the other one you mention, is there any point? In other words, are your hypotheses sufficiently close, and/or your data sufficiently illuminating of the question I am asking, to make further experimentation pointless?

Neil--you always offer tough challenges.

First, I think respondents to the Smith scenario blame Brown for his death because he ought to have been more vigilant about his diet. The fact that Smith tried to poison Brown, mistakenly thinking his addition to the salad was lethal, is trumped by Brown's cavalier attitude about potentially deadly but easily encountered harms. Smith can't be responsible for Brown's negligence about his own diet, and I think that drives intuitions away from Smith's attempted harm.

Second, in a minor paper I published some years ago, I offered the following scenario (enhanced quite a bit to try to meet your challenge). Say that a casino has decided that a certain high-stakes customer has recently taken too many profits, and arranges it so the gambler's favorite game--roulette--is fixed: the wheel (built with sophisticated electronics) can sense when the ball is about to settle into a winning slot and subtly vibrates so that the win cannot occur. The casino--the house--only switches this on with this one "too lucky" gambler. They do so--but as it turns out, the gambler loses his high-stakes wagers only through a streak of very bad luck, and the device never has to introduce the "cheat". In fact he gambles so much that he puts himself in financially dire circumstances, and as a result quits gambling completely (this is to show that we are not dealing with an addict, but just an otherwise rational "gambler's fallacy" estimator of risk-reward based on a long lucky winning streak).

On the one hand it seems that the house has conspired against the gambler by arranging it so the gambler must lose in any case. But in the actual situation the gambler--knowing that the odds are against him but carrying on nonetheless--loses only because of those bad odds, and not because of the prearranged possible "cheat". So is the gambler responsible for his losses that occurred under (as it turned out) fair circumstances ("fair" defined as house-favored odds that a rational gambler such as our one here understands)? Or does the house-conspiracy entail that one judges that the gambler couldn't win, isn't responsible for his heavy losses, and deserves reparations against even only the counterfactual use of the cheat?

Most FS manipulator examples rely on some element of luck: the proximate agent happens to act controllably and in accordance with the wishes of the potential manipulator. My example does that, because the house's manipulation allows chance to intervene consistently with its goal. But chance plays another role here not found in FS examples: chance (in a different sense) is also something the proximate agent acknowledges as a factor, though he controllably misjudges its impact and thus loses too much money.

Our gambler actually stands in a normal (non-addictive) gambler's situation--he loses, knows why, and even accepts blame to reform himself and quit such foolishness.

The house though gets away with intentional conspiracy to defraud, as one might say who knew the full picture.

In some ways this example is a kind of moral Gettier--the gambler by all ordinary criteria knows why he is responsible, and we acknowledge that all such JTB moral criteria are met. But his bad luck just happened to favor the good luck of the house and block the actual use of the cheat.

I'd argue that PAP isn't exonerated here either, since the gambler loses for reasons even he fully understands and accepts, and so the actual sequence of events of harm, and his own mental states, are sufficient to attribute him at least partial blame. But intuitively the house can't escape some responsibility either--I'd say.

Is this the kind of link break you seek? Even if not, I thought in any case you'd appreciate its double-reliance on luck!

Alan, thanks. Your case is really interesting. I would want to run it on its own and with controls to see what ordinary people's intuitions are (I am no longer at all confident of my own intuitions). One thing I suspect is that intuitions can quite easily be driven by saliency effects: do we attend to the fact that the device was not triggered, or to the fact that the agent could not win? This might explain work showing that personality has an influence on free will views: personality factors make different things salient. You also raise the issue that blame can be distributed across agents. In a manipulation case, we may get different results if we give subjects the chance to respond to questions about the manipulator and the manipulated, as opposed to just the manipulated. Subjects may be more willing to entirely exonerate the manipulated agent because they haven't had a chance to say that the manipulator deserves some blame (too).


Interesting post. I take it that your suggestion would help to reply to the incompatibilist you envisage. I like the general strategy: to argue that something special or particular to the manipulation scenario explains the intuition of lack of responsibility, rather than causal determination per se. That is, it is a special kind of causation, not causation per se.

My preferred explanation is that in the manipulation scenarios, we are implicitly assuming that the kind of tampering with the brain would involve a non-reasons-responsive process. Alternatively, I suppose that if the process were genuinely r-r, it would not be the agent's *own*. So either way there would not be the kind of control that underwrites moral responsibility: guidance control.

I prefer this kind of explanation to your (very interesting) suggestion because quite apart from how people actually reason, they *shouldn't* reason in the way you suggest. That is, the lack of responsibility does not, upon reflection, come from the fact that there is a morally responsible agent initiating the intervention/manipulation. It *shouldn't* matter whether the agent is morally responsible or a child or insane (or an insane child--I think I had one of those).

So, although the compatibilist will welcome provocative and intriguing suggestions of the sort you offer, I don't think we should opt for this one. Upon reflection, there is no difference between manipulation by another agent and *the same sort of physical process* initiated by no agent at all (or no responsible agent).

The key to responding to the manipulation argument, in my view, is focussing on the *kind* of process, perhaps physical, involved. Sometimes it rules out r-r, and sometimes it would be consistent with r-r, and my suggestion is that this tracks, or *should track*, our moral responsibility judgments.

A methodological point: I don't place too much weight on the untutored intuitions of "the folk".

Good to hear from you, John. In general, I agree with you: I don't place too much weight on folk views. I expect the folk to be confused! (Some of them are even more confused than I am.) Like Chandra, though, I don't believe we can identify the factors that drive our intuitions from the armchair. So while I don't think of myself as an experimental philosopher (I am an experimentalist and also a philosopher, but very little of my experimental work probes philosophical questions), I think this may be a case in which experimental work might be illuminating. The claim is that when a philosopher encounters a manipulation case, she may conclude that the manipulated agent is not responsible due to the mechanism identified.

I am guessing, though, that this explanation will explain, at most, part of the variance in intuitions, because I think something along the lines you suggest is right: we ought to think either that agents in these cases are not fully r-r, or that the manipulation is not responsibility-undermining. Chandra's own experimental work shows that ordinary people indeed take manipulation either not to be responsibility-undermining, or to corrupt their deep selves, or to undermine r-r.

The final aim, of course, is to defend compatibilism so that we can all be skeptics for the *right* reasons.

Lack of responsibility here means that the manipulated agent is unable to 'subvert' the will of his manipulator, that he must will in accordance with his plans. To be free of such a pernicious influence, an agent must meet the following conditions:

Necessarily, a person P can subvert the will of an influence P* iff there are goals g and g* such that: (i) g was intentionally instilled in P by P*; (ii) g* is inconsistent with g; (iii) P understands what it would mean to adopt g*; (iv) P can discern reasons for adopting g*; (v) P could become motivated by those reasons to adopt g*; and (vi) P* does not intend that P adopt g*.

This definition is intended to distinguish a manipulated agent from one who is simply being 'naturally determined': influenced, but not controlled by others; shaped genetically and environmentally by 'blind' forces. It comes up short of LFW, which requires not only freedom from others but also self-control, but it does protect Compatibilism from Pereboom's and Kapitan's counterexamples.


Deliberative beings would pose a much greater threat to compatibilist freedom, fragile as it already is. It should matter to me whether I'm being controlled by another person or am just the victim of some misfortune. If I'm merely the latter, there's the chance of a return to normalcy: maybe the scientists will find out what's wrong with me and find a cure. The other kind of influence may be difficult, if not impossible, even to detect.

Camus has a line towards the end of The Stranger that I think is apropos. He speaks of the 'benign indifference of the universe.' Should it simply befall me to become unresponsive to reason, or to lose my grip on my deep self, I would be much less resentful, much more easily resigned to my fate, than if I suspected that another person was the source of my woes.

We're invited to consider cases like those involving manipulation, and we often make judgments about the agents' responsibility in them. If we want then to see whether the judgments are defensible, one way to proceed is by dialectic, roughly of the sort that we find in Plato's dialogues. We ask each other probing questions and reflect on the matter. Perhaps we could instead gather the judgments of large numbers of subjects who (we've made sure) haven't engaged in any such critical reflection and submit that data to sophisticated statistical analysis. This second approach might tell us something interesting. But I don't see that it's a good way of getting at the question of whether the agents in the cases in question are responsible.

Our experiments varied a number of different mental states in the manipulator and manipulee, but the most relevant studies varied whether they had various intentions or not. Those studies suggest that different ways of breaking the link between the manipulator’s intentions and the outcome get manipulees off the hook (to varying degrees). Is that the sort of link-breaking you had in mind? Jonathan and Alex use cases of deviant causation, and we use cases, e.g., where the manipulator intends the manipulee’s action (which brings about the outcome), but not necessarily the outcome itself. Those two ways of severing the “intentional link” between the manipulator and the outcome (while holding fixed the causal relation in the actual world) are the ones that come to mind most easily for me, but there are probably others, and I hope there’s more work to come on all of this. At any rate, we’ll send the paper along soon (hopefully in the next couple weeks) and you can judge for yourself!

Tania and I think varying the “intentional link” between the manipulator and the outcome has effects on responsibility and free will attributions to the manipulee because it varies the counterfactual link between the latter two. Plausibly, a manipulee’s responsibility for an outcome depends on her control over it, and that control may in part depend on her counterfactual link to it… Robert raises a similar point, and pace John, I think it does matter whether the manipulator is a child or insane - not because the manipulator’s responsibility directly affects the manipulee’s - but because being insane probably affects the intentional and thereby counterfactual link between the manipulee and the outcome. The sane manipulator likely manipulates the manipulee in nearby counterfactual worlds where the insane manipulator or one who has random desires would not.

Another interesting question in the vicinity: whether being causally influenced (in the same way, even counterfactually) by a human vs. a non-human factor (like an inanimate object, animal, or robot) can make a difference to responsibility and free will. We didn’t find evidence in our studies for that; instead, the differences seem to be due to intentions (human “manipulators” who act unintentionally seem to be just as (non-)threatening as non-human “manipulators”). But there is some philosophical precedent for pursuing the idea. Pettit and (some) others in the Republican tradition think that simply being under the causal influence of a non-person vs. another person (even one who doesn’t control you intentionally!) determines whether you’re “dominated” and hence have political liberty or not. Political freedom’s a different beast, of course, but related. The only person I know who’s pushed a similar line about free will is C. P. Ragland in "Softening Fischer’s Hard Compatibilism," whose discussion is pretty interesting.

Even if it doesn’t make a difference to moral responsibility and free will, I do think causal influence by an agent vs. non-agent can make a difference to other notions in the vicinity, like authorship. Consider two cases. In both, Professor Peach happens to discover a “poem” in the sand while strolling along the beach that turns out to be, not only extremely clever and moving, but also publishable. In Case 1, the “poem” was just the unlikely product of the tides, the wind, a seagull, whatever. But in Case 2, it was put there by another person, Miss Pear. We can even suppose the “poem’s” inscription was pretty unintentional on Pear’s part – perhaps she dreamed it and wrote it down while asleep (whether or not there’s a “sleep-writing” analogue to “sleep-walking,” you get the idea…) My intuitions say that Peach pretty clearly deserves more of whatever type of praise we accord to artistic creativity (one type of authorship) in Case 1 compared to Case 2. And it seems like the difference there is just a matter of whether or not Peach is the “ultimate creative source” of the published poem – i.e., whether she’s the *first* person in the causal chain whose head it occurred in.

Randy, I don't want to know whether subjects in these cases are responsible (I already have an answer to that question, after all). I'm interested in what causes the intuition that they are or are not. There is lots of evidence that no one is good at introspecting this kind of thing, and we know why. One reason is that when factors mediate judgments, it is extremely hard to get at them by the thought-experiment method (that's why Chandra does structural equation modelling). Another, which of course is very familiar, is that there are order effects: having committed yourself to the judgment that an agent is responsible in a case, there is some pressure to think that an agent in a case that seems relevantly similar must also be responsible. Can we avoid this? Sometimes, maybe, but one way in which this pressure may work is by suffusing judgment or intuition, so that we see the case in a way that is influenced by our prior expectations (again, this is a familiar point). A third problem is that we know from the experimental work that people's judgments are sensitive to the expectations of their interlocutors, and this is not introspectible. Finally, sometimes actual judgments are quite counterintuitive. Even behaviour can be quite bizarre. Elsewhere, for instance, I have argued against John's claim that "reactivity is all of a piece" by noting that that claim commits us to the following: if an agent will forgo the opportunity to x for a payment of $10, that agent must value x-ing at less than $10, and the agent will therefore not pay more than $10 for the opportunity to x. This claim is false: agents who will forgo the opportunity to x for $10 have been shown to be willing to pay much more than $10 to have the opportunity to x (here 'x-ing' is 'taking cocaine'). I doubt we could ever discover that without leaving the armchair.

I think we ought to avoid asking the folk about their theories, because I expect their theories to be confused. But if we're going to engage in intuition mongering we should do experimental work as well as (not instead of) traditional philosophy, because we cannot expect to be able to identify the factors our intuitions are sensitive to except by experimental work.

I think it's a mistake to think that philosophy is in the business of mongering intuitions.

Can you say more, Randy? What does the mistake consist in? It might be (a) a mistake for philosophers to make claims that turn crucially on intuitions (for instance, to argue that the best explanation for our intuitions in manipulation cases is that the manipulated agent lacks a kind of control inconsistent with determinism), or (b) a mistake to think that when philosophers use this kind of argument, anything much turns on intuitions. I suspect you mean the latter, in which case again I would refer to cognitive science (in part) in response. The normal route to judgment when considering counterfactuals is roughly as follows: 1. It seems to me that x. 2. No defeating conditions for my seeming are salient. 3. Therefore, x. The reflective route to judgment is exactly the same, with a bit more searching for defeating conditions. I identify 'intuitions' with the intellectual seemings cited in 1; so I see no way to avoid some kind of reliance on intuitions if we are going to use thought experiments.

I see the zygote argument as follows:

1. Ernie isn't responsible for what he does.

2. As far as responsibility is concerned, there's no relevant difference between Ernie and ordinary human agents, if determinism is true.

3. Hence, ordinary human agents aren't responsible for what they do, if determinism is true.

There's no premise that says anything about intellectual seemings. And I wouldn't think that an appeal to intellectual seemings is the way to back up premise one, if someone disputes it (as, of course, many do). I'd think what's needed is some answer to the question: why isn't Ernie responsible for what he does?

Can you say more about your view of things? Earlier I took you to reject the idea that investigating intuition helps answer the question of whether Ernie is responsible for what he does. Did I misunderstand? Or, if that was right, how do you think we settle the question about Ernie?

I gather you take it that no one is responsible for anything. And you have an argument for that which employs some general principles as premises. But now, if someone disputes one or more of these premises (as we can expect will happen), do you think the way to defend it is by way of studies of intuition? Or would you instead seek an answer to the question: why is that so?

The intuition - the intellectual seeming - is the primary evidence for premise one in your reconstruction of the zygote argument. We believe premise one because consideration of the case generates the judgment in us that Ernie is not responsible. Some experimental philosophers seem to think that the intuition has no evidentiary value. I do not. I think it is highly defeasible evidence, and among the possible defeaters are facts that can be revealed by careful experimentation. Generally speaking, if someone were to disagree about premise one - by saying they do not share the intuition - I would ask them for an explanation of why I have it. This is, of course, perfectly normal in philosophy. Of course I wouldn't try to convince them they should have it by appeal to the case, unless I thought they were not considering it properly. But that's because I am assuming they are considering it properly (I might, however, try to construct a case I thought more powerfully generated the intuition than the original one).

In my book, I used no thought experiments to advance my main arguments. I did use thought experiments, in one chapter only, with the aim of showing that the intuitions generated by Frankfurt-style cases are unreliable (the chapter is a slightly revised version of my J Phil paper). I think it is entirely appropriate to respond to my thought experiments with arguments or with data.

My new book uses no thought experiments - it is an exercise in naturalistic philosophy. Whereas experimental philosophy produces data, naturalistic philosophy is theory construction on the basis of science (philosophy in the vein of Carruthers or Dennett on consciousness). I do all three ways of doing philosophy (and more besides). All are legitimate and none is isolated from the others (an argument advanced in one way can be refuted in another).

If someone disagrees with me about premise 1, I wouldn't think of asking them your question (why premise one seems as it does to me). I don't take the issue to be about me, but about Ernie. Whether he's responsible or not has nothing to do with me.

We employ judgment not just about cases. We judge general principles, we judge that one thing follows (or doesn't follow) from another. We can ask about the epistemic standing of both of these kinds of judgment as well. Do you have a hypothesis about why it's judgments about cases in particular that generate the excitement about intuition? Perhaps because there's more disagreement here?

It is standard practice to think that the person who argues against a view supported by strong intuitions owes us not only an argument for their view, but an explanation for why the intuition is widely held. Google search returns 13,800 results for the phrase "explain away the intuition"; since that seems an unlikely form of words outside philosophy, I think this is evidence for my claim about philosophy (that's an instance of data-driven philosophy!)

The second question is an interesting and difficult one. At least it's difficult for me. It may be that the general principles we utilise are closer to being paraphrasable into claims in first-order logic, which we have good (convergent) reasons to think is generally very reliable. Or the kind of general principles we use can be corroborated independently - e.g., by pragmatic success. I don't think all general principles are immune to correction. Probabilistic reasoning, notoriously, often goes astray because of our tendency to ignore base rates. But commonsense reasoning seems to track truth pretty well (and even when it goes wrong, it goes wrong explicably, where the explanation is that having an off-track bias like that is actually the best way to track truth under typically prevailing conditions).

I usually head out for a beer when that starts.

You're buying!


Here's an example of something I asked earlier. You advance an argument that employs this premise: "It is only reasonable to demand that someone perform an action if performing that action is something they can do by means of a reasoning procedure that operates over their beliefs and desires." Suppose the premise is disputed. Would you think the way to proceed is: (1) run surveys to see what factors influence people's intuitive judgments regarding this claim; (2) discuss whether the claim is true or false, seeking arguments for or against? Both?

Usually at the bar we just do (2).

Thanks, Neil, for the interesting posts! One aspect of the Cushman experiments as you describe them that caught my eye is that they ask about deserved prison time. We have well-known legal punishments that depend on outcome, for better or for worse (although see the Model Penal Code!). I find Jiajun’s hypothesis interesting on this point--when there is harm, we find the charge of murder salient; when there is not, we find the charge of attempted murder salient. And given that the context is legal and that there is no further dialogue (see Randy's posts), subjects might stop there. In general, asking about deserved prison time and asking how blameworthy Smith is might generate different results.

Hi Randy,

You say: "I wouldn't think that an appeal to intellectual seemings is the way to back up premise one, if someone disputes it (as, of course, many do). I'd think what's needed is some answer to the question: why isn't Ernie responsible for what he does?"

Can you give me a sense of how that conversation would go? Wouldn't it almost immediately come down to a restatement of the descriptive facts about the case and then a clash of intuitions (or "intellectual seemings")?

You might say that the defender of the zygote argument could appeal to a general principle as the reason that Ernie isn't responsible. But isn't the goal of the zygote argument to establish the relevant general principle?

For an example of an argument for premise one, see Patrick Todd's recent article on this issue. Of course, one might challenge one or another premise of Todd's argument. But one can give arguments for one's challenge!

Following up on Randy's comment, here's the information on Patrick Todd's article:

"Defending (a modified version of) the Zygote Argument", Phil. Studies May 2013, pp. 189-202.

This is a really interesting article (as are Patrick Todd's other articles on manipulation). He is on the frontier of defending such arguments as providing decisive reason to reject compatibilism. Although I myself find Patrick's work on these (and other) topics to be really provocative, challenging, and, well, genuinely bold and original, I am (unsurprisingly) not entirely convinced, esp. of Premise 1. But I join Randy in commending it. Anyone thinking about initial design and manipulation arguments should read Patrick's work! (By the way, I think initial design and manipulation arguments should be distinguished--sometimes issues can be conflated unless these sorts of arguments are indeed distinguished.)

And I reiterate my point, perhaps not relevant any more, that I don't really care that much what the "folk" would say about a range of examples. I care more about what they *should* say. And the mere difference between an agent and a natural cause at the beginning of a causal sequence should not make a difference w.r.t. the moral responsibility of the manipulated (or designed) agent, as far as I can see. That folks would say something different is neither here nor there, in my view.

Hey: lots of folks will say that we need alternative possibilities for moral responsibility, and that is OBVIOUSLY false, right??? [Don't answer that question, and don't ask Patrick Todd...]

Now of course I go one way whereas Patrick (and perhaps Randy and others) go the other: I think there is moral responsibility in both scenarios (suitably filled in--where there's ownership and reasons-responsiveness, and so forth), whereas Patrick and others will say: no moral responsibility in either scenario. And Al, and perhaps others, are agnostic about both kinds of scenarios. So there is a range of options here (ha--don't think I like that). But what I think is a bit nuts is to say: moral responsibility in these cases is ruled out by initial design or manipulation by an agent, but not by an exactly similar causal sequence without an agent's initiating it. And why care what the folk say about this, if it is (arguably) nuts, or, more charitably, if it is (arguably) not sustainable through rational reflection?
