Blog Coordinator


08/24/2014

Comments


Hi Joshua (if I may)--

This is indeed very interesting, and thanks for the post.

I've had Neil's Hard Luck on my mind a lot lately, and it led me to think this:

Does some implied/perceived efficacy of the manipulation in the first scenario ("had known this would happen all along") diminish the inclination to think that luck played any role in it, while in the second ("had not known this would happen all along and it wasn't a part of their plan") there is plenty of room to conclude that luck had a large role in the outcome? Being "pleased" in the first expresses satisfaction at having successfully controlled the outcome, whereas "dismayed" in the second tokens some lack of control due to unlucky circumstances. So maybe careful readers import elements of luck into the second that are harder to see (at least) in the first. FWIW.

Josh, my guess here is that moral responsibility evolved to enforce a deterrent function. (If moral responsibility had no influence on the future, then I don't see how it could have evolved.)

Punishing people for things that they did not intend does not help that deterrent function, or does not help it as well as punishing people who do intend their outcomes. If it's an accident, then punishing the people might not deter them (because it might not be clear how they caused the outcome or could have avoided it). Moreover, even if it enforces a deterrent function, it might do so at a cost that is greater than when the agents intended their outcomes. For example, punishing people for their accidental outcomes might trigger fairness norms and social retaliation against the punisher. In all of these kinds of ways, the asymmetry could have evolved.

Interesting! Here's a quick thought, just having read this blog post.

In lots of cases, I don't think we'd be inclined to judge that the fact that X did precisely what Y intended him to do is at all exculpatory for X. Here's a case (from my Phil Imprint paper). Suppose that Bob's house has been getting burglarized, and he suspects (for good reason) that Fred is the culprit. One day Bob invites Fred over, and leaves an expensive item out in the open, where he suspects that Fred will see it and likely steal it; if he does, then Bob will have caught Fred's thievery on tape. Sure enough, Fred comes over, sees the item, tries to steal it, and subsequently gets caught by Bob. In this case, Fred does precisely what Bob intended him to do. Yet it seems clear that, other things being equal, Fred may be perfectly criticizable for what he's done; indeed, it seems even that *Bob* could criticize him for what he's done. (If Fred said to Bob, "But I've only done just what you intended!", I don't think we'd grant that Fred has much of a point.)

Why does this case not generate the feeling that Fred isn't criticizable? I think this is (probably) at least in part because nothing about the story generates the thought that Fred's action was inevitable, given Bob's actions. Sure, Fred did just what Bob intended for him to do, but that was entirely his choice, and it was perfectly well within his power to have chosen and done otherwise. Nothing about the scenario, in itself, indicates that Fred (in some relevant sense) had to do what he in fact did.

Now compare this to the first story, according to which "The government had known this would happen all along and it was exactly what they planned. They were pleased that everything worked just as they knew it would." This sort of thought creates the impression of inevitability; if the government *knew* this would happen, then, lacking a crystal ball (and not being God), they knew it on the basis of knowledge of the causal conditions they had put in place, together with knowing how things work (i.e., the laws).

My off the cuff thought, then, is that people's intuitions here may have less to do with "what was in the manipulator's mind" than with whether or not the target's actions were rendered inevitable by the manipulator's actions. And that thought is certainly more friendly to an incompatibilist diagnosis.

All this to say: I'd like to echo what V. Alan White said.

Is it that those who have the intuition factor in a utilitarian view by considering the consequences of the manipulation, whereas those who don't have the intuition are considering only the moral transgression itself, as defined by the manipulators' motives?

Hi Joshua,

Thanks for posting this. It's very interesting. Here's an explanation of what's going on. I'd be interested to know what you think about it.

When the workers do what the manipulators intend, it gives the impression that they couldn't help it, and thus that they aren't responsible. (Is that the impression they're supposed to get? The story isn't clear on the details.) But when they don't do what the manipulators intend, it gives the impression that their behavior was, to some extent at least, still up to them, and thus that they can be responsible for it.

I'd be interested to see results from a case in which it is clear that the manipulator's activity guarantees some intended result (Bob's A-ing, say), and another where it's still clear that the manipulator's activity guarantees a result, though it's not the result the manipulator intended. Consider, e.g., a case in which the neuroscientists press buttons to bring about certain mental states, which in turn produce certain behaviors. In case A, the neuroscientists want Bob to punch Chris, so they press the red button, believing this will result in Bob punching Chris. It works! In case B, everything's the same, except that the neuroscientists mistakenly believe that pressing the green button will result in Bob punching Chris. In fact, however, pressing the green button will result in Bob hugging Chris. They press the green button, intending to make Bob punch Chris, and to their chagrin he hugs Chris instead. I'd be surprised if subjects made widely different judgments about case A and case B. In particular I'd be surprised if they said that Bob did not act freely and is not morally responsible for punching Chris in case A, but that he did act freely and that he is morally responsible for hugging Chris in case B.

Hi Joshua,

Having not read the paper, perhaps this is covered there. But this looks like a case of ordinary manipulation, whereby someone "uses" another to meet their ends. Iago feeds Othello false information, etc. But such garden-variety manipulation doesn't fund an argument for incompatibilism.

Even if people's intuitions are affected by manipulators' intentions in these cases, then, the cases themselves don't seem to be of the right sort to figure into manipulation arguments.

Secondly, even if the results were replicable with the right sorts of cases, they wouldn't be generalizable in the right way for the argument to proceed. For the universe doesn't have any intentions, and should determinism be true, this would be of no use to an argument that relied on the intentions of manipulators to show that responsibility is negated.

Finally, John Fischer's "The Zygote Argument Remixed" argues, convincingly to my mind, that the intentions of manipulators should have no bearing on others' responsibility.

Just my initial thoughts...

Thanks for these thoughts! A few of you raised a set of related points around the idea that the critical difference between the two scenarios is that when the workers' actions corresponded to the government's intentions, the workers' actions seem more inevitable (and there seems to be less luck involved, and it seems more that they couldn't help but do what they did). Taken as a group, I actually think that these suggestions bear a family resemblance to the conclusion that we ended up drawing in the paper.
In a few studies that varied the manipulator's intentions, we also asked participants a number of questions about the manipulator (in addition to asking how morally responsible the manipulated agent was). In particular, we tested whether there were differences between the two cases in the extent to which the manipulator was perceived as causing/making the agent do the immoral action. As you might be able to guess, participants were much more inclined to indicate that the government made the workers attack the village when the workers' actions corresponded to the government's intentions. Then, we additionally used a statistical technique called mediation to show that this difference in the perception of the manipulator really helped to explain the reduction in blame for the manipulated agent.
Based on this and a couple of other studies, we ended up arguing that the reason the manipulator's intentions matter for moral responsibility is that they change the extent to which the manipulated agent is seen as controlled, and while we didn't ask participants directly about inevitability (or luck, etc.), I think these results really suggest that participants would also see the manipulated agent's actions as being more inevitable when those actions correspond to the manipulator's intentions.
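[For readers unfamiliar with how a mediation analysis like this works, here is a minimal sketch in Python. The data, variable names, and effect sizes are entirely invented for illustration (this is not the paper's dataset or its exact statistical procedure); it just shows the basic logic of comparing the total effect of condition on blame with the direct effect after controlling for the proposed mediator, perceived causation.]

```python
# Hypothetical illustration of a simple (Baron & Kenny style) mediation
# analysis. All data below are simulated; the effect sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Condition: 0 = manipulator did not intend the action, 1 = intended it.
intended = rng.integers(0, 2, n).astype(float)

# Mediator: perceived causation ("the manipulator made the agent do it"),
# assumed here to rise when the outcome was intended.
causation = 2.0 * intended + rng.normal(0, 1, n)

# Outcome: blame for the manipulated agent, assumed to fall as perceived
# causation by the manipulator rises.
blame = -1.5 * causation + rng.normal(0, 1, n)

def ols(y, predictors):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

total = ols(blame, [intended])[1]              # c:  condition -> blame
direct = ols(blame, [intended, causation])[1]  # c': controlling for mediator
indirect = total - direct                      # effect carried by the mediator

print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
```

In a pattern like the one described above, the total effect is substantial, but the direct effect shrinks toward zero once perceived causation is controlled for, i.e., the mediator accounts for the drop in blame.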

Hi Matt,

Everything you say here sounds very reasonable, but in my view, it only adds to the philosophical importance of the experimental results. Suppose we start out with the view:

(1) The intentions of the manipulator have no bearing on the degree to which the agent is morally responsible.

Now suppose we learn a surprising empirical fact:

(2) Given the way human cognition works, people’s intuitions about whether the agent is morally responsible are influenced by their perceptions of the manipulator’s intentions.

Together, these two claims would give us reason to think that our intuitions about manipulation cases were systematically mistaken. This would then to some degree undermine a crucial premise in the manipulation argument for incompatibilism.

Justin,

This is a really interesting suggestion, definitely very much worth trying.

Jonathan makes some helpful remarks on your point in his comment above, but I just wanted to quickly add one further thought. Suppose you find that intention doesn't make any difference at all in cases where the situation is set up in such a way that the behavior feels absolutely inevitable. (For example, suppose the intention makes no difference in cases where the manipulative neuroscientists are controlling the agent's brain continually, through every step of the process.) It might be, then, that what intention is doing in more ordinary cases of manipulation is making people perceive those cases as similar to cases of complete inevitability.

In other words, consider a case in which the manipulator only affects certain events in the agent's early childhood, and everything then plays out in a more ordinary way, with the agent making certain decisions, which lead her into new situations, which prompt further decisions... Even if the case is specified to be deterministic, people might not regard the agent's final behavior as inevitable in the relevant sense. However, if the manipulator is said to have intended the behavior from the beginning, people might begin to see it differently, regarding it as a genuine inevitability.

(The relevant notion of 'inevitable' here is a little bit hard to spell out, but I am assuming that if the universe is deterministic but people are nonetheless able to think things over and make decisions, there can be some sense in which their behaviors are not regarded as inevitable.)

Patrick,

Nice example! That really brings out very beautifully the ways in which something quite specific has to be happening for us to get the relevant manipulation intuition. It certainly doesn't seem that we get that intuition in just any case in which the manipulator intends the agent's behavior.

I don't know if you'll find this at all helpful, but the diagnosis that Phillips & Shaw give in their paper is that the effect has to do with people's judgments of *causation*. In other words, the idea is that when the manipulator intends for the agent to perform the action, people are more inclined to think that the manipulator *caused* the agent's action. Do you think that this might not be happening for some reason in your case? I mean, do you think maybe the intention in that case isn't leading to an effect on people's causal judgments?

As a proponent of a manipulation argument, I would say that it’s a presumption of the argument that its target audience would initially tend to think that agents who are causally determined to act badly in a natural way – without intentional control – deserve to be blamed, while people who behave similarly but are intentionally causally determined do not deserve to be blamed. So the experimental result fits nicely with this presumption.

Most people enter into the free will debate with the assumption that ordinarily agents deserve to be blamed when they knowingly do wrong. For the natural compatibilist, the prospect that whenever we act, we are causally determined by factors beyond our control wouldn’t change this assumption. Incompatibilists believe that this reaction fails adequately to face up to the implications of causal determination. The way manipulation arguments aim to address this concern is by first devising an intentional deterministic manipulation case with the hope that it will be more successful at eliciting a non-responsibility intuition in its target audience than ordinary causal determination does. Crucially, the next stage involves having the compatibilist come to believe, upon rational reflection, that such non-responsibility is preserved even when the intentional manipulation is subtracted, on the ground there is no responsibility-relevant difference between the case that features intentional manipulation and one that doesn’t.

Josh says:

(1) The intentions of the manipulator have no bearing on the degree to which the agent is morally responsible.

Now suppose we learn a surprising empirical fact:

(2) Given the way human cognition works, people’s intuitions about whether the agent is morally responsible are influenced by their perceptions of the manipulator’s intentions.

(C) Together, these two claims would give us reason to think that our intuitions about manipulation cases were systematically mistaken.

This would then to some degree undermine a crucial premise in the manipulation argument for incompatibilism.

The proponent of the manipulation argument agrees with (and presupposes) (1) and (2), but is wary of (C) -- some reason, maybe, but not a lot. The question here is what the result of rational reflection will be when the target of the manipulation argument rationally reflects on the fact that she accepts non-responsibility in the intentional determination case but denies it in the ordinary deterministic case, despite no difference in the satisfaction of the standard compatibilist conditions on moral responsibility.

There are three salient possibilities. The first two involve accepting the no-responsibility-relevant difference claim: (a) either deny deserved blame in both cases (e.g., Patrick and me) or (b) accept deserved blame in both cases (e.g., Michael McK. & Carolina S). I think that accepting deserved blame in the intentional determination case is more of a stretch than denying it in the ordinary deterministic case; Michael and Carolina disagree.

It sounds like Josh and Matt accept a third option, that intentional determination really makes a difference to deserved blame. Suppose you know that someone’s bad action satisfies all of the standard compatibilist conditions on deserved blame, but you don’t yet know whether it was intentionally causally determined by an intelligent agent or causally determined by a robot without intentions – and let’s say the robot is not in turn controlled by an intelligent agent. Is it reasonable, upon reflection, to believe that whether he deserves blame depends on which way it turns out? My sense is that it isn’t.

Hi Derk,

Thanks so much for these incredibly helpful and thoughtful comments! Definitely very much appreciated.

Just to clarify, I was not at all trying to suggest that there was indeed a difference in moral responsibility between the two cases. Rather, what I was trying to say was that these experimental studies give us a better understanding of the psychological processes that drive our intuitions in them and might thereby help us figure out whether to trust them or not. (I was not trying to take any specific position as to which conclusion the studies support, just to say that they have the potential to illuminate these difficult issues.)

The studies show two things that seem relevant here:

(1) They show that people's intuitions are influenced by perceptions of the actual intentions of the manipulator. Thus, suppose we construct a case that is in a way intermediate between your Case 2 and your Case 3. In this case -- which we might call Case 2.5 -- the manipulator's actions are in every way exactly the same as they were in Case 2, but the manipulator has a different mental state. Specifically, in this new case, the manipulator does not have the intention of making the agent perform the action that he or she eventually performs. What the results show is that people attribute more moral responsibility in this type of case than they do in your Case 2.

(2) The studies show that this effect is driven by the impact of intentions on people's judgments of *causation*. People seem to regard causation as a matter of degree, such that they think there is a real question regarding the degree to which the manipulator caused the agent's action. Then they think that the degree to which the manipulator caused the action is greater when the manipulator intended the action than when the manipulator did not intend the action.

I was not trying to suggest that these two facts should specifically make us think that people's intuitions are more correct in the non-intended case than in the intended case. The point is just that having a good understanding of the psychological processes that drive this difference can give us a better sense for the question as to which of the two intuitions we should regard as more trustworthy.

[I apologize for the unclarity of what I wrote earlier, in my reply to Matt. I had the sense that he was thinking that the experimental studies could only be philosophically important insofar as they were taken to support incompatibilism, and I was trying to suggest that this was not the case -- but I fear that I did not express my point with sufficient clarity there.]

Jonathan and Joshua--

I think the explanatory potential of the perceived role that luck plays here is pretty important; it may be a deal-breaker in terms of relative buck-stoppingness here or there, as breaking or failing to break significant causal chains in terms of an evaluative reverse temporal transfer of responsibility. My (untested) intuition is: the less luck involved, the more stable the transfer; the more luck involved, the less stable the transfer. Thus there are probably even degrees of (un-)luckiness involved here in making such discriminations of responsibility. It would likely take many pretty fine-grained scenarios to make empirical headway here. But I think factoring in the role of luck is important to drawing conclusions about the perceived relative placement of responsibility as part of alternative narrative chains involving intentions.

I, on the other hand, will suggest that these results put pressure on the Manipulation Argument. In a response to an earlier version of Phillips & Shaw's excellent paper, I ran a case like Josh's 2.5 (using Mele's zygote argument set up) and found that, when the manipulator (Diana) foresees that her creation of the zygote would deterministically lead to Ernie's doing A but doesn't care about his doing A, people significantly lower their judgments of Diana's causing A, while increasing their judgment of Ernie's free will and responsibility regarding A, relative to their judgments in the case in which she does intend to bring about his doing A. Robyn Waller and Dylan Murray have each run similar studies and found similar results.

Now, one might suggest that there isn't a principled difference relevant to Ernie's FW and MR, so people are just not drawing the conclusion they should draw in the case of determinism without intentional manipulation. But one might also think that people's intuitions about causation are pretty reliable in these cases, and they are picking up on important causal differences between the cases. And they are then recognizing (for instance) that when there is a causal source of an agent's action that more reliably explains their action than the agent's own deliberative activity does, that agent is less responsible than when there is no such causal source. The robot case Derk mentions would likely confuse people, since robots usually behave teleologically, but if it is made clear that the robot just creates (without a goal) a zygote that becomes Ernie who does A, then people would presumably treat it like the case where Diana doesn't care about what happens, and judgments of the robot's causing A would remain lower and judgments of Ernie's FW and MR would remain higher.

The paper Oisin Deery and I are working on aims to use causal interventionism to put flesh on the bones of these intuitions.

For now, however, I'd be interested to hear how advocates of manipulation arguments explain the different causal judgments people make in these cases such that those judgments should be considered irrelevant to people's judgments about FW and MR.

(Jonathan, can you remind us if people's judgments about the agents' causation of the outcomes differed between the cases--i.e., do people agree less that the agents caused the outcome in the case where the manipulators intend that outcome than in the case where they don't intend it?)

Hi Justin and Patrick,

The thought experiments that the two of you offered drew me to think a bit more about what was going on here (I shared the intuitions that the intentions of the manipulator don't seem to reduce MR for the agent in either of the proposed cases).

I ended up thinking that maybe the two cases you offered both differ in a similar way from the ones I tested in the paper. In the cases in the paper, the manipulator was always in a position to really change the situation in a way that had the potential to severely constrain the agent. In these sorts of situations, we noticed again and again that the manipulator's intentions changed the extent to which the manipulator was seen as *causing* the agent to do the immoral action, and correspondingly, that the agent was blamed less.

Yet in both of the cases you proposed, I don't get an intuitive difference in either the moral judgment OR the causal judgment. However, this seems to happen for different reasons in the two cases. In Justin's case, the causal link between the manipulator and the agent seems so strong that the intentions of the manipulator end up feeling irrelevant. You might think of this as a sort of causal ceiling effect. In Patrick's case, I instead get the sense that the causal link between the manipulator and the agent is so weak that the intentions of the manipulator are again irrelevant, but for different reasons. You might think of this as a causal floor effect. No matter what Bob's intentions are, I just don't get the intuition that Bob actually caused Fred to steal the expensive item.

I definitely had not thought about these sorts of cases with respect to the manipulator's intentions before, but seeing them side by side made me think that there was something similar preventing the intentions of the manipulator from having an effect. What do you two think? Do you both share these intuitions?

Great post, Joshua! Two quick points.

1. To follow up on Derk's point, can't the folk be making an error of reasoning when they suppose that the intention of the manipulator matters wrt the relative blameworthiness of actions performed by manipulated agents? I know that your initial post didn't take a stand one way or the other. But what do you think?

2. I love your work on causation! But doesn't the underlying model blow a hole in the ideal of causal ultimacy?

As I see it, you note that often causal judgments are based on views about the agent's intentions. So we're faced with a problem related to the one noted above: Why are the intentions of agents relevant to causal judgments?

But the explanation for this new problem -- as I understand it -- is that rarely does one single event cause another event. Rather, there is a spectrum of causal factors that influence various causal processes. The folk tend to focus on, say, the intention of agents but they go wrong because they focus on just one causal factor among many.

If this model of causality is correct, the libertarian ultimacy model must be incorrect. There is no causal chain that can ever trace back to a single agent, for in any causal chain there is a broader spectrum of causal influences than one single agent cause.

Let me go a bit further and ask, Why doesn't this vindicate a compatibilist view of free will? After all, most arguments for incompatibilism argue that no one ever has a choice about anything, given determinism. But on the model we're discussing there is a spectrum of causal influences for any given event, so there is ample room for agential influence.

Eddy,

I’m looking forward to seeing the final version of the paper by you and Oisin.

I agree that it might be hard for subjects, generally speaking, not to think of robots as acting intentionally. But in response to concerns of that type, Gunnar Björnsson constructed a scenario modeled on the Plum examples in which a non-intentional cause—a bacterial infection—slowly makes the agent more egoistic without bypassing or undermining his agential capacities. His thought was that if subjects were prompted to see the agent’s behavior as dependent on this non-intentional cause, this would undermine attributions of responsibility to roughly the same extent as the introduction of an intentional manipulator. This was indeed the case. In a study involving 416 subjects, the infection scenario undermined attributions of free will and moral responsibility to the same degree as intentional manipulation.

Just to clarify, I don't think intentional determination makes a difference to responsibility. My point (as Joshua correctly surmised) was just that *if* intentions were relevant, that result wouldn't be helpful for incompatibilist arguments, since the deterministic universe wouldn't have any mental states, let alone intentions. So the generalization move wouldn't go through.

But I must have mistaken the point of the study, for Joshua takes the results to show that our intuitions about manipulation arguments are systematically untrustworthy. But I'm never sure what to make of such claims. Since I don't share the intuition that the study produced, I don't see why my intuitions about the cases, such as they are, would be similarly rendered untrustworthy.

In any case, I have a question for Joshua. In all your interesting work on people's judgments about causation, is there evidence for thinking the folk don't have a theory of causation generally? My thought was that perhaps they are working with a morally-loaded notion, one that wouldn't have any application in non-agential cases. In agential cases, however, it is operating as either shorthand for or a major component of attributions of responsibility or determinations of blame. If that were so, we might expect people would take what count as causes to be influenced by agential factors, like intentions. (Apologies for not being sufficiently tuned in to the literature to know whether you've already covered this!)

V. Alan,

This point you are making about the role of luck strikes me as a very important one, and I think it might have the potential to help explain why one finds these differences in causal attribution. Specifically, it might be that people think that the connection between the manipulator's action and the outcome arose only as a matter of luck in the non-intended case but that it was not a matter of luck in the intended case. Then it might be precisely this difference that makes people see the intended case as more causal.

Joe,

You are completely right to say that there is some important sense in which any outcome has to be the product of an enormous network of different causal factors. Still, it seems that people's ordinary causal judgments are 'selective.' That is, people seem to pick out from among all of these factors some specific ones that they regard as the true causes of the outcome. Not sure if that helps at all with the problem you are posing, but it does seem like even though both the agent and the situation will always be in some way relevant to the outcome, people can make sense of the question as to which of those factors was truly the cause.

Eddy,

I ended up not asking participants to make judgments about the extent to which the agent caused the outcome. I thought about doing this, but the way that the cases worked, it was just so incredibly clear that the agent did in fact cause the outcome, so I didn't think it would be interesting to ask the question. For example, it's pretty obvious that the workers caused the attack on the village; there is really no other viable causal candidate.

I did think about one other possible way of testing the question you raised. It would be to have two people discussing the events that occurred, and then have one person ask the other about the cause of the outcome, e.g., 'What do you think caused the village to be attacked?' The other person could then respond saying, e.g., 'The workers [government] caused the attack.' Importantly though, the first person could then disagree, saying something like, 'I don't think that was really the cause of the village being attacked.' The question that participants would then answer is how much they agreed with the first person (the person who disagreed with the causal claim). This is obviously a little complicated, but when I try asking myself these questions, I do in fact get the sense that I agree with the first person more when the agent's actions matched the manipulator's intentions than when they didn't, which would be predicted on your account.

Derk,

Thanks for your thoughts on this -- it's great to hear your take on these studies. In your earlier comment, you pointed out that the way the manipulation argument is meant to work is first by employing a case that involves an intentional manipulator to elicit the intuition that the agent is not fully responsible and then:

'...the next stage involves having the compatibilist come to believe, upon rational reflection, that such non-responsibility is preserved even when the intentional manipulation is subtracted, on the ground there is no responsibility-relevant difference between the case that features intentional manipulation and one that doesn’t.'

This comment got me wondering how participants would react when presented both sorts of cases simultaneously. When faced with both cases, side-by-side, would participants make their MR judgments consistent (either rendering both agents less morally responsible, or perhaps rendering them both more morally responsible), or would they reflectively endorse the distinction between the two cases? I really didn't have much of a clear guess about how this would turn out, so I thought the easiest thing to do was to just ask them. I ran that study this morning. (NB: I used cases in which the manipulator either intentionally or unintentionally caused the agent to do an immoral action, not the cases that Josh posted originally.)

At least in this study, it turns out that participants did endorse the difference in moral responsibility between the two agents. They again said that the agent was less responsible when that agent's actions matched the manipulator's intentions. This was true across all four different scenarios I used (from Study 4a in the cognitive science paper). They also made the exact same distinction in the extent to which the manipulator caused the agent to act.* I genuinely found this somewhat surprising given that participants often exhibit a general tendency toward consistency in these sorts of within-subjects studies.

These data obviously can't settle any questions about whether one /should/ take the manipulator's intentions to be relevant to MR or causation (it is highly unlikely that participants were engaging in the sort of rational reflection that you originally meant to pick out), but it does provide evidence that people somewhat reflectively do take these intentions to be relevant to both MR and causation. At least to me, the obvious (and unsettled) question this raises is whether this tendency on participants' part is resulting from some sort of error or bias such that we can discount it.

Anyway, I thought you all might find this interesting.

*I'd be more than happy to make the data, analyses and materials available to anyone who is interested in seeing them.

Thanks so much Joshua. There's a lot going on in doing X-Phi, and frankly I'm not competent to criticize methodology, so anything I have to add is just the best use of wits I can muster!

BTW, I should explain to some Flickerers why I use "V. Alan": my first name is "Villard", my father's name and a quite common one in a small region of the deep rural South where I was born. He hated it and didn't use it, and I follow suit. Because there is at least one other Alan White in philosophy (and not just the accomplished and late Alan R. White!), I use "V." as a nom de plume identifier and nothing more. So call me "Al", as the song goes, or maybe better "Alan", so Mele doesn't get besmirched!

Matt,

Thanks for these very helpful further comments and, just generally, for all your thoughts on these issues.

Just to clarify, I was not trying to argue for any specific claim about which philosophical position these studies would support. My thought was just that the studies give us a better understanding of the psychological processes underlying our intuitions in these cases and that this information could be valuable in determining which of those intuitions would be most worthy of our trust. (Different philosophers might have different views about precisely which position the results support, but either way, it seemed that this psychological research had the potential to prove helpful.)

With regard to the question about causation, there is actually a lot of debate within the experimental philosophy community about how to interpret existing results. Some experimental philosophers think that people's causal judgments are being influenced by judgments of blame (along precisely the lines you mention). However, I actually disagree with that view. On the view that I favor, people's causal judgments are simply being influenced by the degree to which certain actions are seen as violating *norms*. Clearly, human agents can violate norms by performing immoral actions, but there is nothing distinctively agential about the relevant notion here. Objects that are not agents (and hence not appropriate targets of blame) can still violate norms and can therefore show all the same effects we find for human agents.

Jonathan,

That's a really exciting new result! I'll be really interested to hear what Derk and others make of it.

Alan,

Thanks for the name clarification. I'll be sure to call you Alan from now on, but I'm delighted to know that your actual first name is such an exotic one!

Hi Josh,

Thanks for the follow-up -- I'm enjoying the exchange, too. I get that the claim here is just about some useful data that might inform our understanding of the psychological processes that produce the relevant intuitions. My question, then, is about what sort of psychological processes we're getting information about. Or, put another way, how systematic are the results?

I take it that certain systematic mistakes, like the Müller-Lyer illusion, are very widespread. And then we can appeal to a shared visual system to help explain why. But even statistically significant results about folk intuitions often leave large numbers of folk who didn't share the intuition.

In any case (and maybe I'm wrong and virtually everyone responded in the same way), what should we say about folks who don't share the intuition? One of the main claims from the original post was "that people's intuitions about manipulation cases are affected by their perceptions of what is going on in the manipulator's mind". In order to get to claims about a systematic mistake, I should think we need to say something about those who don't share the intuition.
