Comments


Very interesting post. I hope my comment isn't too far off topic. As far as I know, Pereboom's cases in favor of incompatibilism all involve manipulation of one sort or another. (It's true that he imagines a case in which the manipulation is caused by a machine rather than by a human person, but the machine he describes is "spontaneously generated without design," in which case the machine *is* a human person for all that matters here.) Why not take it as a basic datum -- already accepted, as far as I know, by all sides in the debate -- that one's having been manipulated, as such, undermines one's moral responsibility? If we prefer to talk in terms of degree, then one is morally responsible only to the degree to which one wasn't manipulated. Isn't that common ground? If so, then examples involving manipulation are powerless to show that determinism threatens moral responsibility, because it's already threatened by the presence of manipulation. Even the hint of manipulation therefore vitiates Pereboom's cases.

It's an interesting psychological question (which x-phi should investigate) why we're tempted to conflate determinism and manipulation. I'd wager that it comes from our evolutionarily favored tendency to impute mind and agency pretty much anywhere we can, even to mindless causes.

Interesting stuff, Michael. I have a lot of thoughts about this, but let me just focus on one point. I think your philosophical hunch (about intuitions about weird cases being less trustworthy) is false. I think a much more important variable is the *strength* of the intuition (and the speed with which we have the intuition when we hear the case). So consider a case in which a Martian has implanted a device in my head and is controlling every choice I make by remote control, without my realizing it. We have a really strong (and really quick) intuition that in this scenario, I'm not free. This is pretty far from reality. But why should we doubt the intuition for that reason? If we compare this to a case that's "close to home" but really tricky (suppose you have to think about the case for a few minutes before you even know what your intuition is), then it seems to me that we should trust our intuitions about the former case over the latter case.

If I'm not mistaken, this fits with how linguists and psychologists think of things. Speed of intuition matters. It's hard for me to understand why distance from home should matter. I feel 100% confident in my intuition about the Martian remote control case. Why should I doubt my intuition because it's a weird scenario?

We can explore the spectrum of different cases (i.e., closer-to-life vs. further away) when considering whether or not determinism undermines freedom and responsibility, but I don’t believe analysis of any of those cases will bring us closer toward the truth. The same fundamental issue exists all along the spectrum, and intuition probably won't provide the answer.

In order to find resolution, I’m thinking that we need to focus on understanding control. What is it that actually controls an agent?

On the final point (re: whose intuitions): doesn't it depend on the project? If the project is to assess ordinary convictions, it seems that folk or "uninitiated" intuitions should be privileged. If the point is to assess something like "best set of philosophical commitments, all things considered," then I don't see any reason to think that judgments of the uninitiated are especially profitable to rely upon.

I don't understand why/how cases such as the birth of a child are supposed to count as manipulation (or watching a loved one suffer as in the case of Ann). If these mundane cases are actually cases of manipulation, then I'm afraid the term is too broad to be informative. It seems like everyone has been manipulated.

Michael, As always, a fascinating question. First off, I strongly object to the idea that we should allow those who have thought carefully about these questions but remain agnostic to decide the issue. So far as I can tell, Al Mele is the only person who fits into that category; and as much as I admire Al's work, I don't want to give him that power. The last time I agreed to make Al the arbiter of a philosophical issue, I wound up owing him a T-shirt. But on this question of “intuitions of the theoretically unpolluted,” I suspect such persons will be very difficult to find. It seems to me that most folks come to these questions with all sorts of theoretical preconceptions, many of which come out of exposure to religious views. When students try to consider a case of determinism, they frequently mix in questions concerning predestination; and they have a hard time thinking of determinism without also thinking of a purposive controller (perhaps, in response to Steve's comment, that is why it is so difficult for people to distinguish determinism from manipulation). And I suspect that concern over the injustice of being predestined to hell lurks in the background for many folks -- and that seems to me a very serious form of theoretical pollution, one that influences the strong commitment to libertarian free will. And my guess is that there are other strong influences operating. Think of the controversy in the U.S. Presidential election of 2012, when Obama (perhaps inspired by Elizabeth Warren) gave the speech about “if you have a successful business, you didn’t make that,” because you had the help of teachers and workers and those who built the highways, etc.; in reaction, many on the right waved signs saying “I built it.” That is indicative of a strong difference in views between “neoliberals” and those who have a “social democratic corporatist” view, on the question of individual responsibility. The possibility of finding “theoretically unpolluted” intuitions on these questions may be about as likely as discovering a “view from nowhere.”

Just a couple of thoughts on the interesting question about closer-to-life vs. further-away cases. I think you are right that we should give less credence to intuitions about *some* further-away cases. Cases that are so weird that they are really hard to conceive of, for example, won’t clearly yield intuitions that are responsive to the cases as described. (Some of the intuition pumps about personal identity strike me as being in this category.) But maybe I’m with Mark when it comes to straightforward, albeit further-away, cases. They even have one potential advantage over closer-to-life cases: we make lots of assumptions about our actual lives that might conflict with the stipulated aspects of a particular case. For example, there is some evidence that many people assume that the actual world is not deterministic (a Nichols and Knobe study found that a very large percentage of subjects agree with the claim that the actual world is not deterministic). Thus, there is a danger that even when a closer-to-life case is described as one in which determinism is true, subjects will not really accept this stipulation. Using further-away cases can help make sure we are really imagining what we are being asked to imagine. Of course, we may bring hidden assumptions to these cases, too, but there is reason to think we can avoid at least some of the ones we bring to closer-to-life cases.

Miguel,

Great stuff. A thought on your closer-to-life hunch. One way to give it some grounding might be to follow the lead of some folks who have expressed general skepticism about our modal intuitions. If I understand these people, and PvI seems to be one of them, the idea is that complex judgements about what's possible (about, for example, which worlds could or could not have been actualized, etc.) are a long way from the contexts that our evolutionary history would have made us adept at operating within. The suggestion seems to be that we may be able to have confidence in our modal judgements when we can trace them to the emergence of mechanisms that would have served us well in Darwinian competition. But whether I could have been an alligator, or whether a being than which a greater cannot be conceived is possible, are clearly not the kinds of judgements we should expect our development to have given us good tools for making. (By contrast, judgements of the form "that thing is faster than me... and also predatory" -- well, we can expect to be pretty good with these.)

So, here's a related strategy for your approach. Try to show that the on-the-ground, real-world cases tap into mechanisms of assessment we should expect our evolutionary progenitors to have passed on to us; whereas Derk's dumb cases all appeal to judgements we have no good evolutionary reason to think we would be good at making.

I don't know if it will work; and, as you know, I don't want it to work because I agree with Derk here. But there it is. And it MIGHT work.

"I don't understand why/how cases such as the birth of a child are supposed to count as manipulation (or watching a loved one suffer as in the case of Ann)." --Matt

Neither do I. Are they supposed to count?

I meant to include this bit in my comment about conflating determinism and manipulation. Others on this site know the literature far better than I do, but it's not hard to find examples of the conflation. In "The Case for Incompatibilism" (PPR, 2002), Gideon Rosen says that if it's determined by factors beyond the agent's control that the agent will fail to exercise his capacity to do what he should, "then it is as if he has been set up to fail." But no one is "set up" by determinism; being "set up" requires a *planner*. It seems to me that Rosen's "it is as if he has been set up" is more than a figure of speech here; I think he means it to do argumentative work.

Though we might acquire concepts by exposure to real life cases, it doesn’t immediately follow that concept application is less reliable in certain far out cases. Maybe the following analogy is helpful. We acquire the concept of ‘cat’ by being presented with real life cases of cats. According to one way of thinking about concept acquisition, we acquire the ‘cat’ concept by internalizing a prototype of a cat. Concept application on this view boils down to a similarity judgment. So, if asked whether an animated 3D hologram (generated by Martians, of course) of a robotic dolphin is a cat, wouldn’t we quickly and reliably judge that it isn’t? It’s a far out case, but we have no problem reliably employing the concept ‘cat’ to make a judgment. This might lead us to think that the ability to reliably assess something for similarity to a prototype is not diminished just because the case differs markedly from the kinds of cases that were instrumental for prototype acquisition. Indeed, as Mark and Dana have suggested, similarity judgments of this kind might be easier if the effect of the case’s far-out-ness is to make it obvious that the case lacks certain central features of the prototype concept.

An obvious reply to this is that the 3D dolphin hologram differs from normal cat concept application cases along the wrong dimension. The 3D dolphin case is far out precisely because the features likely to be found in our ‘cat’ prototype aren’t even remotely present. This makes the prototype similarity judgment really easy and seems to show that bizarreness of a case is not necessarily worrisome. To make Michael’s point, then, we should restrict our attention to bizarre cases where the features of the prototype concept do not differ markedly from how things are in ordinary cases, but where the context or perhaps the causal history of those features does differ markedly. So, the relevant far out cat case would be something like a flesh and blood creature that looks and behaves like a cat, but was grown in a lab by Martian biologists. In this bizarre case, it is harder to make a judgment (the lab-created thing looks and behaves like a cat, but lacks certain historical properties that may be part of our cat concept). This difficulty probably stems from the fact that our prototype concept of ‘cat’ is incomplete or vague. Though reliable judgment in ordinary cases is consistent with a certain degree of conceptual vagueness, judgments about bizarre cases that retain certain prototypical features of the target concept can be more difficult to make and might be unreliable.

Analogously, given that our concept of moral responsibility is likely incomplete or vague, maybe we shouldn’t trust our applications of it in far out cases that (1) retain prototypical features of moral responsibility and (2) give those features bizarre historical or relational properties. Since Pereboom’s first and second cases arguably fit this description, there may be reason to think Michael’s hunch is right after all. Because the cases are bizarre in the relevant way -- they test the limits of our concept of moral responsibility without violating them outright -- we might not be able to make reliable judgments about them.

A problem with this imagined defense of Michael's hunch is that this way of sorting out relevant bizarre cases from irrelevant ones can only be done if we know in advance which features a target concept actually possesses. We can say with confidence that part of the ‘cat’ concept is, inter alia, that cats are flesh and blood mammals that look really cute when they meow ( http://www.youtube.com/watch?v=z3U0udLH974 ). This makes it easy to sort cases as I did above. However, this sorting process is much more difficult in the case of moral responsibility, since there is considerable disagreement about what features a prototypically morally responsible agent possesses. In order to judge that an agnostic’s intuition about Pereboom’s case 1 is unreliable, we must be able to say that it is *relevantly* bizarre. This requires us to be able to say confidently that it (1) contains many core features of moral responsibility, but (2) is bizarre with respect to the causal history of these features (or something like that). But we can’t establish (1) unless we presume that compatibilism is true. This is surely an illegitimate assumption to make, given the context. Since we don’t yet know what the central features of morally responsible agency are, we need another way of distinguishing between the kinds of bizarre cases that support Michael's hunch and those that don’t.

Bruce,

I'm ok with your not giving me the power you mentioned. The great Penguins T-shirt you gave me makes up for it!

Hi All, Thank you for the great comments. Man, this is harder to do than I expected. It’s been 24 hours and I have 11 comments already. Let me try to respond briefly to everyone and then later in other posts I’ll expand upon a few points:

Steve, really glad you have joined our site. I think your suggestion that any manipulation should vitiate responsibility, at least to some degree, won’t help, I fear. If so, none of us would be responsible for buying our crappy used cars from crafty car salespersons. Both compatibilists and incompatibilists can agree that some manipulation, the sort that does not impair our ability to act freely, is not responsibility undermining.

Mark, I sympathize with your proposal. But what if we hold fixed the variables about strength of intuition (and a measure of it in terms of speed)? Suppose that in such a case the verdicts of the close-to-home and the far-away cases conflict. Now do we have any principled reason to favor the closer-to-home ones? I have more to say about your Martian case (the Martians could be like the moment-to-moment controllers in Pereboom’s Case 1), but I’ll set that aside for now.

James, maybe ‘intuition’ will not help. And again, there are tough questions about how we are using ‘intuition’, but to the extent that manipulation arguments are at the center of the current debate, there is certainly something like intuitions that are playing a key role. The role these ‘somethings’ are playing here is like the role played by our judgments about Thomson’s trolley cases, or Singer’s rendering aid cases, or Gettier’s lucky true belief cases. Something important is being tested and disputed here.

Manuel, you’re right. But I think describing one sort of project as *merely* assessing ordinary convictions suggests a deflationary picture of one sort of philosophical enterprise—namely one in which those convictions provide evidence for the content and contours of a concept.

Matt, well put! But note that Arpaly does not use the term ‘manipulation’ in describing these cases. And I do, but, at least when I am not being careless, I put scare quotes around it, as in my case of Ann. But so what if they are not, strictly speaking, manipulation cases? My opponent running the manipulation argument makes use of some premise or other to the effect that such and such kind of manipulation is no different in any relevant respect than being determined (in a certain sort of way). Note also that Pereboom himself, in his Case 3, seems to have in mind a history for Plum relevantly like some of the cases I have suggested. And I guess I would note that, indeed, you are correct: it seems that all of us, or most of us, have been “manipulated”. Compatibilists, I say, should make this point their friend.

Bruce, I really like your post. I agree. This is hard. The theoretically unpolluted are likely a mythic group. And the philosophically uninitiated often seem to have their views distorted by all sorts of nonsense. But then what’s left is just to let Al decide. And he’s got enough t-shirts, damn it all! Actually, seriously, this worry of yours does make me think maybe we should lean in the direction of Al’s strategy, or even revisit the presumption that Al, Derk, and I share, that the intuitions of the theoretically committed should not get to count. I mean, I admit that *I* have the intuition in Pereboom’s Case 1 that Plum is not free and responsible. It’s not like my theoretical commitment is straitjacketing me, keeping me from seeing the intuitive pull towards an incompatibilist diagnosis.

Dana, great point! I think there is wisdom in this. Further-away cases sometimes help us to gain a clearer resolution. (This is something Fischer often notes in reflection on Frankfurt cases.) Nevertheless, even if we attempt, as best we can, an other-things-being-equal qualification (to address Mark’s challenge), I still think there is something fishy about allowing our judgments about very esoteric cases to trump our judgments about cases that inform (okay, I’ll use the expression) our form of life. This is connected up, I suspect, with Dan’s point.

Dan, you’re the only one who came to my aid, brother. I was feelin’ no love. No love at all. Not until you posted. Thanks. I owe you a beer. Anyway, I actually did have some ideas like the ones you were floating in mind. I was also thinking about the relation between the proper extension of a concept and the process of coming to learn or acquire it; conceptual competence in acquisition is tested in cases that bear on our living our lives, not in cases that have us flying off to distant planets with talking toasters. So somehow the closer-to-life cases should be favored in some way. But again, I confess, I do not have full arguments here.

Steve, your remark about Gideon Rosen quotes him as saying it is “as if” he has been set up to fail. In fairness to him, I think we need to allow the ‘as if’ to remain. And in that case, he is just operating within the dialectical boundaries of the debate, in which incompatibilists like Pereboom want to say that determinism is no different in any relevant respect (it is as if…) than manipulation.

Thanks everyone for the comments! Keep ‘em coming. Hope I can survive this!

Michael,

Thanks for your replies. I hope we can agree that manipulation is an important concept (indeed, I'd say the most important concept) for compatibilists and incompatibilists to investigate. I predict that the clearer we get on that concept, and the more careful we are to distinguish it from other concepts, the better compatibilism will look. Rosen exploits a widely held intuition: insofar as someone has been literally set up to fail, to that degree he/she isn't morally responsible for failing. But then Rosen uses a perfectly hand-waving "as if" to link that intuition to a very different claim about the effect of determinism on moral responsibility. As if! What does it mean, anyway, for it to be "as if" someone was set up without in fact having been set up?

Echoing Bruce, so a dogmatist like me need not apply? But I never treated my intuitions as evidentiary in the 1st place. More like conversation starters a la Aristotle or Velleman. (Cf. My debates with the X-phi folks @ GFP.) What's more, I'll part company with them faster than I split from my wife at the mall. (I just wish I had more pocket-sized philosophy books.) I know the truth: FW is rectitude maintenance for its own sake. And I never let intuitions stand in the way of a good philosophy. So, e.g., I once intuited that there were freedom relevant differences between Pereboom’s manipulation cases and ordinary deterministic situations; in other words, I intuitively avoided the conflation of which Steve writes. Then I realized my intuition spawned a philosophy inimical to the aforementioned truth. Chastened and somewhat reluctant to trust an apparently corruptible top of the head, I now carry my well-worn (but sadly bulky) copy of Anselm’s Major Works wherever we go, just in case.

Michael, you suggest that we should put less credence in intuitions when they are responses to “bizarre cases”. But this can’t be right. Galileo imagined planes that had no friction, were of infinite length, and so on. Pretty bizarre stuff, or at least far removed from ordinary experience. But his thought experiments, though bizarre, are immensely clarifying. The intuitions they elicit are deep and important, and they helped establish important principles in physics.

The problem with Manipulation Cases is not that they are too bizarre. Kind of just the opposite. The phenomenon of manipulation is all too ordinary and ubiquitous, and it is something we genuinely hate, and for good reason. That’s why, when presented with the text of a standard manipulation case, we have a REALLY difficult time imagining the case as intended. We import our ordinary reaction to everyday varieties of manipulation into the case. The result is the world famous “manipulation intuition”—manna for the incompatibilist.

I agree with you that the way to deal with manipulation cases is to work really hard to make them clearer, and to make possession of the relevant agential factors really salient (as in your PPR 2008 paper). But the fact that we have to do this—and more specifically the fact that we have to work REALLY HARD to do this—illustrates that Manipulation Cases are unreliable not because they are too bizarre, but rather because they are too close to ordinary, real life cases in a way that distorts our intuitions.

Michael, another great clear question.

A question about the question: might it be objected, against the primacy of real-life cases, that they also problematically emphasize very different instances of free/unfree action, namely those involving explicit external sources as opposed to more subtle internal sources of free/unfree mind/brain manipulation? I'm thinking of examples of blackmail and extortion in the first case and racist skinhead socialization in the second. Both are real-life cases, yet they differ with respect to how easy it is to point out the influence on the agent. Someone blackmailed to act may have no choice in any sense assessed by ordinary values of justice; someone socialized to hate arguably either is caused (in some sense) to hate or freely (in some sense) chooses to hate. Maybe these real-life cases point to temporal distinctions about agents and freedom: the more time involved in producing a behavior that is related internally to an agent, the more we tend to blame the agent involved. Maybe they point to a classic difference about external versus internal sources of action as thus time-assessed. But my sense in any case is that people who are blackmailed get a lot more intuitive slack than skinheads, and not much about compatibilism or incompatibilism rides on those judgments, because “intuitive” judgments here are driven more by socialized levels of value-comfort than anything else. Intuitions about real-life cases are by my lights inherently socially contextual. Therefore nothing of metaphysical significance is entailed by evaluating a case as real-life, since such evaluation only reinforces the accepted social contexts surrounding it.

Thanks for generating this very interesting discussion. The question of which (if any) features of a thought experiment give us reason to give more weight to the intuitions it elicits strikes me as a very fascinating one. As Mark, Dana, and Chandra argued, I don't think that the mere unusualness of a case is an important factor. Indeed (and here I think I am just repeating Chandra's nice comments) the unusualness of a case might be a reason to give the intuitions it elicits more weight.

I also think Dana is right on the money in noting that if the unusualness makes the case hard to conceive in certain respects, then *that* might be reason to give the intuitions elicited by the case less weight. But then, as she points out, more familiar cases might also for that reason be too complicated.

I have real worries about the thought that various philosophical arguments should be directed to undecided, open-minded inquirers, and that the success of an argument is determined by some ratio of these persons being convinced (have I got the idea right?). Who are these people? Are they real, or are we just to imagine how we think they would respond? Maybe we need to punt these issues to the x-phi-ers. But I am inclined to think many of the same issues will arise there. Who judges who is open-minded and undecided? Who decides how to word these questions? These issues seem to embroil us in disputes that will likely end up dividing us along normal party lines.

Let me be clear that I do not think that this means x-phi work is unimportant. I do doubt, however, that it will break as much new ground (though it does certainly break new ground!) as some seem to suggest.

Alan,

Spot on; there is precious little metaphysics going on with real life reactions. They basically reflect a deep-seated tendency towards conformity, which is why they can philosophically serve only as conversation starters, a la Euthyphro's definition of justice. Students with the strongest intuitions, in my experience, tend to be those most practiced at reciting conventional wisdom, or, at least, the mantras of their peers. Confusing freedom and license, they are only too willing to treat those doing what they want because they want to do it in Frankfurt cases as acting freely. They are not a tough sell at all.

Chandra,

I fail to see why a case's proximity to real life would tend to "distort our intuitions." On the contrary, we routinely entrust juries to sort out the facts and arrive at reasonable judgments in criminal trials in which manipulation is an issue. Or: I sense the walls closing in on me as the empty-headed, careerist administrator attempts to persuade me to "make allowances" for what I consider to be nothing but diffidence. I've been around the block numerous times with her type; I know that she's twisting my arm, trying to get me to lower my standards -- and I hate it, if not her. (Thank goodness Materialism is false; otherwise there will come a day when we'll be unable to keep contempt under wraps.) If I "import" this reaction into an esoteric example, horrified by the plight of someone without even a fighting chance of resisting psychological tyranny, wherein lies the distortion?

Is it really true that when we attempt to apply concepts like ‘free will’ and ‘moral responsibility’ to deterministic manipulation cases, we’re applying them in contexts that are bizarre in the sense that they are contexts in which these concepts have at best been only sporadically applied, and that for this reason we cannot be confident about judgments involving these concepts in these contexts? For many centuries a perennial question in theistic cultures has been whether we can be morally responsible given divine providential control. And in the monotheistic religions, theological determinism has always been a live possibility. “Can we be blameworthy for our bad actions if they’ve been determined in accord with the divine plan?” is a question many millions of people have considered very seriously on many occasions. And Martian and neuroscientist deterministic manipulation cases vary only in minor ways from their theological counterpart.

But suppose we agree that application of concepts like ‘moral responsibility’ to deterministic manipulation cases involves applying them in contexts that are bizarre in the sense just specified, and that for that reason our confidence in judgments that involve applying them in these contexts should be reduced. Now we might ask: to what degree should our confidence be reduced? Enough to cast serious doubt on the manipulation argument? Such questions of degree are hard to answer. But one measure involves the extent to which one is willing to generalize the strategy. Making judgments that feature the concept ‘meaningful life’ about Nozick’s experience-machine thought experiment involves applying a concept in a bizarre context. Should this cast serious doubt on the experience-machine argument against hedonism? Should pointing out that Chalmers’s zombie scenario is bizarre count as a strong indictment of the judgments that fuel the conceivability argument? Fake barn country is bizarre -- should we agree that this counts as a strong objection to arguments about the nature of knowledge based on this thought experiment? My sense is that even if we should be somewhat wary of thought experiments involving bizarre or unusual scenarios, in each of these cases pointing out that the scenarios are bizarre doesn’t count for much. What’s more, if we reject these philosophical arguments based on thought experiments involving such bizarre scenarios, we would seem to be giving up on the only way – or at least the main way – we have for adjudicating the question being asked.

Would it count as manipulation if I made 16 points for you to respond to, Michael? Well, I won't, though your questions are great. For now, just one point:

1. I'm with Chandra that the problem with manipulation arguments is that they manipulate our intuitions, precisely because we have such strong intuitions about real-world manipulation, which is the worst possible threat to free will. In real-world manipulation our own rational processes, reasons, deep self, etc. are *bypassed* by the desires of someone else who has the power to ensure we'll do what he wants, typically without our knowledge and such that we can't even try to do anything to prevent it (unlike other threats such as coercion and compulsion). I discussed these issues on this blog a while ago here: http://agencyandresponsibility.typepad.com/flickers-of-freedom/2010/05/the-manipulation-in-manipulation-arguments.html
And as you know, I'm working on a paper that responds to MA, in part based on how it manipulates our intuitions.
(With my collaborators, I've also got some studies on this, but let me plug Chandra's great paper in PPR on this.)

Regarding your main question, I think it's impossible to find the perfect audience--the goal is to find people with theoretically uncorrupted intuitions who understand the relevant cases, but we are all subject to the very psychological influences we need to understand better (e.g., the subtle ways manipulation cases influence our causal and counterfactual judgments).

Incompatibilists don't mean to conflate the term manipulation with determinism; they are using the word in an inexact, metaphorical sense for a particular rhetorical effect. They know it will evoke strong "intuitions," and as "intuitions" are the only thing compatibilists will consider, incompatibilists must occasionally sacrifice exactness for the greater good.

This is just a quick comment in response to Phillip's post. I'll then draft a different reply to all of your more recent contributions. Phillip, I'm sorry I did not reply to you in my previous string of replies yesterday. I had not seen your post until after I had hit 'publish' on my replies to everyone else. Anyway, let me just say, I thought it was very intriguing. And I was grateful to see that, along with Dan Speak, at least someone was trying to lend me a bit of support--unlike all of my other so-called "friends" out there who are just piling on. You big meanies! So, I owe you a beer too.

Anyway, Phillip, I found what you had to say especially keen, since you were exploring a principled basis for when more esoteric cases should be regarded in a more skeptical light in our theorizing. (I'll have more to say about this in my comments to others.) In any event, what I did not follow is just what you wrote in closing. I would have thought that your condition 1 was not the problem, since proponents of the manipulation argument *want* the pertinent cases to have lots of the features of moral responsibility present. Right? It doesn't beg the question in favor of compatibilism to think this about the internal or intrinsic features of the featured manipulated agent. I would have thought the burden for me is with your condition 2 (something Derk speculates about in his recent comment). Some might say the history is not all that bizarre or far away. Anyway, I found your comments very suggestive. I'll have to think more about this. Thanks again!

Thanks for the post, Michael.

Regarding casuistry, I would like to raise a point similar to Manuel's: it depends on the project. One kind of philosophical project (the one that I aspire to do) is to promote conceptual change in order to understand real life cases for which there is no good explanation available. Clearly, in that kind of project, observed and well-documented cases take precedence. Notice, though, that this is not to deny that there is room for thought experiments. Even though in this kind of project the ultimate goal is not to get clear about them, hypothetical scenarios might be useful, say, as stand-ins for control conditions that do not or have not actually obtained.

A premise of manipulation arguments is typically something like, "The manipulated agent S isn't responsible for A-ing." We disagree about whether this premise is true.

What can we do when we disagree about whether it's true that P? One thing we can do is inquire whether P. A way to do this is to see if there are arguments having P or not-P as conclusion.

Another thing we can do is discuss the cognitive processes that are involved in judging the premise in question. Or who might be in the best position to judge whether this premise is true--whom to count as an authority. I suppose these inquiries might contribute to an argument with the premise or its negation as conclusion.

Michael, if I understand you correctly, your suggestion is that we pursue something like this last strategy. That might be fruitful. But why think that no inquiry of the first sort will do--that is, one focused directly on whether the premise in question is true, seeking arguments for or against the premise, without the detour into cognitive psychology or trustworthy sources?
