
01/03/2015

Comments


Thanks for your thoughtful post, Justin.

Consider:

If S ought not have done A, then S could have refrained from doing A.

Suppose S is driving in a dangerous area, and he knows that he really would need to look around to make sure that there aren't animals coming toward the road, etc. But nevertheless, for his own reasons (say, of laziness), he keeps his eyes focussed straight ahead. In my view, he ought not keep his eyes focussed straight ahead; he has sufficient reason not to keep his eyes focussed straight ahead. And yet we can also suppose that, unbeknownst to him, he has been afflicted just prior to the driving episode with a form of paralysis which would have made it impossible for him to move his eyes at all. So he couldn't have refrained from keeping his eyes focussed straight ahead. (Harry Frankfurt offered a similar Frankfurt-style case for omissions.)

The tie between moral notions, such as ought and having a sufficient reason (an objectively sufficient reason) to act, and alternative possibilities is not as straightforward as you here imagine. Yes, various philosophers (including Ish Haji) have made arguments for the link with alternative possibilities, but various of us would wish to resist those arguments, forceful as they are.

Thanks for this post - this is an issue I've been pondering recently too, and I look forward to hearing what everyone has to say. I think I agree with you that it would be a substantial loss if we had to give up moral 'oughts' of specific agent demand. What's less clear to me is why the free will skeptic (or anyone who thinks that causal determinism is incompatible with the ability to do otherwise - which of course also includes many compatibilists about moral responsibility and causal determinism) has to give them up. That is to say, I am skeptical about the OIC principle.

I agree that the OIC principle seems plausible in the Leroy example. It makes no sense to demand that Leroy save the drowning person, because no matter how Leroy deliberates or what he decides to try to do, the person will drown. In this case, the man's fate is completely independent of Leroy's deliberation and choice.

But that's very different from cases in which the outcome *does* depend on what the agent chooses. In those cases, it seems to me that specific agent demands make perfect sense - even if determinism is true, and even if determinism rules out the ability to do otherwise. Imagine an alternate version of the scenario involving Larry, who is in perfect shape (and who also happens to be an excellent swimmer and a trained lifeguard). If Larry chooses to save the person, then the man will be saved; if not, then the man will drown. And suppose that determinism is true and that this rules out the ability to do otherwise. Larry's choices and actions are causally determined just like everything else. In this case, I think we can make a 'specific agent demand' - we can say that Larry ought to save the man. And it makes sense from Larry's perspective, as he deliberates about what to do, to recognize such a demand - even if he ultimately decides not to act on it. If Larry refused to save the man, and then tried to argue that since his refusal was causally determined he therefore never had any moral obligation to save the man, I don't think we'd be very impressed with his excuse (just for fun, here's a comic I made that tries out this sort of excuse: http://chaospet.com/248-the-ultimate-excuse/ )

In other words, agent specific demands seem to me to be closely connected to what makes sense from the standpoint of practical deliberation, and not to any metaphysical claims about abilities to do otherwise than we actually do. So the only version of OIC that I would see any reason to accept is one that is formulated in terms of ability or control in this sense (maybe something like Fischer's 'guidance control').

Anyway I hope what I'm saying here makes some sense (I am a bit rushed right now, so sorry if I'm not being very clear), and I look forward to the discussion!

Hi Justin,

I only have some preliminary thoughts (and I see while I was writing this post, John already raised an objection similar to the one I raise below), but it seems to me that moral obligations in specific cases are not obligations to bring about results in a strict sense, but obligations to make choices, which depend on the available information, rather than on actual powers and/or abilities and opportunities.

For example, let's consider a variant of your scenario, scenario S2 (which I now see is relevantly similar to John's):

Leroy-2 was never paralyzed, is in great shape, and properly believes that he can easily save the drowning man DM (in probabilistic terms, he assigns a very high probability to the event that he will succeed if he attempts to save DM, and his assessment is epistemically proper given the information available to him).
As it happens, some alien experimenters abducted Leroy-2 earlier, put a chip in his head (which he has no way of knowing; they erased all memories of the event), and the chip will paralyze him if he attempts to save DM.
In scenario 2, Leroy-2 has no power to save DM, but if he failed to choose to save DM, his behavior would be just as immoral as if there were no chip (it might be suggested that Leroy-2 has the power and/or ability but not the opportunity; I'm not sure, but one can just modify the scenario suitably if needed).

In S3, Leroy-3 is in a wheelchair; his state of mind is the same as Leroy's, and he properly believes he does not have the power to rescue DM. In reality, though, aliens planted fake memories in Leroy-3's head. He was never paralyzed before the aliens abducted him, and at this point, he's paralyzed only because a chip in his brain is keeping him like that. Should he decide to save DM, the chip would deactivate immediately, and Leroy-3 would be able to stand up, run, swim, etc., and rescue DM.
However, he does not make the choice to jump into the lake and rescue DM, because he properly reckons he does not have the power or ability to do so (though he is mistaken).
In that case, Leroy-3 did not behave immorally (S3 is not an objection to the principle, but I think it highlights some of the conditions for the existence of obligations).

Based on those and other cases, it seems to me that the obligation is to make a choice (or refrain from making it), rather than to bring about some result, and whether there is a moral obligation depends on the information available to the agent (Leroy-k, in this case), but not on whether the agent actually has the ability and/or opportunity to bring about any results.

Ordinarily, people would say that a person has the obligation to jump and save DM (not merely to make the choice to try); I think those statements are true (if he has the ability, etc.), since ordinary language is not committed to the kind of precision required to address such unrealistic cases.

Granted, one might posit variants of the principles under discussion written in terms of having the ability or opportunity to make certain choices, but I think such variants might need to be addressed separately.

Hi Justin, and a great kickoff post.

About the two "oughts". The distinction Derk recommends (pun?) seems susceptible to a kind of axiological asymmetry Wolf observed about MR. Namely, if we use ought in retrospective criticism of someone who has done something bad, then we seem to assume something about an alternative sequence as an accessible route in terms of close(r) possible worlds (thus coulda-can along with shoulda). If we use ought in predictive recommendation (or counterfactual retrospect), then we attach some positive value to some sequence whether probable or merely possible, and that difference might mean something about entailments of can. Note the Lebron ought uttered in the proper historical context would be a probable outcome as normatively predictive, and the use of the term more strongly indicates that probability; an utterance "Hillary ought to win in 2016" is not improbable (prez elections and turnout for dems), though I'd think not as likely a normative prediction as in the Lebron case. So the recommending use of ought is contextually sensitive to whether something is likely or more remote. Thus there is a sense that even the recommending use entails can as a function of closer or more remote possible worlds--Lebron can (could) more than Hillary can, and I'd assert that difference irrespective of whether either won or not.

Time for a Saturday night movie.

Great post, Justin! I've been thinking a good bit about these issues myself.

You ask "can one be obligated to refrain from doing act A, if one cannot refrain from doing act A?"

I read this as another way of asking "can it be true that one ought not do A, if one cannot refrain from doing A?" Does that sound fair? If so, then here's what I've been thinking. See what you think.

I think that if it's immoral to A, then I ought not A. But, if this is right, then I think one *can* be obligated not to A, even if one cannot help but A. Suppose Sam exists in a world where all he can do is torture little babies for fun. I think it's necessarily immoral to torture little babies for fun, so I think that Sam's torturing little babies for fun is immoral. But, if that's right, then its being the case that Sam's torturing little babies for fun is immoral implies that Sam ought not torture little babies for fun, and this is regardless of whether or not he can refrain from doing so. So, if that's all right, then one can be obligated to refrain from doing A even if one cannot refrain from doing A.

Now, you could say that it's *not* the case that Sam ought to refrain from torturing little babies for fun since he can't do otherwise, and so--given the conditional that if it's immoral to A, then one ought not A--it's not immoral for Sam to torture little babies for fun. Which conclusion means that it's not necessarily immoral to torture babies for fun. Which conclusion shows, I think, that the morality of torturing babies for fun depends on Sam and what he can do (I think Ish thinks something like this--"wrong implies can", if I'm not mistaken). But that strikes me as some sort of moral relativism. And I don't like that. If you share my dislike for that conclusion, then I think you should conclude that one can be obligated to refrain from Aing even if one can't do anything else but A.

What do you think about that?

Justin, if the word 'ought' is ambiguous in that it has more than one meaning, why not simply clarify what those meanings are with more precise phrases, and then use those instead, to avoid the double meaning?

For example, can the axiological version of 'ought' be replaced with the phrase 'it would be ideal if'? If yes, then 'You ought to give money to the museum' becomes 'It would be ideal if you gave money to the museum.'

Can the deliberative version of 'ought' be replaced by the phrase 'it would be a violation (of some rule, law or moral precept) if you did not' do x? If yes, then 'You ought to give money to the poor' becomes 'It would be a violation of a moral precept if you did not give money to the poor.' And 'you ought not hit your mother' becomes 'it would be a violation of a moral precept if you did hit your mother.'

Once the different meanings of 'ought' are made explicit, it becomes explicit that only the deliberative meaning involves violations of or adherence to moral precepts. It would seem strange to argue that one is preserving morality under the axiological meaning, if that meaning has nothing to do with violations of or adherence to moral precepts.

Hi John, thanks for the reply!
You say: "The tie between moral notions, such as ought and having a sufficient reason (an objectively sufficient reason) to act and alternative possibilities is not as straightforward as you here imagine".

Point well taken, and FWIW I spend a good deal of time in my dissertation defending OIC, so apologies if I came across as if the tie were straightforward and without critics (I’m familiar with your criticisms (2003; 2007?), with a more recent one by Moti Mizrahi (https://www.academia.edu/318110/Ought_Does_Not_Imply_Can), as well as with older papers by Sinnott-Armstrong and others). In fact, I'm presenting a response to Peter Graham's (Phil Review 2011) counterexample to OIC at the Central APA next month. Anyway, let me offer a quick response to the case(s) you describe.

You say: “Consider:
If S ought not have done A, then S could have refrained from doing A.

Suppose S is driving in a dangerous area, and he knows that he really would need to look around to make sure that there aren't animals coming toward the road, etc. But nevertheless, for his own reasons (say, of laziness), he keeps his eyes focussed straight ahead. In my view, he ought not keep his eyes focussed straight ahead; he has sufficient reason not to keep his eyes focussed straight ahead.”

Now, when you say “he ought not keep his eyes focussed straight ahead”, are you referencing the ought connected to moral obligation, the ‘ought’ of agent specific demand, as Pereboom calls it? Or are you using the ideal ought, the ought of axiological recommendation? If the latter, then I have no qualms, though I think such a use is bankrupt, or at least leaves us wanting, as I tried alluding to with the museum example. But if the former (and it seems you’re suggesting the former), then I don’t see this as a counterexample to the connection I claim holds between moral obligation and alternative possibilities. I don’t believe it is the case that S “ought not keep his eyes focussed straight”. Why? Well, to say to S that he ought not keep his eyes focussed straight is to demand something of him that he cannot perform. It’s akin to me demanding that my 2-year-old Benjamin type out this reply to you. It seems odd, even unfair, as you alluded to in your 2003. For me, OIC is grounded in considerations of fairness. To demand that someone perform some action in the future that they cannot perform seems unfair. Likewise, when we make claims about what one “ought not have done”, as is the case in the (past-tense) example you provided, this presupposes that the agent had a demand against them; that the agent was given a specific agent demand to refrain from A’ing. But such a demand seems unreasonable or unfair if the agent can’t but A.

Also, I’m not sure that I would agree that S “has sufficient reason not to keep his eyes focussed straight ahead” in your example, though I’d like to think more on this. Can one genuinely *have a reason* not to stay focused if one is paralyzed in the way you describe? Sure, the reason exists, but in what way is it *the agent's* if it’s not something that can motivate one to act? And if it’s not the agent’s reason, then how is it a sufficient reason for *the agent* to comply with the demand? I’m thinking that for a reason to be an authentic reason for *me*, it must be a reason that I can act on in a strong sense.

Ryan, thanks for the reply (and comic).

I'm glad you also see the loss of the 'ought' of specific agent demand as significant! That said, I'm a bit concerned about your example as it seems that you're sneaking in some "choice" terminology.

You say that the Leroy case validates OIC, but that it is "very different from cases in which the outcome *does* depend on what the agent CHOOSES" (my emphasis). But I'm not seeing how they are different. If determinism, then how does Larry *choose* in any relevant respect? And if *Larry* isn't the source of the decision, if the source can be traced back to factors beyond his control, then in what way is it cogent to obligate him to save the man but not Leroy?

Hi Alan! Thanks for the kind words and thanks for the reply. I haven't even thought about an ability requirement for recommendations.

Roger, thanks so much for the reply. Very thought provoking! I have a bunch to say in response but will wait until tomorrow (well, later today). It's about time to get some shut-eye for me. But quickly, yes - Ish endorses wrong implies can. No - this need not lead to relativism.

Thanks for your patience. I will be sure to give you a much more detailed reply tomorrow, as you brought up some very good points and I thus feel required to respond appropriately ;)

Peter, thanks for the response! It's refreshing to learn that I am not the only one who finds it "strange to argue that one is preserving morality under the axiological meaning, if that meaning has nothing to do with violations of or adherence to moral precepts". FWIW, and to his credit, Pereboom does a great job making it seem not so strange. If you haven't yet read "Free Will, Agency, and Meaning in Life", put it on your short list. Given what I read from you last month, you'll disagree with much of what he says, but I think you'll be taken aback by the clarity and complexity of his arguments. I'm currently reviewing his book and reading it for the 2nd time. One of the best I've read to date! If I were a skeptic, Derk would be my doppelgänger, or whatever you call them.

Angra, thanks for thinking through this with me. I will respond tomorrow though much of what I said to John applies to the cases you mention.

Thanks for the reply, Justin. Let me be clear - I am not trying to be sneaky about the use of 'choice' terminology; I want to be explicit about it. I think Larry does indeed have a choice to make, regardless of the truth or falsity of determinism. He has to engage in practical reasoning - he has to weigh his options, decide which course of action he has most reason to act on, and choose one. One of the things it makes sense for him to consider is whether he should act on the moral obligation to save the drowning man. It makes sense for Larry in a way that it doesn't make sense for Leroy - because no matter how Leroy reasons or what he decides, the man will drown.

The same simply isn't true for Larry; the drowning man's fate depends on what Larry chooses (regardless of whether or not that choice can ultimately be traced back to factors beyond Larry's control). That's why I think it makes sense to obligate Larry to save the man, but not Leroy. That moral obligation can play a role in practical reasoning for Larry that it cannot for Leroy.

Justin, delighted to have you as guest blogger this month; you have launched the new year with a great post.
Ought implies can seems a philosophical superstition, coming forward from a time when many people (particularly Kant, but it’s also there in Plato and Aristotle) believed that there is a moral order to the world. When we think about it carefully, we know perfectly well that there is no such order; but it is a deep nonconscious belief, so we rarely think about it carefully; indeed, we rarely become aware that we hold such a belief. But (a key example in the extensive psychological research on belief in a just world) like the supposition that the rape victim must have done something to bring this on herself (good people don’t suffer such terrible crimes in our just world), the belief that we can fulfill all our moral obligations is a manifestation of that deep but dubious belief. Ought statements remain substantive and important, even when we realize that we cannot always do what we ought to do. You ought to be kinder to your students: I may not have realized that I was being unkind; or I might not have recognized students as sentient creatures who benefit from kindness; or I might have thought that students were benefitted by my unkindness (it makes them tougher, and philosophy is a tough endeavor). Even if I recognize that at the moment I lack that capacity for kindness to students, the admonition might motivate me to take steps to enhance that capacity. But even if some profound but specific psychological disability makes it impossible for me to be kind to students (I’m unfailingly kind to kittens and puppies), “You ought to be kind to your students” remains a robust and important statement of moral obligation, and not merely an ought of “axiological recommendation.” My incapacity for kindness to students is evidence of a serious moral flaw in my character; the incapacity of the man in the wheelchair to rescue the drowning person indicates no such shortcoming, and thus the moral admonition makes no sense in that context.
Think of a case from my history, which I obviously cannot change. Suppose I was hauled before the House Un-American Activities Committee and grilled before that awesome and frightening and authoritative body, backed by the full panoply of power and authority; and in that setting I betray my friends and “name names”. Looking back, I recognize that I did something terribly wrong, and something that I certainly ought not have done (or if you prefer, I ought to have been steadfast and remained true to my friends, and I did not). It seems to me to make perfectly good sense (even if I am mistaken in my beliefs, my claim makes sense) to say: I ought not have betrayed my friends; what I did was terribly wrong; I realize that given my youthful history of deep obedience to and respect for authority -- taught by my culture, my religion, and my family -- I could not have done otherwise; nonetheless, I certainly ought to have done otherwise. In that case, the ought seems much stronger than a mere suggestion: it names a powerful obligation, and one that I failed to fulfill, and I recognize and deeply regret that due to my own character flaws I did something terribly wrong; but I also recognize that (given my history) I could not have done otherwise: like the subjects of Milgram’s experiment, I was not even aware of this profound flaw -- I would have sincerely denied it existed -- before I caved in to HUAC. Recognizing that I committed such a terrible wrong -- which may be brought home to me when someone tells me “you ought not have named names” -- is important, and may lead to efforts to improve my flawed character.

By the by, determinists can make choices so long as they are not "bypassed," in Eddy Nahmias' great description; but that's another topic.

Hi Justin,

Can’t someone accept that one should not be blamed for doing what they couldn’t refrain from doing without accepting OIC? If that’s right, then your beef with Pereboom seems less about OIC and more about whether we can merit criticism for the unavoidable (or something in that neighborhood).

This also avoids having to parse multiple senses of ‘ought’ and the specificity of requirements and demands, etc. It seems to me to keep the disagreement simpler. (Or, if not amenable to Pereboom’s particular position, a potential skeptical position.)

Additionally, I wonder whether your fairness critique of Pereboom’s position isn’t part and parcel of thinking that losing the ‘agent specific demand’ (as you put it) would be a big deal. For if one didn’t think losing that demand-sense was problematic, and so viewed morality in different terms than the language of demands, then I’m less clear that it would seem unfair to, say, blame them for acting wrongly.

Of course, a lot hangs on how we conceive of the moral domain and the nature of blame, etc. But I take it there’s at least some wiggle room for how to properly connect up all these notions in a comprehensive moral theory.

I take it one route the skeptic can take is simply to claim that the truth of determinism + Kant's injunction that ought implies can = that in some important sense, determinism undermines any kind of responsibility that depends on the unconditional ability to do otherwise. For instance, because I think "deep desert" requires the unconditional ability to do otherwise and because I think that determinism undermines this kind of ability, I don't think humans have deep desert. So, my metaphysical commitments lead me to adopt skepticism about free will and to eschew retributivism as an adequate account of punishment. But I nevertheless think that in a deterministic world we can make sense of ought claims in some other important sense.

On this view, to say that someone ought to have done x (or refrained from doing x) is simply to say that the world would have been better had she done x (or refrained from x-ing) and x-ing (or refraining from x) was possible for the agent in the sense endorsed by compatibilists. So, while it makes sense to say that a neuro-typical adult should have x-ed (even in a deterministic universe), it doesn't make sense to say that someone who was cognitively or affectively impaired or under duress should have x-ed. In neither case would the agent in question deserve to suffer for her moral transgression, because each agent did the only thing "open" to her at the time of decision/action.
But there is nevertheless an important difference between the two agents that we can and should track.

For instance, while punishment wouldn't be deserved in either case, it might make sense in the normal case and not in the abnormal case. On this view, when I say the neuro-typical adult should have x-ed, I am highlighting several things at once: (a) it would have been better had the agent x-ed, and (b) the person is the sort of agent who had the conditional ability to do otherwise. I think (b) leaves open the door for justified punishment--which is where Derk and I part ways--even though the agent didn't have the unconditional ability to do otherwise at the moment of action. Perhaps I should call the two types of oughts in play here "unconditional ought" and "conditional ought." While determinism rules out the former--and hence rules out free will and moral desert--the latter leaves voluntary action and some forms of punishment intact. If determinism is true, at best it makes sense to say that an agent who x-ed conditionally ought to have refrained from x-ing--which simply means that the world would have been better had the agent refrained from x-ing and the agent is the kind of individual who could have refrained from x-ing under similar circumstances. This latter upshot of the conditional ought is why I think punishment can be justified even in the face of determinism--namely, punishment can make it more likely that both the agent in question and other agents refrain from x-ing in the future.

I, for one, don't think this is a big deal--although if everyone were to adopt the view, it would amount to a radical shift in how we treat agents who do things we think they conditionally ought to have refrained from doing!

So many great comments and ideas to think about. This thread has been super helpful! I'll get to Matt, Bruce, and Thomas in a bit, but first to those I left hanging. I apologize in advance if I ramble or get incoherent. Given the number of comments, I won't have time to edit my replies for clarity.

Roger, you say "if it is immoral to A, then one ought not A". I'd have to think more about this, but my initial reaction is to resist this conditional. Suppose I have 2 options, both of which are immoral in that they fail to meet whatever condition I have that makes an act moral. Surely, I should choose the act that is less immoral. If this is true, then I ought to do some immoral acts. Put differently, one ought to do the best one can. But if all one can do is one of a number of immoral acts, then one ought to act immorally given their set of options. Here, I'm thinking of Bernard Williams's case of Jim and the Indians (I'm not fond of the example's name...). One might think that Jim does something immoral by killing the one to save the 20, but still he ought to do it. Similarly, others might think it is immoral to walk away and allow the general to kill all 20, but given Jim's moral psychology and his life's plans, he ought not kill the 1 even though it would be immoral to allow 20 to be killed by the general. To be clear, I'm not rejecting your conditional; I'm simply pointing to ways one could resist it. Also, one could just bite the bullet and say that in the wider scope of morality sometimes one must act immorally for other considerations, such as love. Susan Wolf has a few examples that could be helpful on this front. In the cases she has in mind, obligations of love sometimes trump moral obligations. Thus, one could reject your conditional on those grounds as well. There are certainly responses to these pushbacks, but I think the conversation would lead us to our preferred ethical systems and we would hash out the differences there. Suffice to say that there are ethical theories that could endorse decision procedures where it could sometimes turn out that the conditional is false.

Another consideration is that of moral agency. Is a person who can only do one thing a moral agent, or an agent at all? I'm not sure that she is. If not, then the act in question, torturing little babies, might not even be an act, let alone an immoral one. It's more like a happening. Happenings can be bad or good depending on the case, but it would be false to describe such a happening in deontic terms, IMHO. Helen Steward's new book was excellent, and I think I'm convinced of agency incompatibilism (this is a post I'll put up later this month, so I welcome some feedback there as well if you have the time).

Lastly, you say:

"Which conclusion means that it's not necessarily immoral to torture babies for fun. Which conclusion shows, I think, that the morality of torturing babies for fun depends on Sam and what he can do (I think Ish thinks something like this--"wrong implies can", if I'm not mistaken). But that strikes me as some sort of moral relativism. And I don't like that. If you share my dislike for that conclusion, then I think you should conclude that one can be obligated to refrain from Aing even if one can't do anything else but A".

This is thought provoking! I'm not a relativist, but I do have particularist and virtue ethical leanings. That said, I would want to resist the move from something being right or wrong depending on the agent to relativism. Just because I don't admit that torturing babies is necessarily immoral (I think it almost always is), it wouldn't follow that I'm a relativist, would it? Virtue ethical theories focus on the agent's particular circumstances and character to posit what one ought to do, but I do not believe that VE is a relativist theory, at least not the sort of relativism I worry about.

Relativism of the sort I want to reject states something like "no ethical viewpoint is uniquely privileged over others". I'm not seeing how wrong implies can (WIC) commits one to this view. I can still hold on to a worldview that says one ought not kill babies (a general claim), and I would privilege one who agrees with me over others who do not.

Like Haji, I endorse WIC. For me, it just doesn't make sense to say something like "it was wrong for you to do X" if you could not have done other than X. It's like telling a robot it should not have performed its program, or that it was wrong in performing its program. It just seems out of place. I hold that things can be bad, but bad is different from wrong.

Now, one could say that explaining right and wrong is good for moral formation and on those grounds it would be warranted to say that what one did was wrong but in my next post I'll tackle the forward looking account of blame and try to raise some issues for those who endorse it.

Thanks again for your examples and questions!

Ryan, thanks for clarifying.

You say: "I think Larry does indeed have a choice to make, regardless of the truth or falsity of determinism. He has to engage in practical reasoning - he has to weigh his options, decide which course of action he has most reason to act on, and choose one. One of the things it makes sense for him to consider is whether he should act on the moral obligation to save the drowning man. It makes sense for Larry in a way that it doesn't make sense for Leroy - because no matter how Leroy reasons or what he decides, the man will drown."

Let's look at this from a distance. We know that Larry will not save the man; let's stipulate this to put him on a par with Leroy, in that neither man can save the drowning man. Thus, no matter how Larry reasons, the man will drown. I'm not seeing how determinism isn't doing the same thing that the paralysis is doing: preventing the man from being saved. Since it's not up to either one to save him, I don't think either is obligated to save him. I guess by now my leanings toward source theories are apparent. (That didn't last long.) Thus, given that we agree that Leroy is not obligated to save the drowning man (because he cannot), I'm not seeing why Larry is. If you want to say that it is open to Larry to save him, given his weighing of reasons and such, then it seems like you're appealing to alternative possibilities.

For me, the agent (Larry) isn't weighing anything at all. If determinism is true, then the weight was already assigned to his reasons well before Larry was born, which is why I don't think it makes much sense to say that he should have done other than what he did. I would have liked it if he had done otherwise, and it would be ideal if he had, but to say he *should* have implies (at least to my ears) that he could have.

Also you say "The same simply isn't true for Larry; the drowning man's fate depends on what Larry chooses (regardless of whether or not that choice can ultimately be traced back to factors beyond Larry's control)."

But if the choice is already made, so to speak (due to factors beyond his control), if only one future is open to Larry and that future does not involve him saving the drowning man, then how is he in any better a position to save the drowning man than Leroy?

Thanks again for the reply - I think maybe we're getting closer to the core of our disagreement.

So let's go ahead and assume that Larry in fact will not save the drowning man, and that his choice not to do so is causally determined (and that this rules out alternate possibilities). You say: "Thus, no matter how he reasons it will result in the man drowning." But this doesn't follow. If Larry had reasoned and decided differently, then he would have chosen differently, and the drowning man would have been saved. As Thomas put it, he has the conditional ability to save the man. Leroy doesn't have this conditional ability; even if he had reasoned differently and chosen differently, the man still would have drowned.

Even assuming that in some ultimate, metaphysical sense it is open to neither Larry nor Leroy to save the drowning man, it is still the case that Larry, at the moment of decision, had reason (from a practical, deliberative standpoint) to consider a moral obligation to the drowning man. Whether he does or not determines the fate of the man (even granting that whether he does or not is itself determined by other factors). Leroy, given his lack of conditional ability, didn't have any reason to consider a moral obligation to save the man; whether he does or not, the outcome is exactly the same. This, at least practically speaking, makes a huge difference (in much the way that Bruce nicely explains). Whether determinism is true or not, I think it makes sense, as deliberating agents who make choices, to consider all kinds of reasons for actions, including moral ones. And I think it makes perfect sense to maintain this even if you want to give up on the idea of moral responsibility or deep desert (which I don't, but that's a whole other topic).

But it seems that maybe what we're ultimately disagreeing about is whether people can really deliberate or make decisions at all given the assumption of causal determinism. It seems pretty clear to me that we can. I think this is something Pereboom takes up in his latest book as well, but I have to confess that I haven't gotten entirely through it yet (something I need to remedy soon). If you do think that Larry isn't really deliberating or choosing at all given the assumption of determinism, then I'd be curious to hear more about why you think so (but maybe that could be a whole separate post of its own).

Thanks for the response, Justin.

You say, "Suppose I have 2 options, both are immoral in that they fail to meet whatever condition I have that makes an act moral. Surely, I should choose the act that is less moral. If this is true then I ought to do some immoral acts."

OK, cool. I think this is what a lot of folks would say. But I don't like that way of looking at things; it denies the existence of moral tragedies, which, I think, are what make certain morally loaded moments so heart-rending. I'm thinking, for example, of the scene in the new movie "American Sniper" (I've not seen the movie, but the scene to which I'm appealing is in the previews--hopefully you'll have seen it) where the protagonist (the sniper) has, in his sights, a small Middle Eastern boy who happens to be carrying a possibly live RPG. The situation is such that, if the RPG is live and the boy is intending to kill the US soldiers, then the sniper's job is to kill the boy. But he doesn't know whether the RPG is live or whether the boy is intending to use it against the US soldiers. What to do?

I think a situation like this is emotionally charged for more reasons than the fact that a little boy might get shot. I think it's emotionally charged because the sniper is in a position wherein he can't do anything morally right; he has to do something morally wrong. He either shoots the kid and saves his platoon, or he lets the kid live and his platoon dies (or he kills the kid wrongly believing his platoon is in danger). There's nothing good to be done. Here, I think, the sniper is in a morally tragic situation. So, I think that whatever he does, he does something wrong. A lesser of two evils is still evil, no?

Now, when you say what you say in the passage I quoted above, I think what you're feeling (forgive my bit of psychologizing here; I'm just making a guess--feel free to refute, obviously!) is that you (or the sniper, say) aren't *morally responsible* for what you do. And I agree! MR requires the ability to do what you're required to do (or refrain from doing). But morality doesn't require anything like this (i.e., wrongness doesn't require my ability to do anything). Morality--that is, whether or not something is immoral--doesn't seem to depend on me, or my situation, at all. I think if it did, morality would be relative, and would lose its deep pull on us. (We'd also lose any deep notion of moral desert, which is something the skeptic would be in favor of.)

(BTW, I say morality would be 'relative' because whether or not A is immoral would be relative to my situation. Wouldn't it? If I can't refrain from Aing, then it's not immoral for me to A. That sounds like the morality of A is relative to my situation. I.e., it sounds like morality is relative on this view.)

You say, "Like Haji, I endorse WIC. For me, it just doesn't make sense to say something like 'it was wrong for you to do X' if you could not have done other than X. It's like telling a robot it should not have performed its program or that it was wrong in performing its program. It just seems out of place. I hold that things can be bad but bad is different than wrong."

Can a robot commit murder? If it can, then, if murder is necessarily immoral (isn't it?), then the robot can do something immoral. However, I think what we should say, in an instance like this, is that the robot isn't MR for what it's done. Now, if a robot can't murder (mere animals can't murder, after all), then a robot can't, in this instance, do the immoral act. It can kill, all right, but that's not to say it's done anything immoral.

Bruce, thanks for your kind words! It's exciting for me to be chatting with you all about these topics and I am very grateful to all who are chiming in and to those who may chime in in future posts. I'm coming down the home stretch with my dissertation writing so all of these comments are very helpful to my progress.

That said, I'm not sure I get what you mean by the "moral order". You seem to be suggesting that OIC comes from some just-world view, but I'm not seeing it. And even if it did arise as you say, I don't see why one can't give up that view while still clinging to OIC to make sense of cases like Leroy's. I do not believe that the world is just; in fact, I think it's much more unjust than just. I do not believe that rape victims "get what they deserve" when they are victimized, quite the contrary. That said, I don't see why proponents of OIC are committed to the abhorrent just-world view you describe. Maybe you could cash that connection out a bit more for me?

To be clear, I think a world ought to be just, speaking ideally. Following Aristotle, I see justice as an important virtue and one we need to contemplate often to flourish as human beings, but it does not follow (at least not clearly) from that that I should eschew OIC. Even acknowledging that we often fail to meet our moral obligations (I agree we often fail), it wouldn't follow that we cannot meet them. For me, I struggle to make sense of justice in a world without the ability to do otherwise. Justice seems to bottom out in some consequentialist rendering, and given our epistemic limitations this is very concerning to me. I'll be posting on these concerns next, so I'll save my deeper concerns for that thread.

Now, you say: "Looking back, I recognize that I did something terribly wrong, and something that I certainly ought not have done (or if you prefer, I ought to have been steadfast and remained true to my friends, and I did not). It seems to me to make perfectly good sense (even if I am mistaken in my beliefs, my claim makes sense) to say: I ought not have betrayed my friends; what I did was terribly wrong. In that case, the ought seems much stronger than a mere suggestion: it names a powerful obligation."

I agree that if one is not positing FW and/or MR skepticism, it makes perfectly good sense to say "you ought not have betrayed your friends". However, if one endorses FW skepticism, then the claim doesn't hold, because of OIC. You didn't have an obligation to refrain from doing what you did, because you couldn't have refrained. While I agree that the recognition of the harm you may have caused can affect future behavior, it doesn't follow that one can demand that behavior of you, or that you are warranted in feeling that you did something that you should not have done. To say otherwise would be to understand obligation much differently than I do. Would you think it was appropriate for Leroy to feel as you did about your scenario? Would it be true that he was obligated to save the drowning man? If not for OIC, then how would you make sense of the claim that Leroy is not obligated to save the man? Would you bite the bullet and say he was? Maybe they find a cure for his condition in the future, and Leroy's feeling bad for not saving the man will help to form his character in a way that allows him to save men when in similar situations in the future? This line seems suspect to me.

To be clear, when I hear you reflect on your case I would say that the sentiment is helpful in forming your future self to act differently in similar situations. But, to say that you should not have done what you did would be a stretch and an illusion.

I would say "I wish that back then I was more like I am today so that I could have been obligated to stand by my friends. If I was obligated to do so I would have done it because now I realize how much more important my friends are to me."

Also, I should add that the 'ought' of axiological recommendation is not void of power to help us form ourselves; it just does a shittier job, on my view. All is not lost just because we lose the 'ought' of specific agent demand. What's lost is the combination of the traditional 'ought' and a bunch of other concepts, a few of which I'll post on over the course of the month.


Hi Thomas, and thanks for your comment.

I agree that the conditional 'ought' remains in the wake of skepticism. Such an 'ought' does not imply can, although Alan's comment suggested that it may. I'm curious, though: since you are a skeptic and you think desert claims are unwarranted due to incompatibilist considerations (the loss of the unconditional ought), why aren't those same considerations doing work for you when thinking about obligation? I find it peculiar that skeptics like yourself and Pereboom (and others) are incompatibilists when it comes to desert-based claims and FW but compatibilists about obligation.

I do have some concerns about the conditional 'ought', though. First, on your view we would be failing our obligations nearly all the time, and this runs the risk of rendering such ought claims meaningless, or at least far less motivating than the traditional, unconditional ought as you describe it. For instance, the world would be slightly better if I gave 5 bucks to charity instead of 4. Thus, when I give 4, I fail an obligation (assuming I had the extra buck and the conditional ability to give it). So the forward-looking importance (I'm assuming this is one feature that appeals to you) might lose its hold if we recognize that any given failed obligation was just business as usual.

Also, the conditional ability you appeal to would render Leroy obligated to save the drowning man, wouldn't it? Leroy has the conditional ability to save the man. Just as Elton John has the conditional ability to play piano when he is in an airplane, Leroy has the conditional ability to save the man when he is not paralyzed. There is a world in which he is not paralyzed, or a world in which he suddenly regains feeling in his legs. Let's stipulate that there is a 40% chance that he will suddenly regain feeling in his legs one day, all at once. Is he now obligated to save the man even though he does not have the specific ability at the time the man is drowning?

Now, given what you've said, you might be okay with this result, given that the conditional 'ought' ends up being quite revisionist when compared to the unconditional 'ought'.

Anyway, thanks again Thomas. I'm looking to post on forward looking blame as well as punishment before the month is out so I look forward to hearing your thoughts if you have the time.

Justin,

Thanks for your reply.
Briefly, and with regard to John's example, I would suggest the following assessment: it's not the case that S ought not to keep his eyes focused straight ahead, but on the other hand, S ought to have chosen not to keep his eyes focused straight ahead. He would have failed to move his eyes if he had made that choice, but he still had a moral obligation to make that choice, and thus (in the moral sense of "ought"), he ought to have made it.
This assessment is compatible with OIC, but also respects the moral intuition that S is behaving immorally in John's scenario.

Similar considerations apply to the scenario S2 (i.e., Leroy has a chip in his head, etc.).

There's a lot in here, both in the starting post (thanks Justin!) and in the comments. One question I've often had about the axiological ought is just how different it ends up being from the obligating ought.

Consider the Leroy case. Assume for the argument that the obligating ought is limited by can, and so assume that this means it is not the case that Leroy obligating-ought to save the drowning man.

What about the axiological ought? Is it the case that Leroy axiologically-ought to save the drowning man? What would make that the case? I assume one axiologically-ought to do something if and only if doing that thing would make things better in some important way than not doing it. Well, saving the drowning man is presumably better than letting him drown, so it looks like Leroy axiologically-ought to save the drowning man, and it also looks like Leroy axiologically-ought not to let him drown.

But, and here's the thing: why stop there? Going through the experience of nearly drowning is traumatic. Assume that the man would be better off having never gone through the experience. Does Leroy axiologically-ought to travel back in time and prevent the event in the first place? Maybe so! Of course, that is impossible, but impossibility wasn't supposed to constrain our axiological oughts. So now it looks like Leroy axiologically-ought to go back in time, and therefore it looks like Leroy axiologically-ought not (!) to save the drowning man.

When I think about the obligatory ought, it seems to be the sort of thing that churns out one obligatory action (or, perhaps, a set of equally obligatory actions among which obligation gives me no reason to prefer one to another). When I think about the axiological ought, by contrast, it seems to give me some mammoth ranking over all logical (conceptual?) possibilities!

This is not to urge some conclusion here. But it is to urge hesitancy (a hesitancy I think friendly to Justin's skepticism). Given that axiological and obligatory oughts are different sorts of things, we should be especially careful thinking the one is a ready substitute for the other.

Hi Matt, thanks for the comment.

I'm having a tough time seeing the force of your suggestions about simplicity (likely me; I'm pretty beat and had some wine with dinner). Maybe if I give a (very) brief summary of my project, it will help clarify why I want to pursue this line of argument rather than couch it in terms of blame or criticism, as you seem to suggest. I think that acknowledgements of mutual moral obligations do much more work for us than just meriting criticism; for a nice back and forth between Nelkin and Pereboom regarding the importance of understanding 'oughts' of specific agent demand (Pereboom's term), see Pereboom's chapter on blame and obligation in his (2014).

My beef with Pereboom, as you describe it (which I kinda like), is much deeper than the OIC debate (or even blame), or at least I'd like to think so. I'm trying to show that folks who endorse skepticism (hard incompatibilism, and Pereboom more specifically in the diss) are committed to denying the truth of traditional deontic evaluations (moral obligation, wrong, and right), as well as appropriate overt blame and appropriately directed reactive attitudes in general, with a focus on forgiveness, gratitude, and reciprocal loving relationships. Couple those claims with similar claims about aretaic evaluations (proper pride) and agency incompatibilism, and you lose many of the truths that we rely on to hold our relationships together and to interact meaningfully in our world as distinct moral beings living with other distinct moral beings, or so I argue.

I do all this because I have an epistemological argument I'm working on that is focused on stakes. I'll save the details so as not to get too far off track, but roughly, by raising the stakes in the free will debate I make the claim that we 'ought' to believe FW skepticism (where FW is understood as the ability to do otherwise, or the ability to "settle") to be false. Oh, and there must be a plausible agent-causal view on the table (which Pereboom himself admits there is, even if it's at the very corner of said table). Anyway, the more I can disagree with him on regarding the second half of his (2001) and (2014), the more that helps my larger project against FW skepticism more generally.

So, my qualms with Pereboom are very specific (pun?) with regard to obligation, because the details matter. And given that I believe that other important concepts and practices (the ones I mentioned earlier) are importantly tied to our beliefs about traditional moral obligations (and not blame per se), I want to be clear on what his analogue to traditional moral obligation is, so I can see whether it is doing the work that he thinks it is.

Hopefully this made sense. But if I missed the point, feel free to reiterate it for me. Having to write through this and explain my position is helpful in itself.

Thanks for that excellent comment, Craig. Well stated! I'm glad that you at least recognize the cause for hesitation. Many seem quite ready to get rid of traditional moral obligation in a flash. I'm a bit taken aback, to be honest (though not by John's example, because I'm well aware of his disagreement from reading his work).

If I may ask, do you believe that the 'ought' of axiological recommendation is an adequate replacement?

Angra, I like your proposal. I think you're right to couch it in those terms, as those are the terms I mean when I say "S 'ought' to do A" (because of some issues that arise re: resultant luck).

