Blog Coordinator


03/02/2015

Comments


Eddy, thanks for explaining. I'm glad to see that we agree that our reactive attitudes and associated practices are internally revisable in light of philosophical and scientific considerations. While we don't agree on global skepticism, our agreement opens the door for an interesting and important conversation about the consciousness thesis. I think the kinds of questions I raised in my post are important precisely for that reason. Defenders of basic desert moral responsibility should acknowledge, as you do, that there are empirical constraints on (say) real self and control-based accounts of moral responsibility. I'm interested to hear from other proponents of such accounts as to whether they accept the consciousness thesis (or something like it) and what they think of type-II and type-III cases (I'm assuming agreement about type-I cases). Thanks again. (Maybe later in the month we can return to the issues you and Tom raise.)

Gregg,

Thanks for the thought-provoking reply.
With regard to your question about my point and epistemic doubt, I don't think my point would rule out all ascriptions of moral responsibility as inappropriate (by the way, is it equivalent if I talk about "blaming" instead? I'm not sure I'm getting that part right).
For example, let's say that armed robbery is identified within a population - we know it's happened. That's of course not enough to blame a specific person for armed robbery. But there are circumstances in which one can tell that a specific person committed armed robbery, beyond a reasonable doubt. In such cases, I think blaming that person is generally appropriate (assuming there aren't circumstances that justify the robbery in question and that one knows or should know about, etc.).

But I don't mean by this that the matter has to be taken to the courts before one can properly blame, because there is a difference between having enough info to tell, beyond a reasonable doubt, that someone did something, and having provided that info to a judge or jury in accordance with certain procedures.

For example, I think it's appropriate to say that Stalin is guilty of murder and of depriving people of their freedom without good cause, even if Stalin was never tried for that.
Also, if a person is attacked and robbed by an armed man, I think she's usually justified in blaming the attacker. And if she's the victim of fraud, she's also usually justified (if she has sufficient evidence) in blaming the con artist.

There are, however, plenty of cases in which blaming is unjustified, because people jump to conclusions about the minds of the people they blame (not just involving type-2 or type-3 situations), and/or because they make certain religion-based moral assessments, and I don't know what percentage of cases of blaming is justified. My tentative assessment is that most cases of blaming are probably justified, but that unjustified cases are much more common than most people seem to realize.

That aside, there is a question of when blaming attitudes are appropriate in cases in which one does not have enough to establish the matter beyond a reasonable doubt, but there is, say, a preponderance of evidence. What if one can tell that there is a 4/5 chance that a person did it? (In most cases, one is in no position to give specific numbers, but can reckon it's very probable, etc.)

While I think one should assign that probability and that's it (so, for example, reckon that someone is probably guilty, or very probably guilty), there are difficult issues regarding the appropriateness of the attitude of disapproval usually involved when blaming a person. I don't have a theory of that; I would have to consider the matter on a case by case basis (as I make moral assessments in general, I concede).

With regard to implicit bias in cases in which there are equally justifiable grounds for preferring one candidate over another, I tend to think that as long as the grounds are justifiable based on the info available to that person, the behavior is not immoral.

In the streetwise vs. formal education case, I do agree that being streetwise or being formally educated are good reasons to consider a person qualified, and I think even those with a sexist bias did consider those criteria: for example, if the participants had been told that an applicant was neither formally educated nor streetwise, they would have readily concluded that the applicant was not qualified, gender aside.
What I think is (probably; it depends on the specific situation of the person making the assessment) not justified is the assignment of certain weights - in the study, assessing whether formal education is very important, moderately important, or slightly important compared to other criteria like streetwiseness - at least not without first doing some research. That's regardless of whether the assessors are implicitly biased on the basis of gender, though gender bias may be one of the causes of such improper evaluations when it's present.

On the other hand, you make a good point that they were only asked. Maybe they were just taking a guess and would have said so if prompted. In that case, I think they did nothing wrong - not even those who were gender-biased. Still, what may be wrong on their part is to be generally so biased and not make a sufficient effort not to be. That might depend on whether they have enough info to suspect they are so biased, whether they can figure it out, etc. But I think other people do not generally have enough info to tell whether a specific person has such a bias, or is behaving immorally if he does.

In the TT philosophy position, perhaps a way to proceed would be to read the papers and make an assessment. In the end, one has to go with one's own intuitions (broadly construed), of course, since there is nothing else. But what if those intuitions are in turn affected by implicit bias in a way opaque to the agent?

I think the matter usually hinges on whether the agent is capable by means of normal effort and attention to figure out he's biased (but maybe a greater effort is required if bias of some kind has already been identified as common within a relevant population the agent is a member of, or in some other cases in which there is specific reason to at least suspect bias). But in practice, even in cases in which he could and should figure it out - and so, even in cases in which he's guilty - I think there would usually (but not always) be insufficient information available to other people to blame him for failing to correct implicit bias.

Angra, thanks for your thoughtful reply! I fear that I won't be able to do justice to all your points but here is my quick initial reaction. I agree that the epistemic concerns you raise are interesting and important ones. I will, however, leave it to those who believe blame is justified (in a backwards-looking sense) to say how they *know* (in any given situation) that an agent has satisfied the necessary conditions for such blame. If I were to put the concern in terms of real self and control-based accounts of MR, the questions would be: How do we know the agent's behavior was a reflection of their real self (their evaluative agency)? And how do we know the agent was (say) moderately reasons-responsive at the time? I have no theory on offer so I will leave it to others to take up your challenge. I take it, though, that this is a general concern that would run across many positions and isn't unique to the concerns I have raised.

As for implicit bias, it's important to distinguish implicit biases from explicit ones. I think in your examples you may be alluding to the fact that the agent might have both. If one holds an explicit bias--one that they have evaluated against their other personal-level attitudes--then that is a different story! One of the things that makes an implicit bias an implicit bias, I think, is that the attitude is opaque to the agent and is not evaluated against their other personal-level attitudes. It's for that reason that I find it hard to attribute it to the real self.

Gregg, very interesting post. These cases highlight the flaws of our all-or-nothing approach to moral responsibility. In all three cases, mitigation seems like a more plausible response than complete exoneration (especially since a significant percentage of participants in each case were unaffected). Another example: if I were a participant in the Milgram experiments, it seems appropriate to feel guilty (and deserving of blame) for going to 450 volts. But it's also appropriate to feel a little less guilty once we learn that situationist factors likely played a large role. The consciousness thesis is too strong to serve as a necessary condition; it's far more plausible as a mitigating one.

Tamler, thanks for your comments. Agreeing that these situations are mitigating of blame is a start (I'll take what I can get) but I'm curious why you think the consciousness thesis is too strong. I think Neil makes a strong case for the integrative function of consciousness and its importance to moral responsibility. And I agree with him that the two leading accounts of moral responsibility—real self (or evaluative) accounts and control-based accounts—are committed to the truth of the consciousness thesis despite what proponents of these accounts maintain. Perhaps you can spell out your reasons for rejecting the consciousness thesis more fully.

As for the Milgram experiments, it may be that although there were situational factors involved, the agents were still morally responsible because they did in fact satisfy the consciousness thesis--that is, they were conscious of the morally significant facts that gave their actions their moral valence. Not all situationist cases, I take it, would fail to satisfy the consciousness thesis. (I would have to think more about the Milgram case to see if I agree with what I just said--but that's one possible way to distinguish that case from the ones I discussed in my post.)

Gregg, I don't see how the participants in the Milgram experiments satisfied the consciousness thesis any more than the Israeli judges or the briefcase people. In all of the cases, they're conscious of the harm they're causing. They're just unaware of the factors that are influencing their decision to harm. But just to make the parallel cleaner, think of the Milgram variations where the scientist would wear a white coat (which would make the mean shock go way up).

And yeah, maybe the real self and control-based accounts are committed to the truth of the consciousness thesis. But I take your examples (to the extent that they really are committed to it) as more evidence of the limitations of those theories and really any theory that tries to pin down necessary and/or sufficient conditions for MR. As for why I think that, it goes back to what I said before. It would be appropriate for me to feel guilty even if I knew about the situational factors. And appropriate for someone to resent me. (Tom, don't say it! It's not for consequentialist reasons.) Which is of course why I think that Strawson's view has to be counted as one of the leading theories of responsibility!

Tamler, I'm not committed to saying that the Milgram experiments are different (again I would like to think more about it), but here is one possible way in which they could be different. You said that "In all of the cases, they're conscious of the harm they're causing." I would disagree (at least not as I described the two cases I discussed above). As I said in an earlier comment, the reason I think you cannot blame the agent for being selfish in the briefcase experiment (or for being rude in the priming experiment) is that he was not conscious "that he was acting selfishly" (under this or a similar description). When he evaluates his behavior against his other personal-level attitudes, he assesses a confabulated set of reasons for action—the confabulation occurring because he remains unaware of the true cause of his action. The key point is that he does not see himself as acting selfishly (which I think is plausible to assume in such cases). I think that might be different in the Milgram experiments.

Of course, a lot rides on the empirical details of these cases—and I am totally fine with that. What's relevant, though, for the consciousness thesis is awareness of the morally significant facts that give an action its moral valence. I want to leave it as an open empirical question the extent to which agents fail to satisfy the consciousness thesis.

As for your skepticism (or metaskepticism) of all necessary and sufficient conditions for MR, I'll have to leave that for another day ;)

Gregg, you say that "The key point is that he does not see himself as acting selfishly" in the briefcase and rudeness cases? But what's your basis for that? That they didn't know that the briefcase or priming were influencing their behavior? That seems irrelevant to whether they knew they were acting selfishly. In any case, I still don't see any difference between Milgram and those cases except for the seriousness of the harm. (Which may be what's driving intuitions about the moral responsibility of the participants.)

Also, my criticism of theories with necessary and sufficient conditions is independent of the metaskepticism I defend in my book. Even if there were no cross-cultural differences in attitudes about MR, we should still be dubious of this methodology. For reasons your post highlights!

Gregg,

Thanks for the thoughtful reply again. I'm afraid I won't be able to address all of your points, but as someone who thinks blame is justified in some (many) cases, I'll try to address what I think is the main point of your reply - or, more precisely, to shift the burden, I admit - and then a second one; please let me know if there is another matter you'd like me to address.

So, you raise the following challenge: "I will, however, leave it to those who believe blame is justified (in a backwards-looking sense) to say how they *know* (in any given situation) that an agent has satisfied the necessary conditions for such blame."

I concede I don't know - I tackle the matter on a case by case basis, intuitively.
But I think what you ask is a bit too much to demand. Isn't that what humans normally do when making moral assessments, as well as many other kinds of justified assessments? (i.e., tackling the matters intuitively on a case by case basis)
For example, I know that there is a blue pen cap right in front of me, on the desk, next to the computer screen. But if you asked me to explain how I know that the pen cap has satisfied the necessary conditions for blueness, I'm not entirely sure I would succeed in giving the right explanation. I can say I have no sufficient reasons to reject the verdict of my faculties (in this case, the relevant part of my eye/brain system) regardless of philosophical challenges from color skeptics. I could even give it a shot and raise objections (if I have sufficient time) to said challenges, but in my assessment, generally people are still justified in holding color beliefs even if they are in no epistemic position to either explain how they know an object meets the necessary conditions or address the challenges from skeptics, and even if they come to know that some philosophers hold skeptical views (but if you're a color skeptic, please let me know and I will just pick another example instead of color).

Similarly (I think), I know that when some fanatics throw a man off a roof for having consensual sex with another man, and then stone him to death after he survives the fall, they behave immorally. If you were to ask me how I know that they meet the necessary conditions for behaving immorally, I would say that I see no sufficient reason to reject the verdict of my faculties despite the challenges from people holding different views (from moral error theorists to those who think it's praiseworthy to inflict the punishment in question), and I could even raise objections to their views, but as in the previous case, I think most people are justified in holding that the actions of the fanatics were immoral, even if they are in no position to explain how they know that their behavior meets the necessary condition for immorality, or to properly address a challenge from, say, moral error theorists.

Granted, philosophers specialized in ethics and metaethics plausibly are generally in a better epistemic position to assess such matters than other people are, and your questions are directed, I guess, at them (I'm shamelessly barging in!). However, given the variety of mutually exclusive views in both metaethics and first-order ethics, I reckon that perhaps even most philosophers specialized in the field(s) would not be able to come up with the right account. But regardless of whether they would be able to do so, I think they're justified in holding that the behavior of the fanatics in question is immoral.

Btw, even though you're a skeptic about FW and MR, if I'm getting your views right you're not a skeptic about, say, justice (btw, what's your take on moral wrongness/immorality? Do you think the actions of the fanatics are morally wrong?). So, if you agree that the actions of the fanatics described above (one can find very recent real-life cases if needed) are unjust, one might ask that you explain how you know that their actions meet the necessary conditions for injustice. Granted, you may have a theory on that, and your theory may be correct, but the point I would stress here is that even if you didn't have a theory, or even if you have a mistaken theory, you are still justified in holding that the actions of the fanatics are unjust.

So, with those examples in mind, I'll try to address blame:

In my view, by saying that the fanatics behaved immorally, I'm already blaming them, so if I'm justified in holding that they behaved immorally, I'm justified in blaming them. That is not the same (though it is related) as saying that I'm justified in having an attitude of disapproval toward them, but I also hold that I'm justified in having such an attitude - and I assess the matter intuitively too; I think the burden is on the skeptic here, as it generally is.

Still, if your question is about holding that they deserve punishment, then I hold that too, also intuitively, and I would also say I don't see enough reasons to reject the verdict of my faculties in a case like that.
*If* someone claims (I'm not claiming that you do claim that) that the case of the fanatics is relevantly similar to a case in which I do not believe that the agents deserve punishment (or behave immorally, etc.) - perhaps, some type-2 case -, I would ask them to support the claim; until then, I trust my faculties on the matter.

Another issue: You say: "I think in your examples you may be alluding to the fact that the agent might have both."
Yes, at least in some of the cases (I was trying to consider different options), but I'm suggesting that in at least many cases in which implicit bias is at play, there is also some explicit bias involved, and so regardless of whether the agent deserves blame for the implicit bias, they may deserve it for the explicit one.
That said, I don't actually blame any one of the participants of the hiring experiment, or any one of the judges in the other experiment. I think I don't have enough information for blame. But I do blame the fanatics, as many other people do, in the sense that I hold their actions to be immoral; additionally, I hold that they deserve punishment - I don't find the arguments I've seen so far against that view persuasive.

Tamler said: "I don't see how the participants in the Milgram experiments satisfied the consciousness thesis any more than the Israeli judges or the briefcase people. In all of the cases, they're conscious of the harm they're causing".

That's the heart of my disagreement with Gregg. They didn't satisfy the consciousness thesis any more than the others you mentioned. But they did satisfy it; my thesis focuses on the morally significant features of the action. Gregg is concerned with consciousness of the springs of our actions. So don't move from considerations about the Milgram experiment - about which we agree - to the conclusion that the consciousness thesis is too strong. Gregg's claim is that it is too weak, because it gives the verdict you're arguing for.

Gregg--

Well, your first post has trumped my entire month in terms of comments! I'm Gaga over it! (As it were.)

The consciousness thesis must distinguish intelligible constraints as to (i) what must be (as necessary conditions) descriptively possible as factors in consciousness that are accessible as contents of consciousness in making properly evaluative judgments of responsibility assessments, and (ii) what may be such factors in consciousness (if any) that are not subject (as necessary conditions) to responsibility judgments as being properly accessible to consciousness. It seems to me that the only possible relevant distinction here is epistemic accessibility to consciousness of such factors, along with some account of the propriety of each in either case. But both features of accessibility and propriety are conjointly necessary to delimit the presence or lack of the necessary conditions for responsibility assessment. Unconscious stuff not possibly accessible to consciousness in an actual case, such as automatism, fails that test. Possibly epistemically accessible stuff such as the judges' behavior--they might very well come to know the patterns of their own behavior over time--is more problematic. Unique instances of situational-causal influence (the briefcase) are not easily knowable or thus accessible in an actual case, and thus might well stand outside the consciousness thesis. But to place any such factors as inside or outside the normative boundaries of the consciousness thesis requires the propriety of any such stuff as possible or actual, and brings up the question of what matters as it relates or fails to relate to consciousness in making responsibility judgments in terms of possibility or actuality.
If one presupposes incompatibilist/ultimacy values as part of what defines the necessary conditions, then a lot of what must count as not- or just-possibly epistemically accessible to consciousness is thus factually exculpatory as merely causal without producing conscious manifestations (automatism, implicit biases, unique situationism), but if one substitutes non-incompatibilist values as part of what defines the necessary conditions of responsibility, then one brings in more action-defined and reactive-attitude values (via the consciousness thesis of others as assessors, as I said above, as one example) that assess just the actual conscious states of one who has acted immorally as in fact possibly responsible even though psychologically caused to act in a particular known way. Any valued possibilities here that are considered (but not actual-sequenced) in some contrastive way show how the actual-sequence set of reasoning (or failure thereof) works counterfactually (epistemically and metaphysically) to sustain the value assessed originally in that actual case (e.g., reasons-responsiveness thus assessed). Of course these actual-sequence and counterfactual values may need further specification as to epistemic or metaphysical relevance in terms of expressing true selves or whatever. And maybe this is all just what I called above valence-resetting for evaluation of the consciousness thesis. Or, maybe this is just restoring the proper central issue to the question of what is deemed valuable in assessing responsibility in the first place. In any case I argue that it is rock-bottom values that drive asserting the necessary conditions for the consciousness thesis, and the agent-privileged account of incompatibilist-type ultimacy of responsibility seem to be the ones that count here, and if denied, weaken the force of this version of the consciousness thesis as far as responsibility is concerned.

I'm going to have to keep this response short but I wanted to post a quick reply before heading off to teach my morning classes. (I go to sleep for a few hours and when I wake up there are several interesting comments!) Hopefully I can respond more fully later.

Tamler, I would just like to reiterate Neil's comments. I don't want my poor defense of the consciousness thesis or my extension of it to type-III cases to reflect negatively on Neil's excellent book! Neil disagrees with me about the situational cases. He thinks agents in type-III cases do in fact satisfy the consciousness thesis. So I think he would agree with you about the Milgram experiments, but he would also agree with you about the other situationist cases as well. I, on the other hand, think that given a certain reading of those cases they may also fail to satisfy the consciousness thesis. I understand that my reading may be incorrect but that is one of the things I wanted to explore with this post.

Angra, I apologize for keeping my reply short. While I understand the intuitiveness of your examples I think they may be obscuring the deeper point (which I think we agree about). Your point about the pen makes sense, but when it comes to agency you would admit that important factors (factors we may not be able to settle epistemically at the time) would either mitigate blame or excuse it altogether? To take an extreme case, if you didn't know Kenneth Parks was in a state of somnambulism at the time he was stabbing his in-laws you would blame him, but once you realize that he is in such a state you would think differently. Likewise, if you saw someone do an immoral act but found out later that they were forced to do it (perhaps someone was holding a gun to their wife's head) you would make a different judgment about their blame. I thought your original point was that it is hard to know (at any given moment) whether an agent satisfies the necessary conditions for MR. I'm willing to agree. But as I said earlier, that seems like a more general point that's a problem for all necessary conditions of MR, not just the consciousness thesis. Your other points, I think, go beyond the scope of this post so I will hold off on responding now. They may be relevant, however, later in the month.

Alan, your comments require a more thoughtful reply than I have time for at the moment. If you don't mind I will try to get to them (and whatever other comments appear by that time) when I am done teaching for the day. (The unfortunate nature of teaching a 5/5 load!!)

Alan, thanks again for your comment. You write: "I argue that it is rock-bottom values that drive asserting the necessary conditions for the consciousness thesis, and the agent-privileged account of incompatibilist-type ultimacy of responsibility seem to be the ones that count here, and if denied, weaken the force of this version of the consciousness thesis as far as responsibility is concerned."

I'm unclear on why you think the consciousness thesis privileges "incompatibilist-type ultimacy." I wasn't assuming anything about ultimacy or incompatibilism in my post (or at least I wasn't consciously doing so). Can you try to make your concern clearer for me? (I'm probably missing something obvious here--but that wouldn't be the first time.)

Perhaps your point is that I am starting off from the wrong end of the problem. I take it that you start by taking our reactive attitudes and practices as given(?) and work backwards. I, on the other hand, am interested in the metaphysical conditions necessary for moral responsibility. If this is what you are getting at, then yes we are coming at the problem from opposite directions.

Instead of arguing the merits of which approach is the correct one here, perhaps I will just say that this post is directed at those who are interested in discussing necessary conditions for MR and are interested in whether the consciousness thesis is assumed by real self and control-based accounts of MR (sorry Tamler ;)

Gregg,

"Your point about the pen makes sense, but when it comes to agency you would admit that important factors (factors we may not be able to settle epistemically at the time) would either mitigate blame or excuse it altogether?"
I agree that situations like that are more common when it comes to agency, but I still think there are plenty of cases in which we do have sufficient information to make a proper assessment.
In any case, generally my point is that one does not need to know how one comes to know that some conditions are met in order to be justified in making an assessment.
But if the color example is not close enough, we may consider the justice example: in the case of whether an act is just or unjust, there are also plenty of important factors that might affect the matter. But wouldn't you agree that, in many cases, we can still tell that a behavior is unjust?

"To take an extreme case, if you didn't know Kenneth Parks was in a state of somnambulism at the time he was stabbing his in-laws you would blame him, but once you realize that he is in such a state you would think differently."
Granted. But such cases (i.e., stabbing people while in such a state) are so rare that the previous assessment would have been justified, even if later evidence led me to change it.

But what if some people are throwing gay men from rooftops (and stoning them if they survive), stoning people for adultery, taking slaves, turning women into sexual slaves, or forcing women to marry them and then raping them, etc., while openly claiming that those practices are morally acceptable and even morally praiseworthy?

Granted, some of them might now be forced to do such things or else be killed by the others - which would mitigate in some cases, but not eliminate, the immorality of their actions - but many clearly were not forced into that. They chose to travel to a war zone and join IS, and there is enough evidence to make that assessment.

"Likewise, if you saw someone do an immoral act but found out later that they were forced to do it (perhaps someone was holding a gun to their wife's head) you would make a different judgment about their blame."

True. More precisely, I may well make a different judgment about whether their behavior was immoral. As for their blame, as I mentioned, if I still reckoned that their behavior was immoral, I think I would be blaming them (i.e., I think that "Agent A behaved immorally" is a way of blaming agent A), but if I no longer hold that their behavior is immoral, then I would no longer blame them.
Alternatively, I might make a different judgment about how immoral their act was, even if I still hold it to be immoral.

"I thought your original point was that it is hard to know (at any given moment) whether an agent satisfies the necessary conditions for MR."
One of my original points was that it was very hard to see whether implicit bias was present, but a later point was that it was difficult also in many cases to know whether an agent behaved immorally, and that mistaken attributions of immorality (or MR, if you like) are more common than most people seem to think (I would add that a significant portion of the problem comes from mistaken attributions of intent, and those often happen because people tend to think others have information more or less similar to the information they have; in the present-day world, that's very often not the case. I would expect that such mistakes were less common in the ancestral environment, in small communities of hunter-gatherers, though they made other mistakes that aren't common now).

Epistemic issues aside, and with regard to the consciousness thesis, I find it implausible.
One reason is the second condition, which states: "(b) we possess responsibility-level control only over actions that we perform consciously, and that control over their moral significance requires consciousness of that moral significance."
As I see it, there are plenty of cases in which choices would be immoral even if the person does not believe that the choice is morally significant.
In other words, it may well be that agent A believes that choosing X is morally permissible, and choosing ¬X is also morally permissible (and neither is praiseworthy, either), but even so, agent A behaves immorally in choosing X (and if you think that's not enough for MR, I would add they're guilty, they deserve to be punished, etc.).
For example, let's say that a member of IS believes it's morally permissible but not obligatory to force a Yazidi woman to marry him, then rape her (which he wouldn't call "rape", but regardless), then divorce her, just for pleasure. He also believes that neither course of action is morally praiseworthy. So, in short, he believes the choice is not morally significant.
If he goes ahead and does that (i.e., he rapes her, etc.), I reckon he behaves immorally (and so, he is blameworthy, but if you don't think that blameworthiness follows from the fact that his behavior is immoral, then I also add that I reckon he is blameworthy), deserves punishment, etc.
However, that seems incompatible with condition (b), since he is not conscious of the fact that his choice is morally significant, so he would have no MR according to the consciousness thesis, and hence (at least, if I'm getting MR correctly), it is not the case that (he is behaving immorally and he is blameworthy and he deserves punishment).

If, as Levy argues, real self and control-based accounts are committed to the truth of the consciousness thesis, then given that my moral assessments in that context appear much clearer to me than either such account, I would conclude that both of them are very probably false.
Gregg,

With regard to the consciousness thesis, I'm not sure I'm getting the conditions right.

In particular, condition (b) states "we possess responsibility-level control only over actions that we perform consciously, and that control over their moral significance requires consciousness of that moral significance."

Let's say Alice is a consistent [substantive, not epistemic] moral error theorist.
So, she believes that nothing is morally good or bad, morally right or wrong, or morally obligatory; that no one is morally blameworthy, and so on.
Then, Alice never meets condition (b), since she's never conscious of the moral significance of any facts.

On the other hand, in your précis, you say:

"The consciousness thesis is the claim that an agent must be conscious of (what she takes
to be) the facts concerning her action that play this important role in explaining its moral
valence; these are facts that constitute its moral character. (2014, 37)"

I was under the impression that the condition for MR stated is: "For every fact F concerning A's action such that A takes F to play an important role in explaining the action's moral valence, A must be conscious of F."
But in that case, Alice always meets the conditions of the consciousness thesis, since there is no fact F that she takes to play such role.

So, in light of that, now I'm thinking that the statement in your précis shouldn't be interpreted as I did above, but I'm not sure how to interpret it (or did I misunderstand condition (b))?
Could you clarify that, please?

Angra, in your example Alice's meta-ethical views have no bearing on the consciousness thesis. If Alice is aware of the morally significant facts that give her action its moral valence, then she satisfies the consciousness thesis regardless of whether she's a moral error theorist, a moral nihilist, a moral realist, or what have you. Those philosophical beliefs are not relevant to the kind of consciousness thesis I am considering. If she is conscious that she is stealing a candy bar without paying for it, and other conditions for MR are met (whether they be control-based conditions or real self conditions), then she's morally responsible. Hope that helps.

Hello Dr. Caruso and Flickers Community! I really enjoy reading this blog and am constantly impressed by the quality of the philosophy being posted here (especially since I assume you post this stuff during your "down-time").

I've spent much of the last year thinking about Levy's book, and am inclined to put forward the following proposal (despite having a hard time finding principled grounds to disagree with the book): one might think that there are actually three versions of the CT one can put forward, where Levy defends a moderate version (MCT) and you defend a strong version (SCT), with strength measured in terms of how much an agent must be conscious of in order to be morally responsible for their conduct. However, why not think that, for the purposes of moral responsibility, we should instead accept a weak version (WCT): an agent is required to be consciously aware that they are in a situation where moral considerations may be operative/important, in order for them to be responsible for their action in that situation? (This would allow us to accept CT but still let us "blame" at least some people in implicit bias scenarios, assuming that there is something to the "intuition" that blame is appropriate in such cases.) Just a suggestion, but it might make CT a bit less unpalatable for some.

Thanks for your time,
Francis

Francis, thank you for your post and your proposal. I see no reason for accepting WCT. In the implicit bias case, for example, I find it hard to say that the implicit sexism is a reflection of the agent's real self since they are unaware of it and are unable to evaluate it against their other personal-level attitudes. (This is assuming, of course, that the agent does not *also* endorse explicit sexism in hiring practices.) Following Neil, I also think the agent fails to exhibit responsibility-level control over this morally significant feature of their choice. For those reasons, I think a weaker CT would fail to capture what's relevant. Since real self and control-based accounts of MR are the best candidates we have for necessary conditions for MR, and (following Neil) they seem to be committed to the CT (in Neil's sense), I see no reason to accept a weaker version.

BTW, my original intention was not to defend a stronger thesis than Neil's. What I was trying to do (and probably failed) was to extend the range of cases that Neil's thesis would excuse. (I could, however, be deluding myself about my intentions...but don't blame me if I am.)

Francis, I think of myself in a scenario like Uhlmann and Cohen (2005), the police chief case. Participants were asked how objective their judgments were: ratings of objectivity didn't correlate with being unbiased (and in some other experiments, like some of Keith Payne's work, asking people not to allow biases to influence them makes matters worse, not better). I can imagine the subjects saying to themselves, "As a feminist I would love to choose the female candidate. But what can I do - the male is just obviously so much better qualified." These subjects satisfy your WCT, but I think they ought to be excused because they are not conscious of the fact that gives their choice its moral significance: that it is sexist.

Gregg, Neil, could you tell me, do you see the consciousness thesis as an epistemic constraint on moral responsibility/desert, or is there something more to it? Something about the nature of human cognition, perhaps?

Also, I'm not terribly familiar with the real self theories, but the way that you answer challenges seems to me to leave an opening for a real self theory to reject the CT. For example:

//I find it hard to say that the implicit sexism is a reflection of the agent's real self since they are unaware of it and are unable to evaluate it against their other personal-level attitudes. (This is assuming, of course, that the agent does not *also* endorse explicit sexism in hiring practices.)//

It's possible (in theory at least) that someone *does not* endorse their own sexism (being unaware of it), but *would* do so if it were pointed out to them. Would it not be possible for the real self theorist to hold that under such circumstances the sexism does reflect the real self, even tho' the real self is unaware of it?

Mark, great questions! My intuition is that you can view the consciousness thesis as an epistemic constraint on moral responsibility, but I will let Neil weigh in here since it's his theory.

As for your second question, I would be inclined to say that in such a situation the sexism would be a reflection of the agent's real self. I'm not sure, though, that that would be giving up too much, since most of us reading this probably have implicit biases but would not consciously endorse those biases if we were aware of them. Maybe Neil wants to add something here.

Generalization of a case:

(i) Some factor x empirically, reliably causally influences a conscious state y that is also unaware of x.
(ii) Conscious state y is assessed as causally responsible for effect z that is demonstrably of some moral character.
(iii) y is unaware of factor x, and is unable to take x into account in producing z.
(iv) The ability to take x into account in producing z is a necessary condition for being morally responsible for z.
(v) Therefore y is not morally responsible for z.

In (iii) and (iv) does lack of awareness of x entail inability to produce anything other than z? I'd argue that if it does, then y is not responsible for z for something like incompatibilist reasons. But if that's false, then y may be responsible for z for reasons unrelated to x or the awareness of x.

When Cal has wine with dinner at a restaurant, he reliably tips more for the same level of service than he tips at lunches, when he does not drink alcohol. This pattern has been reliably established by an astute psychologist, Sal, with whom he always dines. Sal has in fact conducted empirical studies showing, via neural imaging and behavioral correlates with many subjects, that Cal tips more when tipsy because the alcohol suppresses critical evaluative skills that would otherwise be present.

Is Cal responsible for being generous with tipping at dinner as opposed to being fairer with tips at lunch?

If the alcohol causes Cal's reduced critical skills and over-tipping, then Cal isn't responsible for his generosity (whether judged praiseworthy by the wait-person or blameworthy, for extravagance, by his accountant), even though he is unaware of being under its influence in this regard. And if one grills Cal afterward about why the tip was so large, he can provide a number of reasons why the meal and service were exceptional in any given case; Cal is deft at making justifications (it is a nice restaurant; one person's confabulations may be another's justifications), and he identifies with these judgments and stands by them even later, when sober. Therefore one could argue (and Cal does; others agree with his assessments) that Cal did have the ability to justify his actions after all, and thus made them freely and responsibly, despite the fact that there are objective reasons for some (Sal) to think otherwise.

Maybe such considerations not only mark out some type-III cases as possibly responsible but show that epistemic conditions of responsibility might trump more metaphysical/incompatibilist-friendly ones.

Anyway, Gregg, it's whether any given x (above) produces inability that makes me wonder if incompatibilist values are involved in judging the agent not responsible in such cases.

Y'know, you're hitting this guest author thing outta the park.

Mark, other people have described the consciousness thesis as an epistemic condition. I don't think of it like that myself, but I'm not sure what's at stake here. I would prefer your second option. It is intended to be true for us, given our cognitive architecture, where that architecture is a contingent feature of human lives.

I've talked about cases like the one you mention, not so much in the book as in papers. I'm not confident that my response is fully satisfactory. I distinguished between an action (as you say) reflecting an agent's real self and its expressing the real self. I claimed that the difference was a kind of causal deviance, so only expression grounds moral responsibility.

Alan, while I want to disagree with your analysis, I think you may have hit upon something important in my disagreement with Neil. Neil would probably agree with you about the Cal example. He would say that Cal should be held morally responsible *because* he satisfies the consciousness thesis. That is why Neil disagrees with me about type-III cases. (Neil can correct me if I am wrong.) Perhaps I have just failed to persuade people about type-III cases, but maybe it isn't an all or nothing thing. Some situational cases will be ones in which the agent satisfies the consciousness thesis, and perhaps there are others in which the agent does not. Maybe your Cal case is one in which he satisfies both real self and control-based conditions. I still think there are other type-III cases in which agents fail to do so, and fail to do so because of the consciousness thesis.

As for whether I am assuming incompatibilism in these cases, I would like to think that I am not. Neil proposed a perhaps similar explanation for my intuitions about type-III cases at the APA. He suggested that my intuitions were being driven by concerns over *luck* and not consciousness. Whether I find a dime in a pay phone may determine whether I stop and help someone who drops their papers in front of me; whether classical music or top-forty is playing at the liquor store can influence how much I spend on a bottle of wine; etc. It seems that luck plays a big role in what we end up doing. Again, I would like to think that my argument for type-III cases was based purely on the consciousness thesis, but it's possible that I am smuggling in concerns about luck as well (unconsciously of course ;). I'll need to think seriously about that possibility.

Now that we've hit 50 comments on this first post, I think it's time that I move on to my next post. Thanks everyone for the helpful feedback. You have given me plenty to think about!