
03/02/2015

Comments


Brilliant analysis, Gregg Caruso. I'm forced to agree. I resist the idea of global exculpation, but it does seem like there's far less local moral responsibility than is standardly thought. I've always thought this was the case, not just because I agree in large part with the consciousness thesis, but also because I think there are other necessary conditions that are often unsatisfied as well.

Hi Gregg, great post. I reject the consciousness thesis, and although I am still constructing my position on the issue, I'll take a stab at explaining why.

As Angie Smith notes, “Moral criticism addresses a demand to its target, calling upon the agent to explain or justify rational activity in some area, and to acknowledge fault if such a justification cannot be provided” (2007: 381). That is, moral responsibility (and legal responsibility, I think) is at least partially about communicating a demand to the person held responsible that they behave a certain way, and that they justify their behavior when they don't behave a certain way. From this perspective, it makes sense to hold agents responsible for things under their rational control.

One might think it thus doesn't make sense to hold agents responsible for your type II and type III cases: Levy says implicit biases are very difficult if not impossible to correct; and the impact of situational factors is often completely unknown to agents. But I've been thinking about this from a legal perspective, and I think there are cases where it might be right to hold persons responsible for things like implicit biases and situational factors. (Also for lapses, but you don't mention these in your post.)

Imagine a case where an employer has to decide between two equally well qualified candidates. One is white and the other is black, and because he cannot come up with an objective criterion for distinguishing between them, the employer decides to "go with his gut" (this example is adapted from Shin 2002). The employer feels warm and fuzzy about the white candidate, and mildly uncomfortable when he thinks of the black candidate, so he hires the white one. He was never conscious of the moral significance of his decision to "go with his gut," or of his choice of the white candidate. He just thinks he likes the white guy more.

I think we ought to hold the employer responsible for his decision, because it does indeed express his truly held attitudes, but also because (1) his implicit beliefs about race are rationally modifiable (okay, this can be tough, but it can be done, and people who hire others ought to try), and (2) by holding him responsible, we express to him (and others) that he ought not make important hiring decisions the way he did. If we don't hold people like this employer responsible, aren't we inviting people to remain racist (in a lazy, passive way)?

Type III cases are harder. Discrimination laws can be a way of putting employers on notice that they ought to know they may have biased implicit beliefs that can impact hiring. But can we expect the average person to take responsibility for situational factors that impact their decision-making? Maybe! Once those judges you mention are made aware that they make worse decisions when they are hungry, they ought to eat before they sit on a parole panel. What if a judge hears about the study, but then ignores it thinking it is mumbo jumbo, and routinely forgets to eat before panels? At the time she is making a parole decision, she is unaware of the moral significance of the hunger or the impact on her decision-making. But she should be, and when we hold her responsible, we are demanding that she pay attention to this situational factor, and asking that she justify not taking it seriously once she became aware of it.

It may be that on my view there is a requirement that the employer and the judge be consciously aware of the moral significance of racist biases and hungry decisions at some point -- but not that they be conscious of this significance at the time their decision-making causes harm, and not with regard to this harm in particular. And maybe this caveat (of conscious awareness at some point) is enough for indirect responsibility on Levy's view. But that isn't enough for me. I want to hold the employer and the judge directly responsible, in a way that would support attribution of mens rea.

Does any of that make sense?

Katrina,

I - and I think Gregg too - take attributions of moral responsibility to be a metaphysical question, in part. Appropriate attributions of moral responsibility have truth conditions that are independent of what we believe or want (except when believing or wanting brings about some non-mental states of affairs). It might be true that failing to hold someone morally responsible would entail failing to express something or would invite laziness - but it is hard to see how these facts could bring it about that someone *is* morally responsible. That you think it does suggests that the dispute is partly verbal - you're using 'morally responsible' in a different way. I take my conception to be the one Derk Pereboom calls 'basic desert'. It is entirely backwards looking and pays no attention to consequences.

You say that implicit attitudes are 'truly held attitudes'. There's a sense in which that's obviously true, but it's an empirical question what kind of attitudes these are. I've argued recently that they're very different from beliefs; they lack propositional structure. I think there's a risk of implicitly substituting unconscious beliefs for these attitudes when we make moral judgments; I doubt that the kind of attitudes that they are can support our intuitions.

Rick, thank you for the kind words! While I know we disagree about global moral responsibility skepticism, I'm glad to see that we agree that there are internal challenges to moral responsibility that are worth taking seriously. As I see it, one could question the whole moral responsibility system—as Bruce Waller likes to call it. We can label such challenges "external challenges" since they question the system as a whole—i.e., the system of holding agents morally responsible for their actions in the basic desert sense. But one could also raise "internal challenges" to moral responsibility. These are challenges that arise internal to the moral responsibility system. While I push both challenges in my work, this post is just about a specific type of internal challenge based on the consciousness thesis. I’m glad to see that even if you reject global skepticism about MR, you acknowledge that MR may be less common than we think. I'm gladder still to see that you accept the consciousness thesis. Not much for me to disagree with ;)

Katrina, thank you for your comments. I take it that your intuitions will be shared by many. Let me take off my global skepticism hat for the moment and climb into the moral responsibility sandbox. Should we, as you suggest, hold the employer in your example morally responsible? Well, I think it will depend on the details of the case. In the way you describe the case, it may be that the employer has not only an implicit racial bias but also explicitly endorses that bias. I take it that that is what you mean when you say “I think we ought to hold the employer responsible for his decision, because it does indeed express his truly held attitudes.” In the case I was considering, the agent is unaware of the implicit bias and *if* he had been aware of the implicit sexism he would *not* have endorsed it—that is, he does not explicitly endorse sexism in hiring practices. That’s an important distinction and I think our cases may be different for that reason. Also, in the case I considered, the agent is conscious of what appears to be a reasonable criterion for choosing the candidate he does—the fact that they are streetwise or highly educated—and it is this confabulated criterion that is assessed against the agent's other personal-level propositional attitudes. Would holding the agent morally responsible here (in the basic desert sense) bring about the desired effect you see such blame playing? In the communicative exchange the agent would provide you with what would appear to be sound reasons for his choice. How would blame here help him correct his implicit bias (which he is not aware of and would not consciously endorse if he were)?

It’s interesting that you think type-III cases are harder (since Neil and some others think these are the easy cases :). I agree with you that once we become aware of situational factors we may be able to take more active steps to avoid or neutralize their influence. I am not convinced, however, that such a reply fully satisfies the concerns I am raising. I’m also not sure we can practically avoid all situational influences, hence the question remains whether agents should be held morally responsible in the cases where they are not aware of these effects and they play the role I outlined above. As for the judges being sure to eat before sitting on the parole board, how does that help the guy who goes third or fourth in the schedule? All I know is, if I ever go before a parole board I really want to get the first slot (maybe I will also bring the judges a bunch of bananas).

Hi Neil

I agree that we may be using the phrase moral responsibility differently. Because I am interested in holding people responsible, and in social criticism and punishment, I think responsibility is partially defined by its function, which has a forward-looking component. I think our standard responsibility practices don't just aim to identify certain types of decisions/acts, but also aim to influence decisions/acts. If this is true, it makes sense to hold people responsible for a wider range of mental states than those we are consciously aware of (in your sense).

If implicit attitudes can be rationally managed, this makes them a good candidate for responsibility in my view, regardless of whether they are propositional. (Again, however, there needs to be a sense in which an agent knows they have a duty to manage their implicit states.)

And Neil, thank you for your great book. I really learned a lot from it.

It seems to me that all these cases rely on the claim that the facts that these actors are unaware of are morally significant(*). In the Kenneth Parks case (as portrayed) that seems very plausible. It's somewhat less plausible for the type-II and type-III cases.

For example, is it really morally significant to this actor that the presence of the briefcase on the table has an effect on the average actor's behaviour? The claim "You can't blame him for being selfish; there was a briefcase on the table" seems to me to be in need of some further support. What has been offered -- that people are more likely to act selfishly in the presence of briefcases -- doesn't seem to me to be sufficient support for the claim. There's no way to know whether *this* actor's behaviour has been changed by the presence of the briefcase. If the presence of the briefcase doesn't make any difference to *this* actor's behaviour, then ignorance of the general effect can't explain why this actor is not morally responsible.

And even if this actor is one of those who would act more selfishly in the presence of a briefcase, the claim that its presence is morally significant is still dubious. Consider this analogy: a man is more likely to rape a woman after he's had a few drinks. Suppose a man was unaware of that fact, and raped a woman after having a few drinks. Would that ignorance render him not morally responsible for his action? Or would it be more reasonable to say that whether he knows that about himself or not is morally irrelevant -- that if he is, in fact, not morally responsible for the rape, it is for some other reason? If the latter, then why would his ignorance about the briefcase be any more relevant than his ignorance about his propensity to rape?

(*) In fact, given the statement of the consciousness thesis ("consciousness of some of the facts that give our actions their moral significance is a necessary condition for moral responsibility"), it would seem that these unrecognized facts are the *only* morally significant facts -- which is pretty obviously wrong. I'm proceeding on the assumption that the actual thesis is more like "knowledge of all the facts...."

Mark, thank you for your comments. Since you are willing to grant that we should excuse moral responsibility in the Kenneth Parks case, I will focus on type-II and type-III cases. Why is it less plausible to think that agents in these cases fail to be conscious of the morally significant facts that give their actions their moral valence (to use Neil's term)? (According to the consciousness thesis, when agents are morally blameworthy or praiseworthy for acting in a certain manner they must be conscious of certain facts which play an especially important role in explaining the *valence* of responsibility. Valence is defined in terms of moral significance: facts that make the action bad play this privileged role in explaining why the responsibility is valenced negatively, whereas facts that make the action good play this role in explaining why the responsibility is valenced positively.) In the implicit bias case the agent is unaware of their implicit sexism, and it is the sexism that gives the action its moral significance. Do you not agree?

In the type-III case you mention (i.e., the briefcase example) I think we may be focusing on different features of the example. I am talking about cases where agents actually are situationally influenced. I understand that there may be epistemic challenges to knowing when in fact agents are affected in type-III ways, but I am interested in the question of whether agents are morally responsible in cases when they actually are influenced in the ways I describe. Secondly, you frame your objection as a challenge to the following claim: “You can't blame him for being selfish; there was a briefcase on the table.” I wouldn't state the challenge in that way—I think the way you state it misidentifies the target. According to the consciousness thesis, you cannot blame him for being selfish because he was not conscious “that he was acting selfishly” (under this or a similar description). When he evaluates his behavior against his other personal-level attitudes, he assesses a confabulated set of reasons for action—the confabulation occurring because he remains unaware of the true cause of his action. The key point is that he does not see himself as acting selfishly (which I think is plausible to assume in such cases). Of course, a lot rides on the empirical details of these cases—and I am totally fine with that. I want to leave it as an open empirical question the extent to which agents fail to satisfy the consciousness thesis.

Great topic Gregg.

I think we need to weaken the Consciousness Thesis as presented in the precis. I haven't read Neil's book (yet) so maybe I will be missing a point that was covered in the full treatment. The point is about indirectness. I think it suffices that the agent was conscious, at some point, of the relevant facts. For example, take Katrina's version of the hungry judge example. This judge has heard of the research, and ought to know better. Her failure to be conscious of her hunger right now, during or immediately after the judging, is largely beside the point. Perhaps she has even forgotten the research by now - but that itself would be problematic, rather than a good excuse.

Thus, I balk at some of the type 2 cases, as well as many of type 3. Despite accepting a (modified?) version of the Consciousness Thesis.

On a meta level, I want to stand alongside Alan White, or at least somewhere nearby, and say that practical considerations need to be more like the horse, and metaphysical FW and MR accounts need to follow behind like the cart. Only rather than call these considerations "pragmatic" I call them ethical, in a broad sense that goes beyond interpersonal relations and includes individual-centered value. For example, the value of deliberation and self-control is an important driver of how we understand free will. For moral responsibility, the fact that a set of practices promises to order our society fairly and well, or at least as fairly and well as any alternative, is vital to the justification of those practices. Please note that one need not be a consequentialist to accept that, nor need the responsibility rules themselves contain any forward-looking considerations.

Gregg--

Well, I now know what it's like to be an opening act for a Gaga or Beyonce! Great detailed and thoughtful post. (Kanye would no doubt blame me for mentioning Gaga.)

Couldn't agree more about the significance of Neil's book. I don't toss around the word "brilliant" often, but it's deserved there. (Deserved? Ouch!)

Everyone who endorses MR of some type must endorse one form of the consciousness thesis. That is, some party--whether attributive or attributed--must be conscious as a necessary condition for any process of holding individuals or agents responsible to obtain. An asteroid certainly isn't conscious, but neither are the victims of a planet smashed to bits by it--and even if those inhabitants would have tried to blame the rock for their demise, in fact no blame occurs in such an extinction event--not even an attempt at undeserved blaming.

I do endorse the consciousness thesis for the blamable at least broadly. But I have doubts about desert-entailing blame, and in part for metaphysical reasons. I can't imagine any such so-called blameworthy beings satisfying anything like a detailed account of what constitutes the necessary conditions of that worthiness that also constitute sufficiency. But in part I can't imagine that because I don't see how metaphysical details can fix the appropriate necessary-condition values in features of neurophysiology, or vice-versa. It seems there will always be empirically-based Type I-III doubts about bypassing this or that factor of consciousness by processes unavailable to conscious access (in Neil's sense of access), and from the base-assumption of ultimacy or desert-based MR, skepticism is thus always fed a healthy diet.

Then why not take the lesson from such empirical studies as justifiably dropping skeptical attitudes about MR that thus are baseless? Then attributive blame might fix on some real and valuable psychological features of the attributed that can withstand more finely-grained desert-based skeptical doubts, based on seeing those doubts as inapplicable to the real world.

Overall I am arguing for the essential role of attribution of reasonable criteria of responsibility as at least as argumentatively important as any apportioned to the responsibility-attributed, and especially if desert-based criteria for the latter are seen as a sort of argumentative dead end.

Thank you, Gregg, for that helpful explanation. I misunderstood the role the experiments played in the argument, and I see now why my framing of the objection was not appropriate. I had mistakenly taken the argument to be that his ignorance of the causes behind his action was at issue, rather than his ignorance of the valence-making facts.

While this point is clear in Gregg's account, it's worth hammering home: the consciousness thesis - which Gregg didn't acknowledge he defended before I did, in his 2012 book - is an importantly empirical claim. It is therefore disanalogous to some of the claims about zombies. For instance, Siewert argues that, necessarily, a zombie cannot be in as good an epistemic condition with regard to some of its beliefs (e.g., concerning its own mind) as a conscious agent may be. I am no qualia freak and doubt claims like that. Rather, the claim is this: given the way that the human mind happens to be set up - given that our cognitive architecture consists in a massively, though weakly, modular system - without the integrative role played by consciousness we fail to satisfy a necessary condition of moral responsibility. This importantly empirical feature is worth emphasising because it makes our intuitions unreliable. Our intuitions about the mind are probably dualist; in any case, there is no doubt that massive modularity is highly counterintuitive. We should expect our intuitions to be at variance with reality when they are prompted by implicit conceptions that are false.

Alan, don't make me blush! None of that is true (i.e., the bit about being an opening act). It reminds me of a Miles Davis story from his rock/fusion period. He was asked to open for some rock band (can’t remember which one). He was insulted that he was asked to be the opening act but he agreed to the deal nonetheless. Instead of opening, however, he would just show up so late each night that the rock band was forced to go on first, hence making him the headliner. Not sure how that applies here, other than to say that if we were touring you would most definitely be the headliner.

As for your comment, my coffee has not kicked in yet (and I’m looking after a sick six-year-old today) so I may be missing your deeper point. It sounds like this is one of those cases where one person’s modus ponens is another’s modus tollens. If I understand you correctly, you think preserving our moral responsibility practices is so important that it should lead us to reject the consciousness thesis (at least as applied in the skeptical fashion that I advocate). I guess I would say three things in response to that: First, it sounds like you agree with me that adopting the consciousness thesis would lead to a more skeptical conclusion than some think. I’m happy to agree with that point ;) That is largely what I was after in my post. Conceding that highlights the importance of the consciousness thesis and the need for defenders of real-self accounts and control-based accounts to clarify the role they see consciousness playing. I agree with Neil that both approaches are committed to something like the consciousness thesis regardless of what they claim.
Secondly, you say: “It seems there will always be empirically-based Type I-III doubts about bypassing this or that factor of consciousness by processes unavailable to conscious access (in Neil's sense of access), and from the base-assumption of ultimacy or desert-based MR, skepticism is thus always fed a healthy diet.” Okay, but it’s important, I think, to keep Neil’s thesis in mind and not extend it too far. Neil writes, “if the thesis is that agents must be conscious of *all* the mental states that shape their behavior, no one would ever be responsible for anything” (2014, 36). Rather than demanding consciousness of all relevant mental states, Neil argues that when agents are morally blameworthy or praiseworthy for acting in a certain manner they must be conscious of certain facts which play an especially important role in explaining the valence of responsibility (see my comment above). While I would like the adoption of the consciousness thesis to entail global skepticism I do not think it does. Instead it places very specific empirical constraints on moral responsibility, and whether agents satisfy those constraints will be an empirical matter.

Thirdly, you conclude by saying: “Then why not take the lesson from such empirical studies as justifiably dropping skeptical attitudes about MR that thus are baseless?” I take it this is your modus tollens. (Correct me if I am misreading your point. Again, the coffee has not kicked in yet.) My reply here would be something along the lines of what Neil said in his comment above. We shouldn’t trust our intuitions about moral responsibility in cases like this. Rather we should consider the conditions that are required for moral responsibility and see if agents actually satisfy them. I think Neil makes a strong philosophical case for the consciousness thesis and I don’t see why we should reject it for pragmatic or practical reasons. (By the way, I think we may also disagree on the pragmatic benefits of holding agents praiseworthy and blameworthy—but more on that toward the end of the month ;)

Paul, I think my reply to you would essentially be the same—i.e., regarding your “meta level point.”

Mark, glad I was able to clarify things.

Neil, thanks for driving that point home—I totally agree. Your book makes that point very salient.

Okay, I need to get back to tending to my little one. Sorry in advance if I am a bit slow in replying today.

Hi Gregg and Neil, thanks for bringing more attention to questions regarding the connection between consciousness and free will, which need more discussion. I'm not sure how significant the differences are between you regarding type 3 (situationist) cases. Here are two conditions I advanced in a paper "The Psychology of Free Will" written for an Oxford Handbook that, alas, never came out:

(CR) conscious reflection: agents have free will only if they have the capacity for conscious deliberation and intention-formation and that capacity has some influence on their actions.

(MR) motivation by (potentially) endorsed reasons: agents’ free will is diminished to the extent their actions are motivated by factors that they are both unaware of and would reject were they to consciously consider them.

(I was defining free will as the capacities for control that allow agents to be morally responsible, so to connect with your views, we can replace FW with moral responsibility--sorry my MR here is a confusing acronym.)

Am I wrong to think that both of you are accepting something like MR and then it's an empirical question to what extent our actions are motivated by (situational, etc.) factors of which we are unaware and would reject as reasons? But maybe my MR is too crude to capture nuances of your views that make them more different than I'm seeing.

Eddy, I am agnostic on MR - it doesn't play any role in my account, in any case. What concerns me is not what motivates the behaviour (conscious or not) but whether the agent is conscious of the facts about that action that entail its moral significance. The latter is needed because only then is the relevant information made available to the consuming systems that assess it for consistency with personal level attitudes. There may be a reliable correlation between failing to be conscious of the motivating factors and failing to be conscious of the facts that matter according to me, but I suspect they sometimes dissociate.

Gregg?

Eddy, thanks for the post! As to whether Neil and I agree about the threat of situationism, the answer (I think) is no. I see it as more of a threat than Neil does. Neil believes the consciousness thesis can be satisfied in situationist cases, while I think there is reason to think otherwise (at least in certain specifically defined cases). But we both agree, I think, that it is an empirical question. I will let Neil, however, explain his view himself.

Thanks for sharing your two conditions. They are very interesting. I would need to think further to see how they line up with what I have in mind.

As for your MR, I am tempted to accept it, but I do think it differs slightly from Neil's thesis. According to your condition, agents' free will is diminished to the extent their actions are motivated by factors that they are both unaware of and would reject were they to consciously consider them. I think for Neil, not all factors are relevant to the consciousness thesis, just those that give the action its moral valence. Secondly, for Neil consciousness is necessary because it plays an important integrative function. I'm not sure if you agree with that or not. I'll let Neil comment on any other differences. I think we are in the same ballpark though!

Can I ask you, how do you see your conditions constraining moral responsibility, since you are the first compatibilist in these comments to acknowledge a consciousness condition of some kind? Do you think (empirically speaking) moral responsibility would be severely limited or would it only affect a narrow range of cases? I get the impression from your MR condition that it might be somewhere in between. I'm interested to hear your thoughts. (BTW, do you have a book coming out on these issues? I thought I remembered hearing something about that.)

Gregg (and Neil), I think my remarks did get off the tracks on the consciousness-valence point, and I apologize. I moved too quickly toward valence-resetting, I think, and so sidestepped any real criticism of your point. No doubt some unconscious factor derailed me!

Gregg,

Very interesting post and discussion!

A side issue (which I think has some consequences in practice nonetheless).

On the question of type-2 cases, and specifically the "implicit sexism" example, while the data supports the conclusion that there is sexism in hiring in some cases, it does not seem to support a stronger conclusion that the hiring decision was in all or most cases the result of implicit bias. For example, at least based on the data available to me, it might be that 1/3 of hiring decisions involved sexist bias, whereas 2/3 did not.

So, it seems to me that given the available data, one can't properly conclude in a specific case that a person who hired the male candidate certainly or even probably did so due to sexism, or is implicitly sexist. For example, it may be that a person picks the male candidate for police chief because he's streetwise, and the same person would have for the same reason picked the female candidate if she had been described as streetwise.

This is not directly about your main questions, but in context of your reply to Katrina, you ask about the potential effect of blame playing in the case you were considering. It seems to me that regardless of the effect, one can't properly assign to the event "he or she hired him due to implicit sexism" a sufficiently high probability to justify blaming, even assuming the person is actually guilty and that blaming would be justified on the basis of the implicit bias, if one could tell that there is such implicit bias.

That aside, I'm wondering about the justification for the reason they actually give, and whether they are responsible for that.
More precisely, are they justified in believing that streetwiseness is more or less important in a police chief than formal education, or in making such assessments in the first place?
Without any good evidence one way or another (i.e., they're not experts in police work, have no data to assess the matter), shouldn't they say that they do not know which candidate is more qualified and refrain from making a choice?
For example, let's say that Bob consistently gives more weight to being streetwise and bases his assessments on that - regardless of gender - but Alice does just the opposite - also, regardless of gender.
Might that count as a bias for or against formal education and/or work experience that they perhaps should be aware of?

Angra, thank you for the comments. I did not mean to suggest, nor do I think I suggested, that implicit bias is involved in all or most hiring decisions. So I agree with your first point (though I don't know about the percentages). As for the implicit sexism study I cited, there is good reason to conclude that implicit bias was at play. Participants favored the male candidate regardless of which group they were in; they just shifted their rationales for *why* they picked the male candidate. The literature on implicit bias is large and we know it exists, but I agree with you that in real-life circumstances it's difficult to know how big a role it plays. Again, I'm open to modifying my view based on future empirical work. Maybe type-II cases are common, maybe they are not.

With regard to your next point, I think I may be confused. You seem to suggest that *if* the agent were acting on an implicit bias they *would* be blameworthy, where I am saying the opposite. Would holding the agent morally responsible here (in the basic desert sense) be justified? I think not. And this is because the agent would fail to satisfy both the conditions of real self accounts of MR and control-based accounts of MR for the reasons outlined in my post. Plus, what good would blame do here? In the communicative exchange with the agent, he would provide you with what would appear to be sound reasons for his choice. How would blame here help him correct his implicit bias (which he is not aware of and would not consciously endorse if he were)?

I'm not sure what to say about your last point, but I don't see how it would affect the consciousness thesis either way.

Gregg,

Thanks for the reply, and sorry for any misunderstandings.
Regarding hiring decisions, I agree that you did not imply that. I was talking about all hiring decisions in the study in question, or in a situation in which we have the same amount of information as in the study in question. Sorry if that was not clear.

Now, you say "As for the implicit sexism study I cited, there is good reason to conclude that implicit bias was at play. Participants favored the male candidate regardless of which group they were in; they just shifted their rationales for *why* they picked the male candidate."

That wording gives me the impression that you're implying that either all or at least most participants who picked the male candidate were making the decision on the basis of implicit sexism.
But I reckon (in light of the context) that may well be a mistaken impression, so if you are *not* making any such suggestion or implying that, then there seems to be no disagreement here.
What I was trying to get at is that given a specific participant who favored the male candidate, there isn't enough info to conclude that they based or probably based their hiring decision (or their evaluations of how qualified a person is) on implicit sexism only knowing that a person picked the male applicant in the study, even though there is enough information to conclude that there is such bias in the group, as a trend - but if I'm getting any of your points wrong, please let me know.

With regard to the percentages, I didn't mean to suggest that the 1/3 and 2/3 percentage was particularly probable or otherwise privileged. I meant to use that as an example, in order to say that I couldn't rule it out based on the data available to me. But for that matter, I can't rule out 1/4 biased, or 1/2, etc. Maybe information about the number of people who responded in each manner in detail might help narrow that down; I'd have to see it, but it seems very improbable to me that it would justify an assessment of implicit bias (or very probable implicit bias) in the case of a specific participant.

Regarding the following point, I get that you're saying that they would not be blameworthy even if they act on implicit sexism (I'm not sure whether you believe they would be acting immorally/breaking their moral obligations, though). The point I was trying to make is that in my assessment, even under the assumption that the correct view is that those acting on implicit sexism in the hiring scenario (or similar ones) are blameworthy for that, it would still be improper to blame any of the specific participants for acting on implicit sexism, due to a lack of information about who actually was biased (i.e., I reckon the information we have would not be enough to tell which one of them was biased).

In re: the last point in my previous post, I was wondering whether in cases like those in the paper you cited, there is still blameworthiness, because even if the sexist bias is implicit, there is another unjustified criterion - the one that they actually publicly hold that they are following - and they should have realized that this criterion was not justified. So, I wonder if [at least, many] cases of type-2 do not end up being cases in which the agent is blameworthy for their choice, given that the confabulated criterion (resulting from the implicit bias) is not justified, either, and they should know it. It wouldn't affect the consciousness thesis, but rather the question of when the conditions the thesis demands are met. But that point was a tentative suggestion only.

Angra, thanks for clarifying. I don't think we are that far apart. I agree that implicit bias studies can only identify a bias within a population. No disagreement there. (Neil has written on implicit bias before, so perhaps he has more to say on this issue than I do.)

Your next point is an interesting one--i.e., "even under the assumption that the correct view is that those acting on implicit sexism in the hiring scenario (or similar ones) are blameworthy for that, it would still be improper to blame any of the specific participants for acting on implicit sexism, due to a lack of information about who actually was biased (i.e., I reckon the information we have would not be enough to tell which one of them was biased)." I invite others to weigh in on this point. I'm not sure what I want to say yet. (Maybe once my coffee has kicked in and I have finished teaching my three classes for the day I will have something more intelligent to say.) Quick question: Would accepting your view rule out all or most ascriptions of moral responsibility? It seems (given what you are saying) that there will always (often?) be epistemic doubt as to whether an agent is affected in a type-II or type-III way. Is that your point?

With regard to your last point, let me take a step back for a second. Neil has suggested that the importance and influence of implicit biases may be limited by certain circumstances (correct me if I am wrong about this, Neil). For example, implicit bias may be more salient when (say) two candidates are roughly equally qualified or when there are equally justifiable grounds for preferring one candidate over another. For example, if you are serving on a search committee for a TT philosophy position, how do you weigh the merits of three Phil Studies papers vs. one Phil Review paper? Perhaps implicit bias may play more of a role in situations like this, ultimately nudging the decision one way or another. (Neil may want to say more on this point--especially if I am misrepresenting him!)

In terms of whether the confabulated criteria the participants appealed to were themselves justified, my feeling is that they were (I am not as skeptical as you are on this point). Being streetwise or well-educated seem to me to be legitimate qualifications to look for in a candidate. Remember, this is only a study and participants were *asked* to choose one candidate or another. I wouldn't read too much into that component of the study.

Gregg, as a moral responsibility skeptic, you don’t think *any* set of conscious capacities, operating in light of even the most complete knowledge of situational and unconscious factors, could make someone deserving of praise or punishment. But the niceties of the various cases you and Neil and others have discussed illuminate the conditions under which individuals could be fair targets of rewards and sanctions meant to change their behavior and perhaps deter and encourage others. We all agree that some sort of responsibility practices are necessary, even if people don’t deserve to be held responsible in the non-consequentialist sense.

The consciousness thesis, and all the barriers to conscious access to morally relevant information illustrated in the cases, helps to pick out a range of responsible agents, where responsibility is measured by the range of influences on their behavior of which agents are aware, and therefore what they could reasonably take into account in morally-freighted action, and therefore *should* take into account. The more we know about the influences on our behavior and the steps we can take to counteract them (having appropriately timed snacks in the courtroom), the more responsible we become: we can be justly rewarded or sanctioned for a wider range of actions and omissions as a way of getting us to behave better. The rewards and sanctions are fair because we're in a position to anticipate them, and so can adjust our behavior accordingly. And it gets worse: we’re all responsible now for keeping up with the literature on situationism!

What I always find interesting is that compatibilists appeal to these obviously forward-looking capacities as criteria for attributions of moral desert, where praise and punishment need serve no behavior-guiding function. What’s up with that?

Tom, thanks for your supportive and skeptic-friendly post! For the sake of this thread, I'm trying to bracket my MR skepticism to see if the consciousness thesis would create internal challenges for MR regardless of what one thinks of global skepticism. Everything else you said, though, fits with my personal way of viewing forward-looking moral responsibility. BTW, I'm interested to hear how compatibilists answer your final question (even if it was meant rhetorically).

Try this on for size Tom (and Gregg):
We evolved in a social context, involving cooperation and potential cheating, that selected for reactive attitudes and behaviors (such as indignation, gratitude, forgiveness, punishment). The selective benefits were forward-looking. The attitudes and behaviors are backwards-looking in that they focus on agents' attitudes, intentions, and character when they acted and are not directed solely towards future consequences. Now, here we are with all these attitudes and the culturally developed practices that 'fill them out' in various ways, in many ways fixing some of their more blunt edges (such as more attention to what agents consciously control). We might reason that they would be best refined if we gave up the backwards-looking features entirely and focused solely on the consequentialist features. But first, that might be mistaken, given the kinds of creatures we are (and it may lead to some less, rather than more, compassionate ways of treating each other). And second, it might be that the backwards-looking features are justified because of their role in our psychology (perhaps this move works best on certain meta-ethical views).

Nothing new here--P. F. Strawsonian moves which I think Tamler and maybe Manuel have made here or elsewhere. But I think it's the right sort of story, if told better than I just did.

Eddy, I know the story well. It's convinced at least one former free will skeptic (Tamler ;).

Instead of engaging with you on this point (something, perhaps, we can do later in the month), I would like to take off my skeptical hat again (damn you Tom for making me put it back on!) and redirect things back to the consciousness thesis. Do you think the reactive attitudes, even on the story above, can be checked/overridden/etc. in light of new considerations? For example, in your earlier post you suggested a different set of conditions for moral responsibility (MR in particular). It would seem that MR would excuse moral responsibility in at least some cases where ordinary folk still experience reactive attitudes. Are these reactive attitudes *justified* in such situations? How do you see your own views on the importance of consciousness playing out here? I'm really authentically interested. Since we are coming at this from different perspectives, and since you buy the story above, I'm wondering if you think (at least sometimes and in some ways) our reactive attitudes can be changed by philosophical argument? And if so, could that also be the case with philosophical arguments for skepticism?

Hi Gregg, yeah sorry for the distraction--Tom started it, so I don't deserve to be blamed! Briefly, I don't think I'm a full-out Strawsonian, but like him, I think our reactive attitudes and associated practices are internally revisable, and I think people are amenable to revising them on the basis of conditions like CR and MR--that is, people already think conscious and rational control are important for freedom and resentment, though they may not be as sensitive to these conditions as I think they should be. Yes, I think our reactive attitudes can be adjusted by philosophical (and scientific) considerations--I think Strawson does too. I'm even open to skepticism--for instance, if it were shown that our conscious, rational processes were always bypassed or just rationalizations (Strawson might even agree, but seems to assume that's not really a possibility). Here's a Strawson quotation that seems relevant: "Well, people often decide to do things, really intend to do what they do, know just what they’re doing in doing it; the reasons they think they have for doing what they do, often really are their reasons and not their rationalizations.... Nobody denies freedom in this sense, or these senses."
