

Perhaps one reason you don't hear a lot about consciousness in the free will literature is that nearly everyone in the debate presumably thinks that creatures who have no consciousness cannot be free and responsible, so consciousness won't help discriminate between compatibilists and incompatibilists. As a rule, we tend to focus on that which separates us rather than that which unites us. That's why so much action has focused on the compatibility question, the conditional vs. unconditional ability to do otherwise, etc. That said, I take it lots of attention has been paid to consciousness in the free will literature to the extent that conscious control has played a part in the epiphenomenalism debate. It's why people like Wegner deny we have free will--namely, we purportedly don't have conscious control. Is your question why we don't pry these two elements apart--namely, control and consciousness--rather than simply lumping them together? Here again, I suspect it's because the control condition highlights where compatibilists and incompatibilists disagree, while the consciousness condition isn't very helpful for these purposes. But I am just speculating.

p.s. I was speaking with a colleague recently. He works in Buddhist philosophy, phenomenology, and the philosophy of mind. He made a claim I had never heard before--namely, that phenomenal consciousness requires indeterminism in such a way that the existence of phenomenal consciousness precludes determinism. Furthermore, because he thinks we can't meaningfully deny that we're phenomenally conscious, this fact alone suffices for establishing indeterminism. Needless to say, I was not convinced. Indeed, it seemed like a reductio to me! But perhaps there is some work on this front in the salient literature that might be worth checking out.

Hi Thomas,

Good. I think you’re at least partially right. If one’s aim is to beat back the advances of one’s opponent, and one need not mention some issue in order to do that, there’s certainly no motivation to mention it. However, if one’s aim is to give an account of something like free action, and some topic is obviously relevant to free action, it seems like an omission to ignore the topic.

Now, I don’t think RR- or DS-theorists have been *remiss* in neglecting consciousness thus far, partially because it has been safe to assume that consciousness is important in some way, and whatever way that is will not be diagnostic of differences between various compatibilist theories (and perhaps even between various incompatibilist theories, though you do see more consideration of consciousness from incompatibilists like O’Connor and from people who think consciousness of deciding gives us evidence for incompatibilism). And you’re right that there has been a recent flurry of work that includes talk about consciousness in light of recent cognitive science (Eddy Nahmias, Al Mele, Tim Bayne, Neil Levy, Gregg Caruso, Peter Tse, etc.). But very little in this work considers explicitly how a RR- or DS-view might be extended in light of the manifest intuitive importance of consciousness for FW/MR. Motivating consideration of how these extensions might go – and motivation of the further interaction between philosophy of mind/cog sci with philosophy of action that such consideration would require – was a main aim of the forthcoming paper to which I linked above.

Also, that is the spirit in which I offered the second and third set of questions in the original post.

Hi Josh,

I'm a bit puzzled by the setup. I would have thought that, despite not much explicit engagement, at least most of the deep self type views have an implicit requirement of consciousness found in conscious propositional attitudes.

Consider the sort of cases that motivate DS views. Actions involving endorsed attitudes are more our own than those that involve unendorsed attitudes. Some attitudes seem my own, others seem put upon me. 'Endorsement' (or whatever) helps mark those that are properly belonging to my real self (presumably, as opposed to my unreal self). But the plausibility of such thoughts surely turns on those endorsements, whatever they are, being conscious. Wouldn't an unconsciously endorsed attitude seem just as 'alien' as those that I consciously reject?

Regardless of the merits of such views, isn't it natural to suppose that real self theorists have an underlying implicit commitment to consciousness being an important marker for what belongs to the real self?

So, perhaps they've just taken the connection to be obvious, but this wouldn't imply that they think it isn't there.


I'm a bit bumfuzzled, because there is no incompatibility between reasons-responsiveness approaches to moral responsibility and acting from unconscious or sub-conscious mechanisms. Mark Ravizza and I do discuss this point in our book, Responsibility and Control: A Theory of Moral Responsibility, where we argue that one can be acting from reasons-responsive mechanisms in contexts where one is not focusing on the reasons, and so forth.

I will check out your paper, but in advance I don't see why r-r theorists cannot accommodate the phenomena; and I agree with you that these phenomena should be accommodated, as we should be morally responsible for acting from unconscious or subconscious mechanisms.

Hi Matt,

Interesting. A few things to say:

1] At least as Levy has it, DS theorists actually downplay the role of consciousness. Levy pits his view against a number of recent DS proponents (Sher, Arpaly, Smith), who he interprets as maintaining that one can be directly responsible for an action even while failing to be conscious of the moral significance of the action or of the facts that made the action right or wrong. Perhaps, though, other DS theorists have more or less assumed conscious endorsements of attitudes are necessary or at least critical for MR action. On that:

2] Certainly one thing a DS theorist can say is that the endorsements that baptize an attitude into the DS must be conscious. However, if this is the way they want to accommodate the importance of consciousness, I’m not buying. Why would a moment of conscious endorsement, which can be fleeting, be all that important? It seems better to say that the epistemic and motivational attitudes that drive an episode of conscious endorsement are really the important thing here for moral responsibility. And if that is the case, it seems possible for these attitudes to drive an action even though conscious processes play very little role in the production of that action.

3] You say: ‘Actions involving endorsed attitudes are more our own than those that involve unendorsed attitudes. Some attitudes seem my own, others seem put upon me. 'Endorsement' (or whatever) helps mark those that are properly belonging to my real self (presumably, as opposed to my unreal self). But the plausibility of such thoughts surely turns on those endorsements, whatever they are, being conscious. Wouldn't an unconsciously endorsed attitude seem just as 'alien' as those that I consciously reject?’

That’s interesting. Maybe you don’t want to chase this thought down, but it looks like here you’re appealing in part to the phenomenology of an attitude at two places. First, to think about alienation in the first place – some seem like mine, others not so much. Second, to offer a kind of justification for the importance of endorsement (I get that you are not yourself endorsing this view here, just spelling out how a DS theorist might think of things): without some phenomenology attached to an attitude of endorsement, an agent is alienated from that attitude. Yeah, maybe that is plausible. Do you think there is something about an agent’s conscious mental life that can either legitimately confer alienation or legitimately remove alienation?

Hi John,

Is bumfuzzlement worse than puzzlement? Either way, maybe this helps: the phenomena to which I’m referring are the high level of importance people place on consciousness, and the fact that they typically deny that actions produced by nonconscious mechanisms are freely or responsibly performed (your comment suggests you were thinking about it the opposite way).

At any rate, I think a RR-theorist has plenty of moves available (I discuss a couple in section 5 of the linked paper), and mainly what I want to do here is to see what moves RR-folks or their opponents find attractive, and why. That includes moves that emphasise the importance of nonconscious processes for MR action (I'll check what you and Ravizza say on this), as opposed to or in addition to the purported importance of conscious processes. So any further thoughts you have on this issue would be appreciated.

Matt: Your point certainly applies to traditional Frankfurtian or Watsonian Real Self views (which are the two kinds that Susan Wolf focuses on). These ‘endorsement-based’ views are going to require consciousness of action-relevant attitudes. I don't think, however, that *all* deep self theories need to be built along these lines.

Josh: Levy uses the term “attributionist” to describe a family of views that, among other things, reject a control requirement for MR. I think some think this family is synonymous with deep self views (maybe Levy does? I really don’t know). I don’t think the attributionists are all DS theorists, however, and indeed most are not. In particular, I don’t think Smith (along with Scanlon) and Arpaly should be called DS theorists b/c they don’t offer a criterion that demarcates attitudes into a deep type (those that belong to you in the distinctive way required for MR) versus a surface type. This is an essential feature of a DS view if any is. Sher, on the other hand, is a DS theorist, though his purely ‘characterological’ approach to the DS, and purely causal approach for when an attitude expresses your DS, is going to be tough to swallow for many. Overall, I think Levy’s “attributionists” turn out to be a motley crew with pretty varied approaches to MR.

Josh, you ask, "What are the relevant forms of consciousness at issue: phenomenal, various non-phenomenal notions (like self-awareness or some kind of accessibility relation)?"

About the sort of consciousness that some think is required for freedom and responsibility, it's perhaps worth noting that phenomenal consciousness (having experiences) pretty much goes along with access consciousness, what Neil Levy thinks should primarily concern us in this debate. In his Philosophy TV discussion with Gregg Caruso, Neil says he could pretty much dispense with talk of consciousness and just talk of working memory and associated capacities. But none of this normally happens without having associated experiences. Indeed, we wouldn't be using the term consciousness at all were it not for the phenomenal. We'd just talk about various cognitive functions and capacities, grouped by their roles and modes of integration. But as it stands, experience seems a good subjective indicator of the engagement of the associated brain processes - call them conscious processes - that the empirical data suggest are involved in lots of higher level functions, including learning, memory, and reportability.

Since those functions are generally necessary for the sorts of voluntary behavior that are subject to, and responsive to, moral evaluation, we usually only hold those responsible who act under the control of conscious processes. But this leaves open the question of whether experiences, as possibly distinct from those processes, play their own proprietary role in behavior control. Even if they don't, we still need to hold each other responsible.

I've got a chapter on consciousness in Gregg's book, Exploring the Illusion of Free Will and Moral Responsibility, that gets into some of this, http://www.naturalism.org/Experautonomy.htm

Chandra, Levy has dropped the term 'attributionist' (he uses it in the new book only to discuss responses to the 2005 paper). He calls Smith a DS theorist, but that's because that's how she described her own view in her 2008 paper. Nothing turns on terminology here, though: it is clear that whether or not she is right to classify herself as belonging to the family of views that Wolf called DS theories, her view is different. Here's a classification which Levy finds illuminating: DS views, quality of will views (in this category he places Arpaly and Scanlon) and Sher. He denies that Sher has a depth requirement.

He is puzzled as to why you think that consciousness is required by Frankfurt or Watson. Why should a second order volition or an endorsing attitude be conscious?

A small clarification on my post. Chandra is right to note that DS views *need* not be built with a consciousness requirement. I didn't mean to imply that they did. What I was trying to show is that consciousness seems implied by the ways in which *some* DS theorists motivate their views. The appealing thought of DS approaches, to me, is that some attitudes seem more my own than others. And unconscious attitudes are excellent candidates for the latter.

Neil is right to note that endorsing attitudes need not be conscious in order to do the endorsing work (whatever that is). But if the conscious/unconscious dividing line isn't at least informative of endorsement (or whatever), then I think we've lost a major motivation for being drawn to DS views. So much the worse, perhaps, for such views.

In response to Josh's question, I'm not sure how to best think about alienation. It seems to me a natural enough thought. But not being a DS theorist, I'm not overly taken with handling alienation. Partly, the problem may just be epistemic. Of the victim of alien hand syndrome, how are we to know whether the seemingly intentional movements of the hand reflect endorsed or unendorsed desires? (A vivid, if uncommon, example.)

Tom, I am agnostic on whether phenomenal consciousness goes along with my notion of consciousness (which is not quite access consciousness: Block defines things into the notion of access consciousness which should be matters for scientific investigation, not stipulation). The point of saying that we can leave aside talk about phenomenal consciousness is to avoid debating those people (including Block) who deny that they are coextensive. I am attracted to some kind of representationalist story about phenomenal consciousness, according to which it is essentially informational, but I wanted to avoid the debate. Another reason to avoid it is that talk of qualia brings out the woo, even among philosophers (quantum nonsense, for instance).

More to the point of the OP: I am sceptical that the folk make these distinctions clearly enough to make it worthwhile to try to gather evidence for their caring about *phenomenal* consciousness. In response to Block, several philosophers have said that consciousness just means phenomenal consciousness. But for nearly 100 years, debates about consciousness were dominated by Freud, and Freud's conscious/unconscious distinction was clearly an informational distinction. If the folk can swallow that without ever worrying about whether Freud was really talking about consciousness, then I think we should think that some informational state is central to what they mean by 'consciousness'.

There's a lot to respond to here, which I'll do tomorrow (it's getting late here in the UK), but I feel compelled to say to Neil:

if you don't name a future paper on consciousness 'Bringing out the woo,' I'll blame you for it.


No, bumfuzzlement is just puzzlement. I just like the word, so I finally decided to use it.

Hi Neil, Glad we have a Levy scholar in this thread. :-) Some quick thoughts:

I asked Angie why she chose the DS label in her 2008 paper. I don’t think she has any strong commitment to that label and would be ok without it. I have some quibbles with your new taxonomy, but they are just that, quibbles. So maybe I can say more about Frankfurt and Watson, and in fact I’ll just stick with Frankfurt for space.

In Frankfurt’s early model of endorsement, this involved stepping back, looking over one’s set of first-order motives, reflectively criticizing them, and on this basis forming higher-order desires about which first-order motive should be effective in action. Agents might engage in multiple rounds of this higher-order reflection. In some versions, the agent decisively commits to one among the competing motives. It is hard to see how all this agentially demanding stuff gets done without conscious activity getting into the mix. For example, the characteristic accompaniments of consciousness (working memory, effortful directed attention, system 2 processing) are plausibly going to be heavily involved. (King and Carruthers have a nice discussion of these and related ideas, but I know you already know that paper well.)

Here is another related thought: Frankfurtian endorsement, and the reflective activity that accompanies it, is a paradigmatic process that *integrates* one’s disparate competing attitudes. Given your own commitment to the role of consciousness in agential integration (i.e., your paper in Nous), why wouldn’t you very much *want* to say that Frankfurtian endorsement requires conscious activity? What am I missing?

Josh, excellent work, and great questions.

As Tom said, there's a strong correlation between phenomenal consciousness and access consciousness. And as Neil wrote, the folk probably don't make the distinction. I'd like to roll these points plus more into a larger point on behalf of RR views (perhaps DS views can do something similar). Namely, Reasons Responsiveness = Kahneman's System 2 = consciousness, more or less. That is, in the real world these things cluster together so tightly that the folk are not going to bother distinguishing them. And quite reasonably not, for all practical purposes.

Note that, if folks are vaguely aware that some one thing (what cog sci calls System 2) is both reasons-responsive and conscious, this could partially explain their reluctance to regard it as conceivable that an unconscious robot can intentionally steal wallets, or generally behave just like a human. It's a bit like asking them to conceive a diamond that isn't hard.

Now in my view, propositional attitudes are doing the ultimate work and phenomenal consciousness is "just" "along for the ride" in exactly the same way that the internal combustion engine in your car is along for the ride, i.e., not really. That is, it's possible to build a car without an internal combustion engine, in which you can ride. But *your* car ain't going nowhere without an internal combustion engine.


What you say about Frankfurtian endorsement nicely illustrates the difference that I have been trying to bring out in recent work between autonomy and responsibility. They are different notions; the endorsement you describe seems more suited to autonomy than responsibility (in my view).

There seems to be an implicit idea that MR-relevant consciousness is an all-or-none phenomenon. Quite aside from accepted excuses ("I was distracted when I made that inappropriate decision"), I might suggest there are qualitative differences between individuals in what is accessible to consciousness (why we engage in consciousness raising exercises). One example might be awareness of implicit bias as a factor in one's decision making (eg the literature on racial bias in the police decision to shoot) - and how this might interact with traits such as suggestibility (eg Lifshitz et al 2012 http://www.ncbi.nlm.nih.gov/pubmed/23040173) that vary between individuals.

Chandra, I agree that *as a matter of fact* consciousness is required to play these roles. The question I thought that Matt was addressing was whether one would think consciousness is required without looking at the data.

I wouldn't appeal to Levy's 2014 Nous paper. Like his 2005 paper, it's wrong in important respects, as I show in a paper forthcoming in the same journal. Given his track record, probably you're best off just ignoring him.

Hi Tom,

Thanks for the link – in fact I’ve read (and enjoyed) your paper. There’s a lot going on in your post. Two things stand out (to me). First, you suggest what might be taken as an error theory for folk views on the consciousness-MR connection. Roughly, consciousness is a decent subjective indicator of processes necessary for control. So we usually only hold those responsible who act under the control of conscious processes. But consciousness might not be necessary for control, so . . . the folk overgeneralize. Is that right?

Second, you seem to suggest a kind of pragmatic view on moral responsibility – even if we think that (phenomenal, you seem to suggest) consciousness is necessary for MR, and we find out there is no proprietary role for consciousness in control, we still need to hold people responsible. I think we certainly will continue to hold people responsible, but what do you mean by ‘need’? Society won’t function without some forms of punishment?

Hi Matt,

On the alien hand: yes, allowing some kind of endorsement without consciousness of the endorsement might raise an epistemic problem. I think, though, that most DS requirements raise an epistemic problem. It is usually quite difficult in practice to determine whether an agent has acted on a DS-attitude or not. It seems easier to determine whether the agent has acted from a RR-mechanism, since past behavior seems to be a reliable guide to this in many cases. I wonder whether a DS-theorist would want to appeal to behavior as well to help determine what attitudes are ‘deep.’

Hi Neil,

Yeah, I think you are probably right that the folk don’t carve the phenomenal/access distinction very cleanly. There are different ways to take this, I think. One is that the folk don’t agree to a phenomenal/access distinction: for the folk, (phenomenal) consciousness is shot through with functionality, and philosophers – bewitched by zombie-style thought experiments – wrongly ignore this appearance of functionality in accounts of consciousness. (The appearance of functionality is different from the actual functions of consciousness – many philosophers offer proposals about the latter thing.)

One place at which we might disagree, though, is whether one can fully cover the consciousness-MR connection without delving a bit into the phenomenology. Maybe you agree, and would rather work on the bits that do not bring out the woo. I can certainly understand that. But in some places you seem to suggest that delving into the phenomenology is a fruitless endeavor. Or maybe I’m reading too much into comments made in interviews etc.?

Hi Chandra,

On conscious endorsement. Let me try out a line of argument for the philosophers of mind in the house. It looks like everything you say requires consciousness could be done by a zombie. So, for human beings, all this stuff requires consciousness. But there are conceivable beings (and possible beings, if you buy the Chalmers line) who do all this without being phenomenally conscious. So, phenomenal consciousness is at least not conceptually necessary for MR. So the relevant form of consciousness is some explicitly functionalized notion. Yes?

Hi Paul,

Good. So, human agents need consciousness. But sophisticated but physically different agents (robots, aliens) probably don’t – reason-responsive mechanisms are multiply realizable, and some of those realizations don’t require anything like human-style consciousness. Is that the view?

Hi David,

Thanks for the link, and the interesting suggestion. You’re right that there is a tendency to go all-or-none with consciousness, and you’re also right that there is a way of thinking about these issues that admits of degrees. This comes up, I believe, in the literature on the difference between Persistent Vegetative and Minimally Conscious State. One question I have when thinking of things in this way is whether higher degrees of awareness bring with them higher degrees of control over behavior, and if so, why.

Hi Josh,

I see no principled reason why a DS theorist couldn't appeal to behavior. After all, "actions speak louder than words". And a behavioral history that evinces certain attitudes or commitments, might indicate they are stably held, which one could work into an account of the deep self. (Angie uses examples of this sort to good effect.)

As to your question about degrees of awareness and degrees of control: isn't it plausible to suppose that being unaware of something would hinder my ability to control it? If I'm unaware of important features of my circumstances, it seems I might have a diminished capacity to exercise control regarding, as it were, the world around me. Since what I do is partially dependent on the world around me, exercising control effectively can be partially a matter of how much of the world I'm aware of.
