
Hi Josh, a couple of thoughts:

I don't find EXCUSE very plausible myself, despite being extremely sympathetic to the sort of view defended by Rosen. In Rosen's paper, he argues (if I remember correctly) that blameless ignorance of the fact that your action is wrong excuses you from blame, as in the Hittite slave owner who, owing to his cultural upbringing, has no idea that chattel slavery is wrong. I agree. The Hittite slave owner is not blameworthy for his bad behavior. But this, it seems to me, is because he is blamelessly ignorant of the fact THAT his action is wrong. Now, in your case, Zom knows THAT his action is wrong; he's just not sure WHY it's wrong. (Is that the correct understanding of the story?) The sort of ignorance displayed by Zom does not seem enough in and of itself to exculpate Zom.

Suppose I know (or at least have very good reason to believe) that it's wrong to discuss certain things with certain people, especially at the dinner table, but I can't seem to understand why it's wrong. Perhaps I know this because I have it on the authority of people I know to be morally virtuous and morally more sensitive than me. But suppose I bring up one of these forbidden topics anyway, just to get a kick out of seeing my guests turn green or squirm in their seats. Am I blameworthy? I should think so, and this despite my imperfect understanding of why my behavior is wrong.

Hi Justin,

Thanks for the comment. You are right that one way to criticize this argument is to argue that knowing that A-ing is wrong is sufficient for being blameworthy for A-ing, and that a failure to know why A-ing is wrong does not undermine knowledge that A-ing is wrong. Some of this, I think, turns on when and why it is legitimate to rely on moral testimony. Myself, I don’t have much use for moral knowledge via testimony (which is not to say we can’t get moral knowledge that way: if you know of a good paper on this, I’m all ears). If someone tells me that A-ing is wrong or right, I’ll think about whether A-ing is right or wrong, but I won’t take someone’s word for it. But then again, I’m not sure I know people more virtuous and more morally sensitive than me, even though I don’t regard myself as particularly virtuous or morally sensitive. I just don’t know how I would tell that someone is more virtuous/sensitive (I can tell, of course, whether someone is a better Utilitarian or Kantian than me, but I doubt either type of theory is all there is to morality). But back to the original thing: one way to push back in this case would be to say that Zom doesn’t really know that causing pain is wrong: knowing that requires knowing why it is wrong (insert something here about pain being intrinsically bad, and knowledge of this being crucial).

Whether I buy that claim depends on the case, I think. In the case of pain I am inclined to think knowledge of why it is wrong is required for knowledge that it is wrong. So this is not the reason why I reject the zombie knowledge argument. But I'm no moral epistemologist (maybe some readers including Justin could enlighten me on this).


You asked, “is Zom morally responsible for causing the dog pain”? I believe that morality and responsibility are two distinct and separate ideas. The entity that exerts the forces is responsible for the action, and therefore Zom is responsible for causing the dog’s pain. The morality aspect of that action is up to the observer – it’s relative to the observer, not to an absolute. So while I may think it’s “wrong” for Zom to cause the dog pain, another entity may not.

So here’s where I think that takes us: If an agent lacks phenomenal consciousness and intentionally inflicts pain and suffering, said agent is responsible for his actions (and the legal system will hold him accountable), but it’s up to the observer to decide whether or not he’s excused from blame.


To be clear, I wouldn't say that knowing that A-ing is wrong suffices for being blameworthy for A-ing, though I am inclined to say that "freely" A-ing despite knowing that A-ing is wrong suffices, where "freely" designates the strongest sort of control required for blameworthiness.

Knowing why A-ing is wrong would seem to presuppose knowing that A-ing is wrong. But does knowing that A-ing is wrong require knowing why A-ing is wrong? Perhaps, simply because knowledge requires justification. But knowing that A-ing is wrong is not a requirement for blameworthiness for A-ing. So let's focus instead on "awareness" that A-ing is wrong, which some people think is a requirement for blameworthiness for A-ing. Might one be aware that A-ing is wrong and yet not know (or be aware of) why A-ing is wrong?

Here's a case. Bob believes killing innocent people for no good reason is wrong, and, of course, he's right about this. However, Bob believes the sort of killing just described is wrong because it's against God's commands. But suppose there is no God and that the sort of killing is wrong for some other reason (utilitarian, Kantian, whatever) of which Bob is unaware. (Bob lives off the beaten path, and has never even heard of utilitarianism or Kantian moral theory.)

In this case, Bob is plausibly aware that killing is wrong, though he doesn't know why it's wrong. Might Zom be in a similar position? Might he be aware that kicking the dog is wrong without knowing exactly why? That's a tougher question, of course, but I'm inclined to say that the answer is "yes."

Josh (and Justin),

I wonder if we should be focusing on the distinction between knowing THAT A-ing is wrong and knowing WHY A-ing is wrong. This seems to me to be a red herring. I thought that the Watsonian view at issue in the post had more to do with the quality of will that one can display in acting than with one's epistemic access to the moral character of one's actions. The psychopath cannot but treat the consideration that A-ing is wrong as a conventional consideration, so he cannot display disregard for a moral consideration in acting against the consideration that A-ing is wrong. I thought that was (at least part of) Watson's point. (This reading is supported by the brief quotation given in the post.)

If this is correct, then it is not clear to me that EXCUSE is on the right track. The issue is whether one can recognize and respond to the force of the consideration that A-ing is wrong. But the principle, as stated, concerns non-culpable failure to understand why A-ing is wrong. First of all, non-culpable failure may not evidence inability. And secondly, it seems to me that the ability to recognize and respond to the force of this consideration is the more basic issue here, and that a failure to understand why A-ing is wrong is relevant, if it is, as a sign of this deeper inability.

Consider the case of the four year old. It seems that his non-culpable failure to understand why calling his uncle fat is wrong may be explained by his inability to recognize the force of certain kinds of consideration. For example, he may lack the ability to (reliably) adopt others' perspectives and empathize with them. So he may be unable to see that his comment will make his uncle feel bad. But this is precisely why it is wrong to call him fat. So the boy fails to understand why it is bad because he cannot recognize the relevant considerations. In this way, he is like the psychopath (as characterized by Watson). A deep deficit in empathy manifests in a kind of normative incompetence. It will also manifest in a failure of understanding. But the issue, relevant with respect to blame, understood in a certain way, is the fact that these agents cannot exhibit a particular quality of will. Their actions do not express disregard for certain considerations because they cannot recognize the force of these considerations.

Now, I am not sure how exactly to tie all of this into the main discussion in this post, about the relevance of phenomenal consciousness to moral responsibility. But this question seems along the right track to me: Is phenomenal consciousness required to recognize and respond to the force of the consideration that A-ing is wrong? If so, then it seems required for moral responsibility; if not, then it does not seem required. At first glance, it is not at all clear to me what the right answer is. In some moods, I am inclined to think that the functionalist cannot adequately account for the relevant ability. But in other moods, I cannot quite see what the functionalist's picture is lacking.

Hi James,

Thanks: yeah, I agree Zom is responsible in that first sense (something like causally responsible, right?). I’m not sure I agree with the second claim, that Zom’s moral responsibility is relative to the observer. At least, not if that means just any old observer. I like moral relativism, but that seems too relativistic for me.

Hi Justin,

Interesting. On the first thing: yeah, I was writing quickly, and failed to mention other background conditions on MR.

Why do you say ‘knowing that A-ing is wrong is not a requirement for blameworthiness for A-ing’? I could see problems with that (e.g., maybe it is enough to believe it is wrong, and for it to be wrong), so it strikes me I was thinking/speaking too loosely earlier. I don’t know the blameworthy lit super well, though, so I’d like to hear your thinking here if you’ve the time.

On Bob and his awareness. I want to say (of that one case) Bob comes to know killing’s wrongness via the same kinds of ways we all do – i.e., perception, empathy, some basic socially mediated understanding of the gravity and consequences of taking a life. The same is MAYBE not true of Zom: he does not have available to him the same methods for understanding pain’s (intrinsic) badness. For that, he needs to experience pain.

I mean, there is a way to reject this argument that runs via rejection of the view that pain is intrinsically bad. Another way is to reject that pain’s badness is necessarily a moral issue – it might simply be a practically normative thing. But I was avoiding bringing those thoughts up right away.

Hi Ben,

Thanks! A few things.

1] A small thing. You said, ‘non-culpable failure may not evidence inability.’ Right. So suppose one has the ability to recognize the force of some moral consideration, but is not culpable for failing to exercise it and thereby to recognize the force. Are things different than if one lacks the ability?

2] On the inadequacy of Excuse. I’m happy to think through how this would go if we emphasize recognizing and responding to the force of moral considerations, as opposed to understanding why an action is wrong/blameworthy. Maybe this even gets closer to my original (shamefully vague) thoughts on the issues. How would you suggest modifying Excuse to capture this?

3] On the question you pose towards the end: good. I ultimately think the functionalist can get it done, but I know some people will disagree. And like you, I do find myself conflicted at times. Thus my desire to air the issue.

Thinking out loud, though: why would phenomenal consciousness be required for the relevant recognition and response? Well, it might be required for recognition if it is required for understanding why something is wrong/blameworthy, and that understanding is required for the recognition. Or, moving away from my original formulation, maybe it is required because some moral considerations (like pain's intrinsic badness) are essentially phenomenal, and can't be reached via non-phenomenal reflection, reliance on testimony, or what have you. Moral qualia, I guess. (I kind of shudder at the thought.)

Hi Josh,

Thanks for your reply.

In response to your first question, I think that there are interpretations of the point of blame on which there may be a significant difference between, on the one hand, being unable to recognize the force of a moral consideration against A-ing and, on the other, non-culpably failing to recognize the force of this consideration, though one could have recognized it. For example, if one thinks that one function of holding responsible is to induce or develop the ability to recognize and respond to moral considerations, then it would seem that blame might be appropriate in the second case, but not the first. In some instances, non-culpable failure could have been avoided, even though one is not on the hook for the failure; and one might think that some instances of blame would be justified by the aim of building up the ability to avoid such failures in the future. It is not clear that one could justify blame in the same way given the kinds of inability at issue. (I have in mind something like the view of Manuel "Being-Builder" Vargas here.)

There seem to me to be other interpretations of blame that would also distinguish between the two cases. For example, if one thought holding responsible was, in part, at least, aimed at communicating a basic demand for recognition and respect, then it would seem that this demand might be appropriately communicated only given the ability to recognize and respond to moral considerations, and even sometimes in cases where the blamee non-culpably failed to do so. (This is, perhaps, something like the view Watson has in mind. Others hold similar views as well.)

In short, though there may be ways in which the two cases are the same, there seem to me ways in which they are importantly different and so license different conclusions with respect to the appropriateness of judgments of blameworthiness and blaming responses.

With respect to your second question, I'm not sure how to modify EXCUSE. But a first step would, it seems to me, be to focus on the inability to recognize and respond to moral considerations, rather than the failures of understanding explicitly mentioned in the principle as originally presented. Sorry if that is not very helpful. I'll have to think about it some more.


Thanks for your reply.

In a certain sense, I think it’s fair to say that Zom’s moral responsibility isn’t relative to an observer, and instead it’s associated directly with Zom. In other words, I don’t have any objection to believing the rightness or wrongness of Zom’s actions are attributable directly to Zom. What I’m trying to get at, however, is something different. I’m trying to explain that the “rightness or wrongness” of an agent’s action is a judgment that’s made by some observer, and that judgment is what’s relative to said observer; it’s not relative to some absolute morality that’s defined by a centralized official entity.

For example, if Zom causes the dog pain, and one outside observer judges that action as wrong, then their judgment is associated directly with Zom – I don’t see any relativity issues there. It’s possible, however, that a different outside observer would judge that action of Zom’s as not wrong, and that’s where the “relativity” comes in. Different judgments are relative to different observers.

In order to utilize an absolute reference of morality for judging the actions of agents (i.e., each individual no longer decides what they think is right and wrong, and instead, humanity refers to an absolute reference), we’d first need to develop a definition of what “absolute morality” is (e.g., doing what’s right/best for the overall ascent of life on Earth). After reaching agreement on that definition (which might take a couple thousand years), mankind would be faced with the daunting task of judging how any action performed by an agent affects the overall ascent of life. Since that analysis is much too difficult, we simply end up settling for the idea that morality is relative to the individual.

Individuals vote, and laws are passed thereby reflecting the moral views of the majority – I believe it’s generally a “good” system.

Perhaps I’ve explained my thoughts a little more clearly – but then again…

Channelling Dennett here... when we construct these thought experiments, we have to ensure that we really have built into them the right set of functional properties. Zom is a functional duplication of an ordinary person. So Zom: is disposed to flinch when something pricks his hand, disposed to withdraw his hand from hot plates, disposed to say "ouch" and maybe even cry if the noxious stimulus is powerful enough. He is disposed to experience autonomic system arousal to painful stimuli, maybe even to the thought of painful stimuli. Insofar as emotions can be functionalized - and most can to a great extent - he is even disposed to the same emotional responses as you and me. When we really imagine the case properly, I'm sceptical that Zom lacks the capacity to appreciate the facts that make pain wrong.

Hi Josh,

Late to the party, and I see that Neil has just raised my main worry here: I'm not so sure that the sense that Zom is excused survives a full functional duplication of an ordinary person. In particular, it seems that Zom's moral beliefs would be causally influenced by just the sort of things that influence the moral beliefs of an ordinary person. If we form beliefs about the badness of pain in response to being in pain or seeing people in pain, for example, one would think that Zom would form judgments about the badness of pain in response to being in the functional counterparts of such states and by seeing people in pain.

Because of this, I'm not sure exactly how best to understand what we are supposed to imagine. Perhaps you intend Zom to be functionally unlike ordinary people in some restricted ways, or perhaps the idea is that understanding has a purely qualitative aspect to it that eludes Zom. Or perhaps these are worries you share, and among your grounds for thinking the argument flawed.


I think Ginet accepts something like the following claim (R): a person is blameworthy for A-ing only if the person knew, or should have known, that A-ing was wrong. I don't know of anyone else who accepts (R), though. It seems rather strong. Suppose I'm aware that A-ing is wrong, but my awareness falls short of knowledge. Am I off the hook if I A? Seems too easy. Also, I'm in print arguing that a person can be blameworthy for A-ing even if A-ing was not wrong. If I'm right about that, then, a fortiori, a person can be blameworthy for A-ing even if the person didn't know it was wrong to A.

Ben and Justin:

That's very helpful in both cases. Thanks.

Hi James,

Thanks for explaining. I don’t have a major problem with moral relativism – the way some people lay it out (thinking here of David Wong and Gilbert Harman) I sometimes think I am one, myself. But still, I’d like a way to make sense of the thought that sometimes my moral judgments are wrong, and sometimes right, and so some moral standard exists. Moral relativism of various sorts can give us this – I was reacting to what looked like a kind of too-relativist relativism in your earlier comment.

Hi Neil and Gunnar,

Yeah, very good. This is one of my reasons for thinking the argument is in trouble. But look: here we are either denying the conceivability of zombies, or we are denying that knowledge of pain’s badness depends on knowledge of the phenomenal character of pain (in some way that grants the conceivability of zombies), or we are denying the possibility of zombies. The last option is what many physicalists do – but this leaves the explanatory gap, and thereby leaves room for arguments like this one (and the arguments from which it descends, like the Mary argument). The second option seems possible, but a bit odd. Both of you seem to be denying conceivability. Now, I admit I’m tempted by that line, and by any line that channels Dennett on these issues. But this would put us three in the minority of philosophers. I often find myself in such a place, so it’s comfortable in a way, but I also want other ways to reject this argument. I’m running off somewhere now, but in a bit I’ll offer what is probably my currently most preferred way of rejecting this argument. I was kind of hoping I’d find some endorsements of this argument amongst commenters! But then again once an argument is out there it’s way more fun to pick on it.

Josh: I don't see Neil or Gunnar denying the conceivability of zombies. Rather, they are denying the other alternative (that an individual must phenomenally experience pain in order to understand its badness). Moreover, their denying this is not weird at all. Zombies are, by hypothesis, functionally identical to us. Since our brains functionally recognize the badness of pain, it follows that Zombie brains recognize the badness of pain. Simple as that.

Hi Marcus,

Yeah, maybe (Neil? Gunnar?). What do you - Marcus - think? Pain's badness does not depend on the phenomenal character of pain? One can get normative knowledge of pain's badness without having had a pain experience?

Hi Josh: That's simple. It all comes down to whether Zombies are possible. If they are (as I believe), then no, pain's badness in no way depends on phenomenology. Zombie brains classify/process pain as *bad*, just as ours do. Further, since Zombies are--by hypothesis--functionally identical to us, I don't see how your argument in the original post can get off the ground. Since I classify a dog's pain as bad, my Zombie twin will too, and so will make all of the same moral decisions as I--and moreover, be functionally responsible for his actions just as much as I am. Your argument in your post depends--subtly but demonstratively--on the assumption that a zombie would *not* be functionally identical to me (since it wouldn't see pain as bad). But that's a contradiction. You've contradicted the initial supposition that Zombies are functionally identical to us.

Now, if Zombies *aren't* possible, then all bets are off. If physicalism is true, then maybe pain's badness is inseparable from its phenomenology--but in that case you can't appeal to Zombies in your argument.

Anyway, perhaps it doesn't matter. Why appeal to Zombies at all? Why not just ask the simpler question: would a being incapable of appreciating the badness of pain be morally responsible, or excused? Finally, however, I think this way of construing the argument (almost) reduces to the question of whether psychopaths are morally responsible, since psychopaths do not experience fear or pain like you or I do.

Hi Marcus,

Great stuff. A lot to say, but it’s getting late over here so I’ll try to get out a few of the most pressing reactions I have.

You say in one place that zombie brains classify/process pain as *bad*, just as ours do. One question the argument tries to raise is whether the type of badness at issue – the type accessible to a zombie – is enough to underlie the relevant understanding of the moral badness of causing pain. You are saying that it is relatively straightforward that the answer is yes. It’s not that simple to me. If zombies are possible, it could be that zombies lack a certain kind of knowledge. And it could be that this kind of knowledge is necessary for the moral knowledge that causing pain is bad.

Now, if epiphenomenalism is a bother here, one might follow Chalmers and argue that consciousness plays a role in constituting (rather than causing) certain phenomenal beliefs (ones that, given other conditions, amount to phenomenal knowledge). So I want to disagree that the argument above depends on the assumption that a zombie would not be functionally identical to me. HOWEVER: you said the argument demonstratively depends on that assumption. Would you mind saying more? Since I’d prefer to see the best version of the argument, I’d like to see where the assumption creeps in, and I’m not seeing it yet.

One thing maybe worth mentioning, but it is really just a reiteration of some of the above. You said the original argument above depends on an assumption that a zombie is functionally different from a human, *since* according to the argument a zombie wouldn’t see pain as bad. But I wasn’t claiming that the relevant sense of ‘seeing pain as bad’ had to be functional – it could be phenomenal, a la the Chalmers move above.

A small thing: you say a zombie will make the same moral decisions as a human, and will be functionally responsible just as much as a human is. I’m not sure what functional responsibility is – I’ve just not heard that term used.

Finally, you ask: why appeal to zombies at all? Mainly because I was trying to think of a way phenomenal consciousness would come out as necessary for at least some class of morally responsible actions. One way involves essential appeal to causation. I do consider that move in a paper-in-progress, but I didn’t discuss it here. There’s a stronger non-essentially-causal argument in the neighborhood – it involves appeal to some version of the phenomenal intentionality program, and in section 5 of the x-phi paper I linked to in my last post I suggest how it might go. But I wanted to throw this one out, mainly to tease out interesting discussion (in that connection, thanks for your comment!).

As Marcus says, I don't need to deny the conceivability of zombies (I wouldn't have any idea of what I was denying - I don't understand 'conceivability'. What are its success conditions?) Rather, the line I'm tempted by is to deny the possibility of zombies: I think that there's nothing to consciousness over and above a set of rich but functionalizable properties. At any rate, I don't give much credence to conceivability arguments. Think of Mary. She knows all this stuff we don't (since her knowledge of vision science is complete). Now I am supposed to accept that she can't imagine what it's like to see red. But how can I possibly have any kind of secure access to what she can imagine, in light of her vastly superior knowledge? I should expect that her imaginative resources are superior to mine.

Hi Josh: Really interesting points! I'm beginning to rethink my view in light of your reply, with an eye to my own (idiosyncratic) views on consciousness and free will. Indeed, I'm actually beginning to rethink my view on zombies...which, if I don't change my mind back, would be a real change of mind that actually fits well with the story you're telling (If so, then thanks--I've learned something!). Allow me to explain.

It occurred to me, when thinking over your comment, that on my view zombies are only *almost* possible. On my view (which I develop in some of my papers), consciousness and free will are intrinsic, outside of the functional-physical order, but interacting with it (on my view, consciousness--in our world--plays a critical role in certain types of quantum collapse). If this is right, then "zombies" would be physically-functionally identical to us, but there would also be something that they would lack: the intrinsic consciousness/freedom that interacts with the physical. But in that case there would, contrary to the zombie hypothesis, be a subtle functional difference between "zombies" and you or I: irreducible consciousness and libertarian freedom. And indeed, I want to say what you (seem to) want to say, which is that certain phenomenal *feelings*--e.g. of what pain feels like, etc.--would, in fact, play a critical role in moral deliberation that "zombies" wouldn't have access to.

So, crap, I guess I'm now agreeing with you. But only on the basis of my crazy metaphysics. Anyway, who said I'm stubborn? :P

Josh, here's a go at your really sharp question.

"Mainly because I was trying to think of a way phenomenal consciousness would come out as necessary for at least some class of morally responsible actions."

How about qualia-like (?) feelings associated with propositional content that nonetheless are (i) unexpressed either explicitly to others or even to oneself (ii) yet have propositionally-relevant moral properties that might have causal influence at least on one's ongoing moral character? Example: I congratulate a Flickers comrade on her brilliant publication. I consciously intend that remark to be congratulatory and even more deeply self-reflectively intend it to be (e.g.) "better-building" in Vargas' sense. But unknown to me, or known only in a penumbral "twinge" sense that I do not consciously acknowledge, I feel real jealousy, a jealousy that has manifested itself before in similar situations and over time has slowly molded my deep self unconsciously to adopt reactions to that repeated feeling. Over time that repeated feeling finally causally makes me snipe at someone for citing his own work as seminal for some discussion when I think that citation is self-serving, but I think that only because of my underlying jealousy. I would argue that (a) my sniping on the later occasion makes me a kind of asshole because (b) it was rationally unwarranted and caused by my unconscious twinges of feeling, but still could be evaluated by the larger moral community as rightly marking me an asshole, and partially because of the propositionally-relevant nature of those feelings. (I set aside deep issues of control of feelings here.) But could the purely phenomenal feeling of jealousy across time be considered as a necessary condition for justifying the public recognition that indeed I am an asshole? I am assuming that a zombie could exhibit propositionally-relevant content behavior, though not any behavior caused by associated purely qualitative feelings since they lack them.

I also recognize that even granted the moral necessity of the role of feelings here, their actual causal role might not be epistemically discernible given the assumption that they are (demi-)unconscious, even if such purely phenomenal occurrences had actual causal influence on my overt behavior. But of course that is the metaphysical bane of purely phenomenal stuff assessed as conditions of moral significance, right, since causality of qualia sinks beneath the argumentative horizon here?

Hi Marcus,

If you do go on to change your mind, have we made philosophy blog history?

Thanks for bringing up your own view. One of my interests in the consciousness-free will connection has to do with what seems to me an enormous amount of uncovered theoretical space. So, it sounds like you’re saying that on your view phenomenal consciousness is critical (or necessary? not sure the strength of the connection on your view) for morally responsible action, because of a kind of intrinsic link between consciousness and free will, both of which are non-physical. Does that sound right? A further question I’d have (maybe this is in your free will paper?) concerns the connection between free action and consciousness. I could see a separation between some thing X that plays a role in quantum collapse and some thing Y that plays a role in the exercise of libertarian free will. But it sounds like you think there is a link – is the mechanism (or the power, or whatever) responsible for quantum collapse the same as that responsible for a free decision? And how does moral deliberation fit into this (the zombie knowledge argument above, after all, doesn’t say anything about libertarianism or its rivals)?

Hi Alan,

That’s very interesting. I hadn’t considered much beyond the case of pain, though I had wondered to myself whether a similar argument could be given on behalf of other experience-types. I like the example of a qualia-like twinge of jealousy. You ask: ‘But could the purely phenomenal feeling of jealousy across time be considered as a necessary condition for justifying the public recognition that indeed I am an asshole?’ I suppose that here I want to answer no – the phenomenal twinge of jealousy is not necessary. It seems too strong to say that, and the reason why I want to answer in that way is because I can’t find the same kind of connection between the jealousy and moral knowledge that my above argument asserts.

Now, this raises an issue with the above argument: isn’t its scope rather limited? I want to say yes, although this will depend in part on how much of morality you think depends on an understanding of pain’s badness (and probably on pleasure’s goodness).

You’re right about the problems with these kinds of arguments w/r/t causation. At heart I’m like Neil, and I have hopes for an a posteriori functionalization of consciousness, along with the assigning of qualia to the dustbin of history. But short of a successful functionalization, here we are with qualia dust thick in the air (on this post at least, which is my own fault).
