Blog Coordinator


05/18/2013

Comments


I think I should point out that the Mark Young who wrote the article mentioned above is not the same Mark Young who sometimes posts comments on this blog. Maybe you all already knew that, but I'd never heard of him, and I wouldn't want anyone to mistakenly attribute my opinions to him.

Bruce,

Great post. I have often said that my chief complaint with compatibilists is this: they have no satisfactory account of undeniable errors in human thinking about freedom and responsibility. As you note, the psychology literature makes a very strong case that our rational capacities are deeply flawed. I've highlighted some flaws that would influence the free will debate: the illusion of control, the just world belief, the fundamental attribution error, reactance, and positive outcome bias (among others). Compatibilists have little to say about these. A revisionist like Vargas does a much better job of accounting for these errors. And a "neuroti-compatibilist" like Nahmias (or, in some sense, Double) at least acknowledges the threat that these pose. Even in these cases, however, these philosophers consider the threat to be: "do these errors mean that we cannot satisfy a free will concept focused on rationality?" instead of: "do these errors mean that the free will concept requires something more extravagant than excellent rationality, something like what libertarians and impossibilists have suggested?" One joy of being a skeptic about free will (or moral responsibility) is that one has ample room for acknowledging all of these errors - we don't have to pretend that free will survived perfectly unscathed from the discovery of all these errors in human psychology.

One last point: I think that, of all these errors, the most important is the endowment effect. Once we consider that effect, in the context of constitutive luck (as described by G. Strawson and Neil Levy), we see how people might have an irrational preference for being the kinds of people they are, such that they feel no threat from constitutive luck, because the fact that they are-who-they-are is a happy coincidence. I outline that argument in more detail in this 2007 blog post:

http://gfp.typepad.com/the_garden_of_forking_pat/2007/03/free_will_and_c.html

Why should I believe what 'outsiders' have to say about my own mind? What reason do I have for siding here with Freud rather than Sartre? How much bad faith is required to deny the self-evident notion that I freely choose my activities, as well as plenty of my attitudes, values, and beliefs? As Reid noted, if I could be wrong regarding the workings of my own mind, everything else must be called into question, including the very empirical research on which this form of free will skepticism is based.

Since when have we become shills for irrationality and fecklessness? Is it not our sacred obligation to rebut any attempt, empirical or a priori, to deny the dignity of man? Where is Descartes when you need him?

Hi Bruce, I think this is a great topic, in part because it reminds us that, even if determinism were an important issue in understanding whether we have free will or MR (and it's not), there's so much more for compatibilists and libertarians to think about, especially regarding the degree to which we possess and can exercise the capacities that compatibilists like to focus on but that most libertarians take to be necessary for FW and MR as well. Roughly to the degree that our conscious intentions and plans and our rational deliberations play insignificant roles in our actions, we are not as free or responsible as we think we are.

I think philosophers tend to eschew messy questions about degrees of satisfaction in favor of looking for all-or-nothing necessary (e.g., libertarian) conditions or sufficient (e.g., compatibilist) conditions. In doing so, they miss a lot of interesting and important trees for a potentially irrelevant forest.

In fact, I was going to respond to your previous post to say that, once one understands FW and MR as degree concepts, I suspect they might come apart in really interesting ways. But I'll leave that point undeveloped for now (and pick it up in July).

In addition to the people you mention in your post, I'll add a shameless plug for my paper discussing situationist social psychology as a threat to autonomy and responsibility: http://www2.gsu.edu/~phlean/papers/Cartographies_Chapter13_Nahmias.pdf

Here follows an advertisement: my next book, to be published by OUP, is called *Consciousness and moral responsibility*. I argue on empirical grounds that consciousness is a necessary condition of moral responsibility. Since the argument is empirical, the condition is actually satisfied, though not as often as some philosophers have thought. End advertisement.

Hi Bruce,

Great and important post! I’m humbled to have my book mentioned among such esteemed company. Many thanks. For my money, this is where the truly interesting questions lie. And it is because of concerns like these that some compatibilists now admit that “free will is at best an occasional phenomenon” (Baumeister 2008b, 17; perhaps Nahmias might also agree). This is an important concession because it acknowledges that the threat of shrinking agency—as Thomas Nadelhoffer (2011) calls it—remains a serious one independent of any traditional concerns over determinism. That is, even if one believes free will and causal determinism can be reconciled, the deflationary view of consciousness which emerges from these empirical findings must still be confronted, including the fact that we often lack transparent awareness of our true motivational states. Such a deflationary view of consciousness is potentially agency undermining (or at least so I argue). I would like to see more attention paid to this research (unfortunately most of the attention has gone to the findings of Libet, John Dylan Haynes, etc.).

Interesting and large set of questions, Bruce. Surely the psychological research you cite will have (and already has had) some significant ramifications for philosophical research, if only to give rise to many papers and books trying to undermine their claims to have shown what they think they've shown (we've already seen lots of this). For my part, I very much doubt that any neuro- or psychological research will pose any *new* problems for free will or responsibility: they just dress up old problems in new clothes.

But that is not to say that this new research doesn't have a valuable function. It does, which, I think, is to remind us of our humanity. So some (non-Humean) philosophers focus exclusively on rationality and reasons-recognition-and-response capacities as the be-all and end-all of freedom/responsibility, and the psychological folks come along and tell us that we actually "decide" all sorts of things "unconsciously," and what we do is deeply influenced by situational factors that affect our moods and emotions, etc. But then the psychological folks tend to make the same mistakes as those philosophers, urging that the be-all-end-all to freedom/responsibility is found entirely in the noncognitive side of the map (and so to the extent that f/r "couldn't possibly" be non-cognitive, there is no such thing).

Both extremes are just that: extremes. The right answer will (as is so often the case) be somewhere in between. But what the research does, as I said, is remind us of our humanity, that there are multiple aspects of our psychological lives that contribute to the way we operate in the world qua agents. An exclusive focus on just one of these aspects--regardless of its disciplinary source--is typically going to be incomplete as a result.

(Sorry for the abstract and vague quality of these remarks.)

Hi Bruce,

Very thought provoking post!! I have to say, however, that I am really skeptical of the science you have reviewed. I know you say your “question is not about the legitimacy of such psychological research, but about its implications.” But I am getting hung up on the legitimacy question, especially because I suspect that many readers of this blog might think this research represents a scientific consensus of some sort. But no such consensus exists. I am worried that the evidence you cite overweights certain types of debunking evidence from certain parts of social psychology. The full picture from a broader range of contributing sciences is far more nuanced and, frankly, optimistic. This is not the place to do a comprehensive review. But let me share an observation that I think brings out the main point.

Fields like social psychology rely heavily, but by no means exclusively, on identifying *effects*. Researchers explicitly aim to uncover peculiarities and deviations from some normative background view. Let’s say that I do a study that finds that people arrive at a decision by combining information about how likely certain outcomes are and how desirable these outcomes are. It would be impossible to publish my findings in a major social psychology journal. I need to show something new, interesting, and unexpected to get published—this is simply how the incentive structure of the field is set up.

Fields like cognitive neuroscience, in contrast, aim to identify *mechanisms*. If I do a study that uncovers the specific brain circuits in which probability and utility information are combined, I not only can publish my results, they can end up in the very best journals (look at the table of contents of Nature Reviews Neuroscience or google “neuroeconomics”, for example). Cognitive neuroscientists argue for the existence of some pretty sophisticated algorithms implemented in the brain, e.g., Bayesian belief updating, temporal difference valuation learning, signal detection-based motor processing, and the like. Social psychologists sometimes equate automatic and intuitive with unsophisticated and simple, and cognitive neuroscientists would tend to disagree. People should read Peter Railton’s “The affective dog and its rational tale” for an overview of this alternative picture of human rationality (it’s out in Ethics). We also have a multiauthored piece, Navigating Into the Future or Driven by the Past, out in Perspectives on Psychological Science.

Let me be clear: Your post raises important and fundamental questions that need to be debated. But I find that folks in philosophy often think it is just *obvious* that the empirical evidence favors the debunking view. It does no such thing.

What David said.

Mark, my apologies, I did think the forthcoming paper was yours; an easy mistake, because the paper looks interesting, and your comments are always interesting (and that’s why I’m not a very good logician).

Kip, thanks for the kind words; and for pointing out the importance of the endowment effect, as another of the factors that subtly influences our capacity for rational evaluation.

Robert, it is always great to have your comments. We have such different views about the world, and the place of humans in that world, that your clearly expressed views always force me to think more carefully about the contrast. On one point I think we do agree (and it’s a point on which perhaps David – and Justin? -- disagrees with both of us): If we are convinced by the contemporary psych research noted in the post, then the implications are profound. I don’t think the problem is as profound as Reid suggests: it seems to me that we can still do wide ranging and reliable research; but we must take special care to avoid various errors, and consider influences we had not previously thought important. But while this psych research will – if we take it seriously – change things, I don’t think the changes are as radical or as shocking as the Copernican Revolution, which led to deep skepticism (and justifiably so: if we can be wrong about the Sun over our heads, as John Donne wrote, how can we have confidence that we know anything?). Instead, the psych research (for example, Kahneman’s work) helps us understand how to recognize and avoid a variety of errors.

Eddy, nice comments; and special thanks for mentioning your “Autonomous Agents and Social Psychology” paper; I had not seen it, and would have been sorry to miss it. John Doris of course does a great job of presenting the situationist research and its relevance for philosophy; but anyone who wants to read an updated study of that research, together with a very detailed and insightful discussion of how the research applies to our understanding of free will, should certainly read Eddy’s clear and remarkably thorough study.

Neil, if Consciousness and Moral Responsibility is even half as good as Hard Luck, it will be a very important book indeed. I wish I could have read it before this discussion, but look forward to reading it; do you have an anticipated publication date?

David, great to have your comments on this. Certainly you are right that we can find some of the same issues if we search back through our philosophical history; but I think the contemporary research does more than “dress up old problems in new clothes”; and for me, the paper by Eddy (noted above) does a good job of showing the philosophical significance of one relatively small segment of this research. Neil’s Hard Luck also brings out some special implications, as does Gregg Caruso’s Free Will and Consciousness. But your work on identity makes very good use of the history of philosophical ideas and theories, and I have no doubt that you can find precursors or analogs of the challenges posed by contemporary psych research (though I still think it likely that the contemporary research makes these challenges in more forceful form). But in any case, your point concerning the significance of this research for reminding us of the rich full range of our humanity (philosophers do have a tendency to suppose that rationality is the only important feature) is a very important one; and in the rush to consider how this research applies to our views on free will and moral responsibility, it is easy to overlook that contribution. By the by, is this belief that the right answer is probably in the middle part of a golden mean revival that I had not heard about? John Fischer recently wrote a book (Deep Control) making a similar claim (a great book, even for those of us in the loyal opposition); the first chapter is on Deep Control: The Middle Way. Myself, I’ve always preferred Jim Hightower’s (former Texas Secretary of Agriculture) principle: “There’s nothing in the middle of the road but white lines and dead ‘dillers.” (Apologies to you, John, and Aristotle.)

I don't share David Shoemaker's confidence that new research in neuroscience will provide nothing more than a new way of posing old problems. I think current work on the Bayesian brain - to mention only one, albeit central, line of research - holds out the possibility of a radical transformation of folk psychology; at the least, I strongly suspect, we will find that large parts of belief/desire psychology turn out to be false. This isn't just a bet on the future, a la Churchland: the current state of neuroscience strongly indicates that much of the everyday psychology assumed by thinkers from Aristotle to (Robert) Allen is simply false. In fact, I think there is a genuine chance that the science of the mind will be so dramatically different from the theory we assumed in developing accounts of moral responsibility that we just won't know what to say.

I don't have a publication date on the book yet, Bruce. Delivery date on the manuscript is June 1, so it should go into production this year.

I had the privilege of reading Neil's mss before press--it is a carefully crafted argument, as fully informed by current data from the relevant sources as any I am aware of.

Schopenhauer's Prize Essay, as old as it is, is an excellent deconstruction of any evidential basis for a rational introspective foundation of incompatibilist FW. That only leaves empirical evidence for resolving FW issues, and Neil et al. take over from there.

Chandra,
A very interesting comment, and some excellent points comparing the two areas of research (though I think there are some responses to be made on behalf of our social psych friends); just in passing, some of the most provocative "debunking" work comes from Dan Wegner, who is in the neuroscience camp. And I'm not sure I would call the research cited "debunking" so much as deeply challenging; but that's a quibble. Mainly wanted to say that I found your comment very interesting, and certainly did not mean to ignore it. Not sure why (much of this blogging stuff is new to me) but comments sometimes show up that I had not seen, and when I comment on those I did see, it looks as if I'm ignoring some of the others. Your comments -- both here and in many other posts -- are invariably much too important to be ignored.

Oops, for some reason I thought that Dan Wegner, pre-White Bears, had started in neuropsych. My mistake.

Not even Kant thought that one could be purely rational. He recognised that one is always vulnerable to being swayed by ‘the dear self’; that one cannot help seeking to promote one’s own happiness; that one should avoid subjecting oneself and others to temptation because people may be overcome by it; and so on. Furthermore some Kant scholars (e.g. Allen Wood) argue that actually Kant does not think that one should only act from the motive of duty (for example when playing with one’s children) but that it is actually okay to act from other motives (eg love of one’s children) provided that one’s acts are not in conflict with what one would do were one to act on the motive of duty, and that were they to conflict then one would rather act from the motive of duty.

Would you agree that the question that one faces is only whether to strive to become *more* rational? I take it that if reason does come to play a greater role in one’s selection of beliefs and in one’s decision-making then contingencies of one’s upbringing, past experience and physical makeup have less influence on one’s beliefs and decision-making. Clearly if this were impossible then this striving to become more rational would be pointless, so we need to ask, ‘Are there grounds for thinking that it is impossible to become more rational?’

Well when I look at my own history, I see myself as a ten year old, thinking that light objects fall more slowly than heavy objects, and being fairly self preoccupied; and I look back at myself in subsequent years reasoning, responding to evidence etc and so changing what I believe and how I make decisions. So I now believe that objects on earth fall at the same rate in a vacuum, and that air resistance affects rate of fall. And I am much more thoughtful and considerate of others, much more sensitive and aware of their perspective and seeking to act in a mutually beneficial way. I am clearly more rational, less swayed by the contingencies of my upbringing etc than I used to be, though of course still swayed to a fair extent.

I don’t know all the psych research but what I am familiar with does not indicate that it is impossible for reason to come to have a greater influence on one’s beliefs and decision-making. For instance, take the finding of a dime in a phone box. What the experiment actually shows is that some people help the passerby pick up her folder regardless of whether they find a dime, some people do not help regardless of whether they find the dime, and most are influenced by the chance event of finding a dime. This suggests that people are different. Some are more swayed than others by irrelevant, contingent, considerations. Thus this way of interpreting the experiment actually gives grounds for hope that one can succeed in becoming less swayed by contingent and irrelevant factors.

Presumably one might agree that reason can come to play more of a role in which beliefs one holds and in one’s decision-making whilst recognising that there is a question mark about whether one is ‘ultimately’ responsible for this – for reasons stated in your 1999 paper ‘Deep thinkers, cognitive misers and responsibility’ and developed in most detail by Neil Levy…

Neil: I don't understand why a new folk psychology is going to affect the normative issues and questions at play in theories of moral responsibility. Furthermore, we still have the puzzles pressed by the *phenomenology* of deliberation, willing, and acting, which isn't going to change. Here are just two fundamental questions in responsibility theorizing: (1) What makes some action I perform (and my will to perform it) *mine*, in a way that the ticcing of the Tourettic individual, say, is not hers (the issue of attributability/identification)?; (2) on the assumption that I'm a natural creature, can I have whatever special kind of control/freedom is needed to render me responsible for what I do (the issue of determinism)? Now these questions won't go away with new empirical discoveries and theories, as they can easily be dressed up in whatever new language or conceptual apparatus comes along.

Perhaps, then, the theories that have been developed to answer these questions will be in trouble (I take this to be Bruce's point). To the extent that they are empirically falsifiable (as Bob Kane admits his theory is), then perhaps. But most of the philosophical equipment being brought to bear in these theories is surprisingly flexible and can probably survive radical new theories about the physical underpinnings precisely because the phenomena they are intended to capture are still going to be phenomenologically palpable. Huck Finn is still going to be viewed as reasons-responsive by some, regardless of whether the attunement is unconscious, regardless of the precise nature of the "mechanism," and regardless of whether it makes no more sense to talk in terms of beliefs and desires.

(BTW, I am not at all defending any kind of reasons-responsive view here; rather, I'm trying to say that I just don't see how new developments in the brain sciences and social psychology are going to have much of an effect on the way business is done in this arena, other than to remind people of our many non-reasoning human parts.)

Bruce: I'm a middle way guy, but I'm something of an extremist about it. Does that mean that I can, at the end of the day, avoid the dead 'dillers?

Dan, thanks for the clarification on Kant and recent research on Kant. I would certainly agree that one can strive to be more rational, and sometimes people succeed. Daniel Kahneman claims that most of his research has been aimed at making people more rational (he says that it has not really helped him to become more rational himself, though it has improved his ability to recognize errors in others; but I think he’s being a bit facetious). Al Bandura believes that there are ways to strengthen our sense of cognitive self-efficacy, which would help us think longer and better. Roy Baumeister has recommended ways to combat ego-depletion. So if someone studies their work, and exerts the effort, then rationality can be enhanced. But I don’t think that gets us much closer to moral responsibility, since whether one has the knowledge that one’s rational abilities need improvement, and whether one has the confidence and the self-control to engage in such efforts, and whether one is guided by someone like Kahneman rather than a self-help charlatan, is a matter of luck (as Neil would argue). Still, I agree that it is very important to recognize that people can and do become more rational; I just don’t think that supports moral responsibility (but from my perverse perspective, that’s an advantage).

David, an extremist about taking the middle way? I think you have discovered an uncharted philosophical path. Opens up new possibilities for philosophical classifications: you could be a trenchant golden meaner, and John could be a mellow golden meaner.

David,

*Does* my action belong to me in a way that the ticcing of the Tourettic individual does not? Folk psychology says yes, but if it is a mistaken theory then we should be no more confident of its verdicts than we are of the verdicts of the theory of humors. Peter Carruthers has already argued that there is no criterion of action identification that would allow us to make this distinction, on cognitive scientific grounds. More radically, but not entirely speculatively, one might think that the notion of the Tourettic individual must go, because there are no individuals, just collections of very loosely integrated mechanisms. It is quite possible that under some circumstances mechanism a of "individual" 1 is better integrated with mechanism a of "individual" 2 than mechanism a is with mechanism b with which it happens to share a brain. The perception of unity may be a user illusion. As a matter of fact, I think the hypotheses just mentioned probably are wrong, but they are not crazy right now. I have argued in propria persona that reasons responsiveness is going to turn out to work quite differently from how we imagine, so that our intuitions about it are very often wrong. It is intuitive that if $50 motivates you to a, then so would $75 and also a loss of $100 if you don't a. I think there is already evidence that we shouldn't be confident that that's the case. A last word on the phenomenology: I think it is extremely likely that to the extent to which it is not theory driven (in the ordinary person, not just the philosopher) it does not match up at all with the actual mechanisms. Right now, in brief, I think we already know enough to worry that our intuitions about counterfactuals are very often mistaken, not in marginal cases but in central cases.

Neil: Deeply ingrained social practices and commitments weren't built in to the theory of humors.

I think the main issue here has to do with meta-theories about responsibility and the like, i.e., what are the relevant data points for which we need theoretical explanations? How do we engage in theory-building and defense? What are our fixed convictions? I think these are purely philosophical matters, and the kinds of arguments we deploy will be philosophical and not subject to the kinds of empirical data you and Bruce are pointing to, with the exception being what I have already mentioned, namely, its role as a reminder of our rich human nature. We are reasoning *and* feeling creatures, impulsive, moody, and influenced by aromas and dimes in surprising ways. Further, certain of our interpersonal relations presuppose various sorts of attitudinal commitments, attitudes that seem awfully like responsibility responses. The question of the role these should play in our theorizing about responsibility is controversial, but it's surely a purely philosophical question. This will also be the case if our folk psychology changes radically: we will still have to determine the role this conceptualization ought to play in our theorizing.

Now it's from these meta-theoretical positions that we take that we do our actual responsibility theorizing, coming up with, e.g., reasons-responsive theories. Now you may well be right: reasons responsiveness could turn out to work quite differently from how we currently imagine. But these will be differences only in the application details of the theory and not in the theory itself. So perhaps we are simply talking past one another: I don't think the empirical material will make a difference at the theoretical level, whereas you think it will make a difference at the application level. I certainly agree with the latter, though. I took Bruce's question to be about the former.

Incidentally, I long ago argued that, for purposes of certain of our practical concerns, we could not individuate persons in the way we once thought, and I did so entirely without appeal to cog sci. So too one might make the case against action individuation without such an appeal (and people have done so), as could one do so about the nature of counterfactuals. The empirical discussions may present these worries more forcefully, or in dramatically new ways, and I'm all for it for those reasons. But I don't yet see them as raising distinctively new issues.

David, thanks for a very interesting comment. You’re right, I do think that contemporary research in psych – both in social and in neuropsych – will have a big impact on our view of free will and moral responsibility. Certainly in the way you describe, concerning our understanding of our humanity; but in larger theoretical ways as well. But I could be wrong about that, for precisely the reasons you note: the moral responsibility system is an enormous and very flexible system, and – as Quine taught us – when a system is challenged, it is possible to make changes at various points, and keep the basic principles of the system intact by making adjustments elsewhere. There’s a lot of that adjusting now underway. Dennett worries about “creeping exculpation,” and so proposes that we draw the line and ask whether people are willing to acknowledge their own responsibility, as the price for playing the social game. P. F. Strawson wants to think in terms of reactive attitudes – that cannot be eliminated – rather than just deserts. Sher even suggests that we give up our principle of fairness in order to preserve our belief in moral responsibility. Van Inwagen is willing to drop his cherished libertarian principles if that is required to save moral responsibility. So certainly it is possible that we’ll make adjustments elsewhere in the system, and hang onto moral responsibility at all costs. 
But what the vast array of psychological research shows (from my perspective) is that the adjustments will have to be substantial; and it seems to me that the adjustments are becoming sufficiently cumbersome and ad hoc that we are better served by a system that drops moral responsibility claims, and takes a clearer look at how our beliefs and characters are formed, how our behavior is profoundly influenced by factors of which we are unaware, how deep and early-formed psychological characteristics (such as internal locus-of-control and level of need for cognition) control our subsequent choices and acts. But the moral responsibility system is vast, powerful, and deeply locked into our culture (particularly our neoliberal culture) and our system of justice; and so there is no guarantee that any psychological research will lead to the adoption of an alternative system.

Ultimately, I suspect that we are not “talking past one another” when Neil and I disagree with you on this; rather, I think you have pushed the discussion into a very deep issue indeed: the degree to which philosophical questions are insulated from empirical inquiries. Thomas Nagel, in Mortal Questions (in a very brief essay on “Ethics Without Biology”) argues that “ethics is a subject” with its own rules, its own discipline; and biological research could no more change our basic ethical principles than it could change mathematics (though it could, for example, show us that we should change our treatment of trees if biological research shows that trees can suffer). Somewhere (I don’t have the source at hand) Virginia Held takes a very similar view in a response to Mark Johnson’s musings on cognitive science (I think it’s in the Larry May volume on Mind and Morals). Are our differences on this issue of a similar nature?

Now we're cooking with HOT grease!

Thanks a lot for your very helpful and insightful response, Bruce. Yes, at the end of the day, I really do think the issue is about whether (and to what extent) philosophical questions may be affected, altered, revised, etc. by empirical results or inquiries. This is indeed a very difficult issue. In general, I'm all aboard the X-Phi train, and I read all sorts of social and experimental psychology stuff, but at the end of the day (as I've already indicated), I don't find the resources in it for the radical revision of various theoretical positions that were promised, foreseen, or hoped for.

This is in part due, surely, to the fact that much of what's being done, at least on the free will/responsibility front, is working out the nature of our conceptual commitments, especially including their presuppositions. So what if the presuppositions of our commitments about the nature of "responsible agency" cannot obtain in agents like us, information we discover from empirical work? We can do one of four things: abandon the commitment, fiddle with the nature of the concept to which we remain committed (as the responsibility concept is *very* flexible), reorient our understanding of the concept, or jettison some general principle like "ought implies can." But this is a normative, philosophical decision, and I don't see how this decision could be informed by any empirical work. You, Bruce, seem to have adopted the first option, given that the costs of retaining the commitment are high and doing so feels ad hoc. Hard determinists do the same, albeit for perhaps different reasons. Compatibilists generally might be thought to have adopted the second approach, whereas Strawsonians in particular have adopted the third. Sher and some others have adopted the fourth.

There are reasons in each case why the adopted strategy is the best (or the least bad). But again, I can't see how empirical work will play a role here. And to the extent that I think this is where the real action is in responsibility theorizing, it guides my thinking that the empirical work will only be relevant at a lower level of abstraction.

David, if you are "on board the X-phi train," then really get on board--wouldn't it be useful to get empirical information about what people's (culture's) commitments and presuppositions are and in what ways they can be 'fiddled with', and how flexible or revisable they are (and what counts as revision anyway), and what mistakes people (cultures, philosophers!) are prone to make about their commitments, concepts, and practices (e.g., which intuitions are generated by unreliable processes), and how best to alter our practices when it is deemed necessary (e.g., in light of scientific discoveries), and ... ?

You get the idea, but lest anyone misinterpret my question--I certainly don't think that (a) surveys of the folk serve as the only or best method to get all this empirical information or (b) that this empirical information is all we need to solve the philosophical debates, including the one engaged in here about whether scientific discoveries about human rationality, etc. can or should reshape our commitments and practices (e.g., regarding responsibility). Like you, I am an extremist for avoiding extremes, including regarding methodology. Wide reflective equilibrium all the way!

David (and Neil/Bruce/Eddy etc.),

Regarding the hot grease, here are three claims. 1] At present (and perhaps for the foreseeable future), cog sci will not deliver the kind of data that could substantively affect/alter/revise philosophical discourse about moral responsibility. 2] Because of how human beings are constructed, cog sci could not in principle substantively affect/alter/revise philosophical discourse about moral responsibility. 3] Cog sci could not in principle substantively affect/alter/revise philosophical discourse about moral responsibility.

Seems like Neil and Bruce doubt all three (I probably do too). You seem committed to 1], which is a claim about the immaturity of current cog sci. But 1]’s not very strong, and I think there’s a fun debate people might have about it. What about 2] and 3]? 2] involves an empirical commitment, but I’m not sure our psycho-functional theories are refined enough to justify it. 3] is the full-on Philosophy First picture. I doubt 3] is true simply because it’s too strong. It might turn out that folk psychology is on the right track, and that all cog sci will provide are refinements to that picture. But if folk psychology is not on the right track, as Neil is arguing, then data about just how *different* the human machine is from what we’ve assumed would seem to bring along with it all sorts of theoretical discomfort, right?

-Mainly, I’m trying to get clear on just how much your kind of perspective depends on a view about the adequacy of folk psychology.

There could also be a debate about what ‘substantively affect’ etc. means. You seem to take a Philosophy First view towards the possibility that science shows “the presuppositions of our commitments about the nature of "responsible agency" cannot obtain in agents like us.” But I’d think that if science required that we change what we mean by moral responsibility (fiddle with the concept, abandon a central commitment, etc.), this counts as substantive.

Thanks, Eddy. I'm already really on board, as I certainly seek out all the sorts of information you note and more. The question here, I take it, is what to do with it all, and on what basis do we make that decision? (And you note that empirical info isn't all we need to answer that question as well.)

The reason this discussion is so interesting and important right now for me is that I'm writing a book on responsibility (_Responsibility from the Margins_) in which much of my time is spent reading about the psychological and biological details of real life "marginal agents," those with Tourette syndrome, OCD, clinical depression, psychopathy, dementia, autism, and so forth. What I've come to believe is that these sorts of cases can (perhaps at most) help us figure out the application conditions of our theories of responsibility (e.g., the necessary/sufficient conditions for agential non-exemptions), but only under the rubric of a philosophical analysis of the nature of responsibility and an antecedent theory, along with some upfront admissions about the relevant conceptual commitments and whether these play some universal role. So empirical information will also bear on this very last issue, but again, what is to be done with it is a philosophical issue (I can imagine some -- and have read some! -- who couldn't care less that some commitment isn't shared more widely and so are happily conceptually chauvinistic). It matters to me that my theory be psychologically realistic, however, and so it also matters whether there is a subset of universal emotional responses (sentiments) to certain features of agency that reflect "our" commitments about responsibility. But of course, this is a stance and methodology I've had to argue for, and empirical considerations weren't relevant at that stage.


