Blog Coordinator


07/30/2013

Comments


Josh,

Thanks.

Two comments.

A. I think incompatibilists have to wrestle with the great work Eddy has done. At the least, incompatibilists have to consider the serious possibility that many or even most incompatibilists are incompatibilists for bad reasons. That doesn't mean that all incompatibilists are guilty of bypassing, and it doesn't mean that the *best* incompatibilists are. But it is important data nonetheless. I, for one, have updated my beliefs about incompatibilists and bypassing because of his work.

B. I haven't read your paper yet (although I read Shaun's great paper on bypassing - which hasn't received enough discussion on this blog). But I can speculate. I think what Eddy's data (and your data, and Shaun's, etc.) is showing is (likely) that:

B1. the folk think about decisions as exceptions from the normal causal order
B2. because the folk think about decisions this way, they think decisions require indeterminism
B3. this requirement for indeterminism is so strong that the folk cannot meaningfully entertain deterministic scenarios: either they think of determinism and picture a world where people don't meaningfully choose, or they think of indeterminism and allow people to meaningfully choose, but not both

I could be wrong about all that. People *do* seem to think about determinism correctly for some actions (as you note), and even for some (rudimentary, pedestrian, unimportant?) choices. But the more important, or morally consequential, a choice is, the more impossible it seems for that choice to occur in a deterministic world.

I'm not sure if Eddy would agree with B1-B3. What does seem to be happening, and what I hope Eddy and Shaun can agree about, is this:

C1. the folk seem to be making a mistake in understanding deterministic scenarios
C2. the folk seem to be basing their indeterminism on that mistake

The question then is: do C1 and C2 support compatibilism or its opposite? Eddy says they support compatibilism, because they provide an error theory for incompatibilism. I'm not convinced: it seems to me that at least some errors can exist without destroying corresponding beliefs. For example, we might have made an error about whether free will is valuable. But, as Pereboom notes in the intro to Living Without Free Will, even if that is true, free will might still not exist.

Some of an agent’s actions are attributable to bypassing, whereas others are not.

If you attempt to attribute *all* of an agent’s actions to bypassing, then you must be willing to believe that it’s simply an illusion that one thought within your physical brain is capable of affecting or influencing another thought within your brain in an intelligent manner (i.e., you must believe that your thoughts aren’t capable of exerting *any* control).

It can’t be both ways, wherein all of the activity within your physical brain is controlled *solely* by the four fundamental forces of physics in a predeterministic manner from the bottom-up (i.e., 100% bypassing is true), while at the same time one thought within your mind is able to interact with another thought in an intelligent manner (unless you believe human intelligence is somehow innate to the 4FFOP). The two ideas contradict one another.

Therefore, since we all know that it's not an illusion that one thought is capable of affecting another thought, 100% of an agent's actions cannot be attributed to bypassing.

Thanks, Josh. A quick question first. Your understanding of bypassing strikes me a little like the standard view of fatalism: you can go to Samara or you can go to Baghdad but either way death is going to get you. Do you have any thoughts about the connection between bypassing and fatalism?

One reason I ask is that you can't get from determinism to fatalism by first-order logic alone. So one supposition might be that the folk are committing some sophisticated modal fallacy. Is there any way to rule that out by experiment?

Joe,

This comment gets right to the heart of the issue. If we looked only at the case involving free action, it would be natural to suppose that people's intuitions in that case were the result of some kind of straightforward error in modal reasoning (say, conflating determinism with fatalism). But the striking fact is that people give exactly the opposite response in the bodily movement condition. So, in light of that difference, I'm thinking that we now have reason to conclude that the original result wasn't actually due to an error in modal reasoning (which would have applied equally in both cases) but rather to something very specific about how people understand free action. Does that sound right to you?

Kip,

What you say here makes a lot of sense, but I am thinking that maybe bypassing isn't best understood as a 'mistake,' at least not in any straightforward sense. As I emphasize above in my response to Joe, it doesn't seem like this is just a straightforward case of getting confused about determinism. It seems instead to be revealing something about how people ordinarily understand free action.

Of course, you might think that people's ordinary understanding of free action is incorrect, and in that case, you would say that their answers to this question are wrong. But even then, these answers would not be the result of people just misunderstanding what determinism is; they would be genuine applications of a conception of free will (though one that you regard as incorrect).

James,

The key question here is precisely whether it is true that if everything is determined and explicable in terms of the laws of physics, human behaviors can't also be caused by psychological states. Why exactly can't it be that a behavior can be explained in terms of physical laws but is also caused by psychological states?

Josh,

There are two kinds of mistakes that are possible here:

A. mistakes about how the universe and human choice would actually work if the world was deterministic
B. mistakes about what the "concept" of free will or free choice actually constitutes

I think it's fairly clear (based on work by you, Eddy, Shaun, and others) that the folk make type A mistakes, at least sometimes.

In reply, you seem to be reminding me that the data might show how people *accurately* (not mistakenly) think about free will or free choice. In other words, you are saying: even if people are making a type A mistake, they are not (necessarily, or even likely) making a type B mistake.

I think this is exactly right. I agree with you: I would not be surprised, and in fact mostly expect, that there is some module (broadly understood) in the human brain that processes human (moral) choices in an indeterministic manner. Humans might have evolved such a module, or such a cognitive illusion, for a variety of reasons, including the ones that Tamler outlines in one of his arguments (i.e. blinding people to the causes of human choice might make them more successful at holding people accountable for wrongdoing - I also think there are 10+ other ways these modules might have evolved).

Of course, in Eddy's defense, the fact that people don't make a type B mistake (if they don't) doesn't mean that they don't make a type A mistake. I think it's fairly uncontroversial that they do make a type A mistake. Eddy can still argue that this type A mistake presents an error theory for incompatibilism.

Some lingering issues:

1. what if the type A mistake motivates, or explains, the "correct" belief about the concept of free will? Is the concept still "correct"?
2. suppose that people think the concept is indeterministic, as outlined above. Is this belief "basic" or essential to the concept, or does it represent a misapplication of an underlying concept? Might that concept be compatibilist?
3. what about a Manuel Vargas-type revisionist move, which acknowledges the type A mistake, and uses it to motivate some kind of revisionism about free will, either to the concept itself, or its application?

etc. It gets complicated quick.

Josh,

Thanks for your feedback.

I don’t believe it’s possible to explain human behavior *only* in terms of physical laws, but I think behavior can be explained in terms of physical laws *and* psychological states.

When neuroscience examines the activity within a physical human brain and attempts to explain human behavior, there’s a fundamental human reference issue involved. Neuroscience is only able to perceive of the *result* of the net sum of forces after it has already occurred for each moment of time. That means they’re unable to sense any of the individual forces that are causing the net activity within a brain. Here’s where I’m going with this:

If it’s true that psychological states are higher level emergent entities which exert new emergent forces that add together with the laws of physics (4FFOP) inside a physical brain, neuroscience won’t be able to perceive of those new forces. Instead, neuroscience will simply conclude that *everything* they’re looking at is controlled by the 4FFOP.

So that takes us back to the point I’m trying to make: I believe we can be certain that psychological states (i.e., thoughts) exert new emergent forces that aren’t simply a sum of preexisting forces, because human intelligence isn’t innate to the 4FFOP.

If we take a step forward and believe that new forces are exerted from the “psychological state”, then we have a good reason to believe in agent causation and that free will in the strong sense exists (i.e., an agent may choose a different path due to new emergent life existing within him).

Hi Josh,

Your paper is characteristically Knobeian (which involves being clear, interesting, and provocative, although I wouldn’t presume to give a full account of ‘Knobeian,’ in part because I suspect it is a property that eludes analysis). I’m drawn to the general picture contained therein. A brief reaction:

1] Are you saying that the transcendence view, which you argue most people possess, is non-causalist? You say ‘The key idea is just that, on the transcendence vision, an agent can do something for a reason even when the resulting action was freely chosen and not caused by anything at all.’ That sounds non-causalist to me. But I’m not sure the view most people hold will turn out to be consistently non-causalist. It might be that they accept a causalist account of most intentional actions, for example. And it might be that they accept certain kinds of causal influence in general. I bet people would accept the causal influence of character traits, and also of values, on actions, even in a deterministic world. I also bet people would accept the causal influence of beliefs and desires once the content of those mental states is more fully specified (e.g., in some story involving Jane, ‘did Jane’s desire for her son to go to college have any effect on her A-ing?’).

1a] Is it possible to see your view as a version of the bypassing hypothesis? You suggest the folk aren't making a mistake about determinism, but something like bypassing might be happening anyway - the bypassing of the reason-story people see as crucial for action. This isn't quite Eddy's thought, but I can see a way to get to an error theory about incompatibilism from here nonetheless.

2] I suspect you’re onto something. A natural thought is that the transcendence view manifests most clearly at the moment of decision/choice. Do you agree? You emphasize the difference between choosing for reasons and a decision’s being caused by a belief. I do think this contrast is important, and figuring out the folk view of deciding (what it is to decide, how decisions normally happen, etc.) and its relation to the folk view of free will ought to prove illuminating (and difficult, given the tangled mess of issues that quickly crop up when one talks about deciding, reason-explanations, etc.).

In fact, am I correctly remembering that Eddy and Dylan had some studies on choice? Have those been published, or am I misremembering?

Hi Josh, Josh,

I was also understanding the transcendence vision in terms of the view that reasons-explanations are non-causal. Is that right? If so, it doesn't seem that plausible to me. We certainly seem to be happy using causal language to explain how people's reasons-responsive mental states influence their behavior. At any rate, I think one of the interesting things developing from this paper and Eddy's recent post is the suggestion that people don't just make the bypassing mistake regarding any kind of causation, but specifically rational (mental) causation. Here's an alternative hypothesis: people don't think reasons-explanations are non-causal, they just think they involve a different type of causation from brute, mechanistic, physical causation, and they think those two types of cause/explanation can conflict. So when you tell them that some behavior is (completely) physically caused, they conclude it can't be rationally caused. That explains why we see the bypassing mistake when asking about decisions/beliefs/desires and intentional actions, but also why we wouldn't see it when asking about emotions and facial expressions, given that the latter doesn't seem to involve reasons-explanations at all, and so doesn't raise the possibility of a reasons-explanation being "excluded" by a brute physical explanation.

Hi Josh and Dylan,

Thanks for these super-helpful comments. I definitely feel like we are gradually converging toward something really interesting here.

I see how it might initially seem implausible to suggest that people don't see psychological states as causing behavior, but it's important to emphasize that the key distinction here is between specifically *causal* explanation and other kinds of explanation. So the idea is that there is a difference between (1) and (2).

(1) He went to the kitchen because he thought there was a beer in the fridge.
(2) His belief that there was a beer in the fridge caused him to go to the kitchen.

Sentence (1) is clearly fine and unexceptionable. The claim, however, is that sentence (2) says something further that would tend to make people resist it. Even though the agent's belief can obviously explain his behavior in some sense, I was thinking that people might be wary of saying that his belief actually *causes* his behavior. ('The belief can't truly be *causing* him to do that! Even given the belief, he could always just have decided to do something else.')

I'm not sure if this suggestion involves any deep disagreement with Josh's claim that a psychological state can 'affect' behavior or with Dylan's claim that the issue here only arises for 'brute' causation. Perhaps there is a real substantive dispute here, but it might also be that we are just using these words in slightly different ways. So I'd be interested to hear any further thoughts you two might have.

p.s. A big thank you to Josh for his very kind words!

Hi Josh,

Thanks. I think I get the idea better now. What do you make of finding that most people disagree that bypassing occurs in the concrete cases (and the abstract rollback case where we tell them that bypassing doesn't occur), though? On the transcendence view, shouldn't they agree with bypassing for *all* reasons explanations? We use the language of "beliefs/desires/decisions having no effect on what the agent ends up doing." Do you suspect that 'effect' patterns with 'because' and people would agree with the bypassing questions in these cases if they were phrased in terms of *causal effect* instead?

I think you're right that the difference between the transcendence and "rational bypassing" views might be pretty thin for many purposes (though the latter might commit the folk to a less spooky metaphysics; out of curiosity, what do you see as underlying the 'because' in reasons explanations if not a causal relation? Can we say anything more about what sort of relation it is besides "rational"?). In particular, I have a hard time seeing how the two views have very different consequences for the traditional philosophical debate about free will and moral responsibility. On both views, it seems true that determinism, as such, isn't what's incompatible with free will and moral responsibility. Whether the real threat is from *causation* or *physical causation*, deterministic (physical) causation doesn't present any more of a threat than indeterministic (physical) causation, right? Isn't that a victory for compatibilists? (Maybe it's pyrrhic, insofar as (physical) causation of any kind turns out to preclude free will and moral responsibility, but they'd still be right - as far as the folk are concerned - about *determinism* itself not being the threat.)

Oh, and Josh S.: yes, Eddy and I have some data on "choice" and "ability to do otherwise" in Study 2 of the PPR paper, but they don't really pinpoint where exactly the worry in this vicinity crops up (e.g., in making decisions vs. the having and influence of more basic desires, etc).

Josh and Dylan,

I wrote this before Dylan's response, but I have little to add to Dylan's response except astute nodding and looks to Josh for his response.

I’m not thinking of non-causalism as a kind of anti-causalism. Some defend a kind of non-causalist view on which, even though mental states might be causally efficacious in producing action, they don’t rationalize the action. I’m not sure how many non-causalists would agree with this, but one way to think about it is whether any causal condition is necessary in order for an event to count as an action. If not, then even if the action is a part of the causal order in some way, that’s irrelevant to whether it is an action.

It looks to me like Josh is saying the folk are non-causalists. That’s a very interesting claim, and it might be right, especially regarding decisions (or something close enough to it might be right, such that it’ll prove fruitful to explore the matter further, irrespective of the philosophical debate between causalists and non-causalists).

Dylan raises a question about that though, and it’s worth noting that when I hear Josh’s prompt about beliefs causing action, it sounds like an incomplete story about the action. It’s not *just* the belief that causes the action. It’s the belief plus a desire, and for me plus an event of deciding – of forming an intention to go get a beer – that causes the going to the kitchen. Talking of action causation in terms of the belief (or even some belief-desire pair) seems to leave out the active component of the whole thing, namely, my intentional action of deciding to go get the beer (to be clear, when it’s actually me involved, the going to the fridge for beer is by now fully automatized, but I can remember a time when things were not this way).

Now, I think I can give a causalist story of the active component (actually have a paper on it I’m finishing up), but many doubt this can be done, and at any rate I doubt the folk have a detailed view about the causal portion (I do think they have a robust folk psychology of deciding, which we know little about at present). Maybe their view is that talk of causation here is somehow out of place. If so, I’m not sure the belief prompt shows it – since it leaves out the active component.

One suspicion I have – the causation bit isn’t as worrying to folks as is the thought that something-not-amounting-to-them is doing the causing (like a stray belief). Josh’s prompt puts folks in a 3rd-personal frame of mind about their decisions, while the natural view of decision-making comes packaged 1st-personally. The 3rd/1st person distinction is in the neighborhood of Josh’s view, but it isn’t quite the same – it doesn’t commit to the folk view of action involving all of the normative stuff inherent in talk of choosing for reasons (although of course it could turn out to be consistent with Josh’s view). I suppose, now that I’ve written it, this is in part why I think investigating folk views on (free) deciding should prove illuminating – one wonders what the folk metaphysics of deciding really are, and what they might portend for various views in the free will literature.

I'm excited to begin the discussion of Carolina's very interesting post, but first, a quick response to these last two comments...

Dylan,

Although we may still disagree on a few points, it definitely seems like we are beginning to converge toward a shared view here. Just two further thoughts on your most recent suggestions:

1. You are completely right to say that the hypothesis I've been developing does not point to any kind of problem involving *determinism*, at least not as that notion is understood within contemporary philosophy. Instead, the central problem is specific to *causation*. So the hypothesis predicts that the worry about free will should arise specifically when people are told that human actions are caused by past events and not when determinism is presented in other ways (e.g., in terms of perfect predictability or rollback universes).

2. You are also right to say that a key question arises as to how different reason explanation is from causal explanation. If the two end up being very similar (say, with reason explanation just being a minor variation on causal explanation), it would make sense to think of these as simply being different varieties of causation. But of course, if the two are radically different, it would make more sense to say that people don't think of psychological states as 'causing' human actions in any way.

Josh,

You are completely right to say that the view I develop is importantly different from the one championed by contemporary non-causalists, and I think you are getting at exactly the key respects in which the two come apart.

At the most general level, the key difference is that contemporary non-causalists are trying to construct a view that reconciles people's ordinary practice of explanation with a broadly naturalistic picture of human action. By contrast, my claim is that people's ordinary practice is fundamentally in conflict with the naturalistic picture. So the approach I suggest to understanding ordinary explanations does not at all serve to reconcile them with the kinds of explanations people give in the sciences.

Once one takes that approach, it becomes possible to give a pretty straightforward answer to your questions (though perhaps not an answer you would find congenial). The idea is that our ordinary practice of explanation does involve the idea that human actions are caused, but it doesn't involve the idea that these actions are caused by psychological states. Instead, the actions are simply *caused by the agent herself*. (Of course, this only makes sense if one assumes that the agent herself is not just a collection of psychological states, but I think that recent experimental results provide evidence that people do make that assumption.)

Thanks Josh.

I accept that "bypassing intuitions" seem to "involve something very specific about free human action." But why can't the fatalistic (for lack of a better term) conclusion be the result of a fallacy? Why can't something like the following explanation work?

"People do OK with modal reasoning in general. But they think that the modalities relevant to human agency are special. For instance, many think that human agency requires some kind of ultimate sourcehood, or a strict sense of 'could have done otherwise.' Such conclusions, though, are the result of fallacious modal reasoning -- dependence on fallacious arguments like the consequence argument."

Hi Josh, thanks for posting! Just to offer my summary of this discussion, we do seem to be converging on three important claims:
(1) When it comes to FW and MR, determinism per se is not the main concern of ordinary people--rather, people think FW and MR require that *selves* (or their relevant features) play the right sort of causal role in decision-making and action, and people take certain types (or presentations) of causation to be threatening when interpreting them to involve bypassing of the agent's self or relevant features of it.

(2) People have different intuitions and theories about causal relations among events in the physical world vs. causal relations among events within agents' psychology (or between agents and events in the physical world).

(3) And of course, we believe empirical information about the psychological processes underlying these ways of understanding the world is relevant to the philosophical debates in various ways (not the least of which is the one you helpfully remind us of--philosophers always have been, and should be, interested in how the human mind works!)

Nonetheless, we still have significant disagreements about what the existing empirical information suggests about (1) and (2). You believe people think selves are 'transcendent' (non-physical?) and outside the causal realm, yet able to cause things in that realm. I think most people have a less theoretically robust understanding of mind and self. They (rightly!) think that mental processes (including reasoning) are not just like physical processes in lots of interesting ways but that does not mean they think they are non-physical or non-causal. Biological processes are also unlike physical processes in lots of ways but we now have theories to help us understand their relationship.

As I discussed in my posts, we still lack such a theory to help us understand the relationship between reasons and (physical) causes, but unless they are presented as competing, most people do not seem to take complete causation to bypass the self (or reasoning and reasons).

We need more data, of course, but note that the rollback scenarios do include (complete) causation, including the ones contrasting psychological and reductionistic causes (in paper with Coates and Kvaran), and the brain-scan studies I posted suggest physicalism (and most people say they entail that brain activity causes all decisions, yet say they do not rule out FW), but we're trying ones with stronger anti-dualism language now. Let's try to figure out what sort of evidence would show that most people think of the self as an entity that exists independently of both (a bundle of) bodily and psychological features.

Very nice, Eddy. Thanks. This empirical stuff is always difficult for me to wrap my head around.

Just to be clear, when you say "people think FW and MR require that *selves* ... play the right sort of causal role in decision-making and action" you are speaking loosely. You can allow for non-causal theories of action, right?

Which brings up the question: What is a good term covering the "causal" relationship between agent and action that doesn't prejudice the causalist/non-causalist debate?

Hi Eddy,

Yes, it seems like we are really converging on a shared approach here. In fact, looking at the areas where our views still seem importantly different, I feel like some of the apparent disagreement is actually just due to an unclarity in my original paper (which you rightly criticize in your published response: http://philpapers.org/archive/NAHANV).

My assumption is that people's minds include certain psychological mechanisms that generate intuitions about the self, but in describing these mechanisms, I fear that I sometimes (perhaps misleadingly) characterize them as constituting something like a 'conception' of the self.

More specifically, my claim is that these psychological mechanisms are constructed in such a way that (a) people have the intuition that the self can cause voluntary actions but (b) people have the intuition that prior states and events, including psychological states, do not cause the self to perform these voluntary actions. In the actual paper, I sometimes express this point by saying things like: 'People hold a conception of the self according to which psychological states do not cause the self to perform voluntary actions.' But perhaps this is a bad way of putting the point. I definitely didn't mean to suggest that people have a coherent view about the self (say, as a physical or non-physical entity) from which these intuitions were being derived.

Having said that, does my view still count as a rejection of the approach you are suggesting?

Joe,

You are completely right to say that the responses people give in these cases might be the result of an error or fallacy, but as Kip helpfully points out in his comment above, it is important to distinguish between two different kinds of errors here.

On one hand, there is the possibility that people have a certain conception of agency, but they are failing to correctly apply it to the case of a deterministic universe. Then, on the other, there is the possibility that they are correctly applying their conception of agency, but that conception is itself a mistaken one.

I was only trying to argue against the first of these two possibilities. But you could very well opt for the second one. So you could say that these bypassing intuitions are an expression of something fundamental about people's actual conception of agency but that this conception is simply incorrect.

