
Comments


Hey Josh,

Really great paper and post--I enjoyed both a lot.

I'm sympathetic to [1] and [3] as well, though I think [1] is a bit delicate depending on how you flesh out 'fundamentally distinct', so I'm looking forward to hearing about the ways in which you take control over action to be different from control over choice.

Below, for what they're worth, are some initial reactions in response to your "have I left anything out?" query:

It seems to me that in both the folk understanding of control and in the more theoretical discussions that take place in the philosophy/cog sci literature, there is an important aspect of control that relates to one's ability to successfully *refrain* from performing a given action. (In cog sci speak, this might be cast as something like the inhibitory function of executive control.)

Now on your account of control, we're told what it means for an agent to have control relative to a certain intention. So suppose we have an agent, an addicted smoker, with the intention to have their customary post-dinner cigarette. You give a nice explanation of what it means for the agent to have control over the cigarette-smoking behavior they perform in the service of that intention.

What seems to be left out of this picture, however, is the lack of control the agent has with respect to *not* having that cigarette, i.e., with respect to refraining from acting on the intention to have one.

I think this needs more filling in, but what I've said briefly here might at least motivate two worries: first, that you've left something out of an account of the foundations of agency, insofar as agency may require the kind of inhibitory control I'm gesturing towards; and second, that you've left out free will even for some compatibilists--regardless of the truth or falsity of determinism.

I'd be interested in any thoughts you might have about that! (A quick one of my own: Maybe this is where the distinction between control and self-control comes in, where your focus is solely on the former, and mine seems to lean towards the latter. But if you did want your account of control to cover inhibitory control, do you think this could be done by adding certain counterfactual scenarios--or dispositions if one likes those better--involving an additional intention or motivational state to refrain from performing the action specified in the agent's intention and then running PC on those?)

Hi Myrto,

Awesome. A comment about worry 1 and worry 2, and then another comment.

About worry 1 (agency requires inhibitory control, and my approach might not capture it): We might sometimes exercise inhibitory control in the absence of a competing motivational state, and as a part of a run-of-the-mill exercise of control with respect to some intention. It’s not clear to me we need to posit motivational mental states to explain utilization behavior, for example, and if not, one might exercise inhibitory control over wayward perceptual-motoric mechanisms in the absence of competing motivation. BUT. The more interesting cases are definitely those involving competing motivational states. I do rather like the way my account deals with these kinds of cases (big surprise!), although I also think you are right that many issues that arise here are addressed in the self-control literature and are not addressed in my paper. One thing I don’t say a lot about in that paper – and one thing I don’t think we need as a ground-floor requirement on agency – is what to say about which motivational states are more normatively significant in cases of competing motivational states (i.e., should the agent throw her weight behind state A or state B, and why?).

Maybe this relates to worry 2 (that control of type-1] leaves something crucial about even compatibilist free will out of the picture): are you thinking of free will as requiring a kind of deep self condition? If so, I could see that. I suppose there I’d just say again that this kind of free will seems to be an add-on to agency – we can have an agent who exercises control over her behavior, even if that agent fails to meet any deep self condition.

A comment about when you said ‘suppose we have an agent, an addicted smoker, with the intention to have their customary post-dinner cigarette. You give a nice explanation of what it means for the agent to have control over the cigarette-smoking behavior they perform in the service of that intention. What seems to be left out of this picture, however, is the lack of control the agent has with respect to *not* having that cigarette, i.e., with respect to refraining from acting on the intention to have one.’

Basically, yeah, I get the worry here. One way to put it is this: maybe what any type-1] account of control doesn’t capture is ‘control over options.’ I didn’t mention in my initial post a relevant recent paper by John Maier in PPR, where he tries to work out much of the nature of agency from this notion of options. I’ll be honest: I’m going to be deflationary here about control over options or any similar notion, and say that the account of control over action (plus some bells and whistles for control over choice) is all we’re able to get. (So, in the case of having control with respect to not having a cigarette, I want to say: if there is a relevant motivational state, talk in terms of control with respect to that state; and if there is not, then there's no problem.) But I think that plenty of philosophers at least tacitly disagree with me on that point, and so in part I am trying to elicit more explicit thoughts to that effect with this post.

This is a nice topic. Agency itself is a broad topic. So there's a risk of getting lost when discussing it. I think of agency as the property of being an agent, and I think that what it is to be an agent is to be a being that acts. (I like to keep things simple when I can.) Let's assume that we're concerned only with agents that sometimes act intentionally. (I'll call them "intentional agents" to save space later.) Even then, if we start toward the bottom, we go quite a distance before we get to normal adult human agents. So when I see an expression like "ground-floor requirement on agency" or "essential to agency," I ask myself whether what we're talking about is a necessary condition for being an intentional agent or a necessary condition for being an intentional agent of a certain kind. I ask myself the same question when I see a sentence like "agency requires inhibitory control." A being that has no inhibitory control might act intentionally, but normal human beings (provided that they’re not too young for this) definitely have some inhibitory control. So before we get too far, I'd like to ask what range of agents we should be considering. For your purposes, Josh, is it ok to limit the range to agents who not only perform overt intentional actions but also sometimes think about what to do and make decisions or choices? Or, if you don’t want to limit the range of intentional agents under consideration, should the expressions I quoted perhaps be modified along the following lines: "ground-floor requirement on normal human agency," "essential to normal human agency," and "normal human agency includes inhibitory control"?

Hi Al,

Yes, I’m happy to limit the range of agents for present purposes (although my attempt to connect these issues to what libertarians and compatibilists might say blurred the lines here, I suppose). Even having done so, though, I think the question about choice comes up. Go as low as we can, and find a relatively simple intentional agent: does that thing make choices (where choices are intentional momentary mental actions of forming intentions about what to do)? Is that thing’s doing so essential to its being an intentional agent? If so, why?

Perhaps this deserves a further post, but I’ll mention it here, since life is short. I’m also interested in what happens when we scale this up to normal human agency. Now we’re no longer talking about what is conceptually required for agency, and I’m not sure there is anything like a ‘concept’ of normal human agency. But there’s still an interesting question here: how important is the capacity for intentionally deciding to human agency? What does it add, and why?

Back to the original thought. Regarding keeping things simple (which I also like to do, though I’m not very good at it): there’s a simple argument that goes from your view that an agent is a being that acts to the type-1] view I mention in the post: an agent is just a being that acts; actions of any type require (some degree of) control; some actions are not choices; so choices are not necessary for agency. I wonder if any readers disagree with that simple (also probably quite simplistic) argument.

Josh,
About the simple argument you sketched... Here’s a view someone might hold: Although some actions are neither choices nor chosen, nothing is an agent unless it sometimes chooses; being an agent (agency) depends on having made or making at least one choice.
Here’s a brief dialogue about this view.
Ann: Hydrogen chloride is a chemical agent. Hydrogen chloride never makes choices. So some agents never make choices.
Rudolph: That’s not the kind of agent I’m talking about. I’m talking about animate agents.
Ann: Ants are agents. Ants never make choices (momentary mental actions of intention formation). So some agents never make choices.
Rudolph: I meant to say that I’m talking about agents that sometimes act intentionally. I think that acting intentionally requires intending things, and I don’t believe ants intend things.
Ann: OK. We can look about the animal kingdom for agents that act intentionally but never make choices. Hey, what about normal young children of a certain age? Might they get to the point where they sometimes act intentionally before they get to the point where they make choices? My friend Al says that choices or decisions are always responses to uncertainty. Might young children act intentionally before they ever resolve uncertainty about what to do by making a choice?

Al,

Right. Well-put. (Apparently once one starts writing dialogues, it’s difficult to stop. So I’m scared to start.)

I want to say that some poor thing that only acquired intentions, and was never uncertain about anything (but one could be uncertain even if in the moment one always acquired intentions in a non-actional way, right?), would be an agent. After all, if some actions are neither choices nor chosen, what's so special about the ones that are? But the more I think about this, the less convinced I am of that.

Here's one reason why. Suppose we put aside momentary mental actions of intention formation for a second, and just think about the absence of uncertainty. A part of me reacts like this: this thing is never uncertain? It never has to pause for a bit when confronted with more than one action-option? Does that mean it is never genuinely confronted with more than one action-option at a time? What kind of mental life does this thing have? This thing can’t be an agent. A clunky reactor to the world, maybe. Or, if not clunky – if the absence of uncertainty isn’t a functional deficit – then a scary cyborg reactor to the world. But not an agent.

Now, that’s just a reaction, and I suppose it suggests that I regard uncertainty and the kind of deliberation that uncertainty prompts as a pretty natural way of being in the world. Maybe this is also an expression of an intuition that people who talk in choice-first ways share. Another simple argument: if a being is never uncertain about her behavior-options, then that being’s behaviors are not relevantly different from reflexes. But reflexes are not actions. So the never-uncertain being is not an agent after all.

Whether or not I endorse that argument, it strikes me that the task of elucidating a relevant difference between reflexes and the never-uncertain being’s behaviors is a non-trivial one.

Hi Josh,

This may be a naive question, as this discussion is WAY outside my research area, but I'm going to ask it anyway. Does your account of the foundations of agency allow for us to talk sensibly about group or collective agency?

My worry here is that if control over action is foundational for agency, and this is understood in terms of a motivational mental state with causal powers, it starts to sound like group agency is a completely different sort of thing from individual agency.

Maybe we can resolve this by playing "fast and loose" with some of our intentionality talk, or we can talk about some relevant sense in which groups are minded entities. As far as I know, there are people who work on this topic who adopt one or both of these strategies when talking about group agency. Alternatively, maybe group agency really is a different sort of thing from individual agency, and your account helpfully explains why.

My point here isn't to defend or reject any of those options. Rather, it's to try to respond to your "Have I missed anything?" question. I think someone might say, "Yes, you've missed something--agency need not be limited to individuals. Groups can also be agents, and what makes them agents isn't that they exercise the sort of control you're talking about, but rather that they make choices. What you've described can't be the foundation of agency," such a person might conclude, "because it only allows for individual agency."

Hi Eli,

The truth is I don’t really know how best to think of group agency. But this is a blog, so that’s not going to stop me from speculating irresponsibly.

Think about the San Antonio Spurs. They engage in group actions – executing a play, trapping a ballhandler, etc. It’s not clear they need to make choices to be the group agent they are. (Generally, group agents that make choices seem to have a kind of formal structure set up that allows for group deliberation as well as a way for the decision to be made via a kind of performative action – think of the Supreme Court.) So, I would deny that group choices are essential for group agency.

Now, can the account of control I lay out be extended to even some group actions? Probably – so long as a group action is implemented according to a plan, we could think of control with respect to that plan as control in service of a motivational state. I don’t mind playing fast and loose with intentionality talk in that way, although it would be nice to have an account of the nature of the kind of plan I just mentioned (interlocking mental states of individual agents?).

I should say that I don’t offer an account of the foundations of agency in that Phil Studies paper, or anywhere else. I would like to think of the paper as contributing to such an account, but that might just be wishful thinking (thus this blog post). I think group agency is interesting in this connection. Is group agency derivative in some way from individual agency, or not? I’d welcome pointers from anyone regarding work that addresses that issue. (A quick philpapers search turns up a number of potentially but not obviously relevant papers.)

Hi Josh (if I may)--

I have some sympathy for Al (the other much more famous Al since some here call me that too) on this.

I gather that you're on board with Neil's latest book on consciousness as a necessary condition for responsibility, since that would differentiate ants from aunts on that matter. Someone (Watson?) in the literature many years ago said that a spider seems to try to catch a fly (constructing a web and attacking an entrapped fly--'help me!!!'--yet another Al, (David) Hedison, seemed to vocalize a fly's sense of victimization at the end of the 1958 film!). But presumably spiders (and ants) lack consciousness even if they satisfy some minimal condition of intentionality by some behavioral standard.

So I'm partial to Al's (not Hedison's) point--conscious choice seems relevant, even if as a necessary condition the nature of that choice is not metaphysically transparent. And it may well be that mental "choices" are as luck- and cause-tainted as any spider-intention. But then Spiderman's uncle--married to his aunt!--had a great insight: with great power comes great responsibility (or: responsibility is proportional to a power spectrum leveraged against human-centered value-judgments of the nature of that spectrum). He did not emphasize any role for free-will choice in that dying bit of advice. Agents count as such only as evaluated by some specifically human standard of agency, and by what constitutes a relevant power of agency. What is relevant can only be determined by some stipulation of what's valuable. We need first to determine what values make most sense with respect to what we empirically know about human behavior. All else is a priori meandering.

Hi Al (Alan?),

Nice comment! I really like Neil’s book, although my thoughts about Neil’s position there are difficult to state quickly (I’m going to talk about consciousness and FW in later posts). Just to make sure I understand your position, would you agree to these claims:

1] Low-level non-choosy beings can be agents.
2] Whether 1] is true or not is not so relevant to how we think of human agency.
3] Conscious choice is critical, in some sense of critical, for human agency.
4] Determining what is relevant for evaluating whether something is an agent, or what is relevant for evaluating the degree of the ‘power of agency’ some being possesses, depends on the specification of what is valuable about agency.

I’d particularly like to hear your thoughts on 3] and 4]. I might agree with 3], but I doubt I’d agree with 4]. But I’m not sure what you mean by valuable here. Could you say more?

Josh--call me Alan; avoids the real Al's being unduly smeared.

Thanks for your comment, which shows that you are a better reader than I am a writer, because I do accept some version of 1]-4]. Let me defend 4].

I hope I may be forgiven some repetition here in using an example I've harped on at Flickers before. But some comments on it have encouraged me.

Many key concepts in FW/action theory--FW itself, MR agency, responsibility, and more--seem to me to be like the concept of death. On the one hand, barring skepticism, the terms appear to refer to some states of affairs in the world. In some general sense it seems indisputable that people do die, and that people sometimes make choices. But exactly what is going on (if anything) is partly determined by a value judgment (or a set of such). Concepts of death--cardio-pulmonary, whole- or higher-brain, cessation of all biological processes, and so on--refer to things in the world depending on their assessed importance. That seems no less true to me of matters of agency, FW, and responsibility. Classic compatibilists only care about free action, while incompatibilists think that's not deep enough, requiring more metaphysical detail about choices and determinism. But what is properly and specifically referred to depends on what's valuable to whom.

Think of the recent case of that poor girl who went into a coma after simple surgery. Her immediate physicians wished to use whole-brain-death criteria to pronounce her dead, but the family wouldn't hear of it, and finally appealed to private facilities to take their daughter and continue to treat her as if she were alive--which, in their minds, because she still had a heartbeat and breathed, she was. Dying is a process, quick or not, and in some lengthier interim periods disagreements obtain about what's really going on. But those disagreements about death in these cases are not about objective data--they're about what that data means in terms of valuation. The values determine successful or unsuccessful reference of terms like life and death. Likewise--I'd say--for whether one in a given case acts of one's own FW, or whether one is responsible for a given act. The objective facts, as we can collectively assess them, vastly underdetermine what we mean when we use value-laden terms--like FW, or agency, or death--to refer to those facts, or aspects of those facts.

Well that's enough for now. My basic view is that no amount of what "is" can nail down what "oughts" apply. I've long thought and argued that world-views determine values, and values determine proper reference of value-laden terms, and differences of values lead to disagreements about proper reference. Others might argue that FW, agency, etc. are not value-laden or value-dependent concepts, but I just can't see any convincing argument to that effect.

Thanks for listening, and listening very carefully! I should also say, as I've said before, reading and teaching Robert Veatch's views on death have greatly influenced my thinking here all around.

I’m thinking that “agency” is a difficult term to define. My (humble) opinion is that every living thing is an agent. Since every agent has different characteristics, it’s difficult to talk broadly about agents. In order to define an agent precisely, we’d need to somehow identify what exactly is exerting the control, and then figure out how to qualify that – and that’s a daunting task. Happy 4th!

Alan,

I think you are thinking of Harry Frankfurt's paper, "The Problem of Action," in which Frankfurt talked about the purposeful behavior of spiders.

Hi John--

It bugged me (sorry) that I couldn't toss off the reference, so I dug back into my books. Though I have read the Frankfurt piece, which talks about the movements of spiders from internal and external sources, my specific recollection was of a spider trying to get a fly. And indeed it is Watson, in his 1987 Mind "state of the art" overview article, "Free Action and Free Will," which I read when it was first published and which has had quite an influence on my thinking about FW/MR. There he says: "But we do think of spiders as *trying* to get a fly. And where we think that a serious notion of action is applicable, so is that of trying." (Reprinted in Agency and Answerability, 194.) Thanks for the additional tie-in.

Hi Alan,

That’s an interesting view, and I like the analogy with death. I might not really disagree with you, since it looks like we can think of agency like this:

Agency*: Any being/system with the behavioral (and perhaps internal structural) sophistication to warrant the attribution of mental states and of behavior that conforms to the content of those states.

Agency^: Any being/system with the behavioral (and perhaps internal structural) sophistication, and the right kind of connection to what we value, to warrant the attribution of moral responsibility.

That’s rough, obviously. I just mean to say that we can think about whether a thing has a mind, and whether a thing’s mind is implicated in its behavior in some way, without worrying about the moral status of that thing’s behavior. That’s the kind of ‘ground floor’ agency I was initially after in the original post. Maybe ground floor is a misleading term, and what we’re dealing with here are just different kinds of agency (although it seems to me that an agent^ will need to be an agent*, but not vice versa).

Does Agency^ get at your notion, or am I misunderstanding?

Hi James,

The above might also address your idea. I take it that not every living thing meets Agency^ - so not every living thing is an agent in that sense. Yes or no?

Josh, your Agency^ captures my idea perhaps even better than I'd conceived of it. Thanks.

I would add that one reason I like Agency^ so much is that your phrase "the right kind of connection to what we value" might lend itself to a pragmatic interpretation of justifying that connection.

Again, very astute mind-reading of my intentions!

Alan,

Great--thanks for the scholarly work! (and I liked the joke...)

Hi Josh, thanks for the great post! What follows is a response to the question that you raised in your original remarks above, i.e., does your account of control leave something out?

First, I agree that control is a matter of causation; I agree that an agent’s control over her bodily actions is not fundamentally distinct in kind from her control over her mental actions; and I share your aversion to ‘choice-first’ approaches to the foundations of agency. But I worry that you have, indeed, left something out.

As I interpret your remarks above, the ‘ground floor’ level of control is the control that the agent has over the onset of the relevant motivational mental state, which when all goes well non-deviantly causes the onset of the relevant behaviour. In the case of overt bodily action in particular, this suggests that, for instance, when I am moving my legs as I am walking to the café to buy some pastry, the movement of my legs is causally controlled not by me or by something that I am doing, but by the onset of the relevant motivational mental state. That is, the presence of my relevant motivational mental state is causally controlling my body as it moves its way to the café, so that in order to control the movements of my body, I must first control the relevant motivational mental state.

Supposing that I’ve understood what you’ve said, here’s my worry: the account of control incorrectly describes the causal relation between the agent and her own bodily movements, at least during her performance of an overt bodily action.

To see why, consider how you would go about moving your right arm while lifting a mug of coffee off the top of your desk so as to sip its contents. Imagine that you’ve got the handle of the mug firmly grasped in your right hand, your right arm is nearly fully extended and ready to move, nothing seems to be out of whack, and then what? To control the movement of your arm you first control some motivational mental state, which when all goes well non-deviantly causally initiates the motion of your arm? That doesn’t seem like the right description of what’s going on, even in such relatively mundane scenarios. What seems right is that, in order to control the overt movement of your right arm, you must exert some degree of effort, if only to set in motion the relevant muscles and joints in your arm as you lift the mug off the surface of your desk, thereby overcoming gravity and other relevant forces.

Thus, it seems that at least in the case of overt bodily action, the ‘ground floor’ of agency is even lower than you suggest: it is neither choice nor control, as you articulate the latter notion here, viz., as control over some motivational mental state. Rather, the ‘ground floor’ of agency is the exertion of effort through which you control the overt movements of your body.

Cheers, and Happy 4th!

Hi Michael,

Thanks for the interesting comment! Four thoughts in reply:

1] I wouldn’t say ‘control over the onset of the mental state’ – I would say control (the kind I discuss in that paper) is largely a matter of how the mental state gets implemented in behavior. Maybe I’m reacting largely here to ‘onset,’ since in my view an intention isn’t extinguished at the moment of action initiation but continues to play important causal roles throughout an action’s execution.

2] I don’t have a good grip on what it means to control the relevant motivational mental state. I’m thinking of control as something an agent does with respect to a given motivational mental state.

3] At the end of your comment you suggest what we might call a trying-first account. I think that’s interesting, and it raises the question of how we should understand the nature of trying. One way – Fred Adams and Al Mele spell this out in a 1992 paper ‘The intention/volition debate’ – is to think of tryings as identical with the effects of a proximal intention’s normal functioning (initiating, sustaining, guiding action). I’m pretty happy with this way, although I might fill in what sustaining and guiding comes to a bit more, in terms of a range of agentive capacities to direct attention, inhibit competing plans, etc. But on this understanding, you can see that the general way of understanding agency I’ve proposed is relatively untouched. Of course trying is important, but to try is, necessarily, to exercise some degree of control with respect to some motivational mental state. There are other ways of understanding trying, of course. But you have to be careful, in my view, since I think our understanding of something functional like trying is constrained by our best neuropsychological models of action. Many of the proposals that try to think of volition as something sui generis or agent-causal don’t seem to mesh with the science too well. But I figure you have some thoughts about this!

4] A further issue your comment raises – one of my favorite issues, really – concerns agentive experience. You talk about how it normally seems to us, and you say the exertion of effort plays a big role in how we control (or at least initiate) an action. In my view it is a difficult thing to spell out how the experience of exerting effort relates to the mechanisms or processes that undergird action control. I have a view here – I wish it were already published, but it is under review! – and I’m happy to talk about it to some extent (it gets a bit too hairy for blogs). (I do say some quasi-related things in a paper ‘Conscious control over action,’ which is up at philpapers.) But I’d rather hear what you think about this issue. Any thoughts?

Thanks for the great reply, Josh! I’ll say a little something about each of your remarks:

(1a) First, a quick terminological note: I used the word ‘onset’ because I had assumed that your conception of control is offered within a theoretical framework in which the relevant causal relata are events. Given that a ‘state’ of mind is not obviously an event, I framed the issue in terms of the ‘onset’ of such a state, which would seem to be an event.

(1b) Terminological issues aside, and sticking with the case of overt bodily action, I agree that I control *how* I implement a given motivational mental state in my behaviour, i.e., in my bodily movements. But, how do I control that behaviour? This is where I think the exertion of effort plays a crucial causal role. I control the way in which my given motivational mental state is implemented in my behaviour by controlling the movements of my body, and I control the movements of my body in part by exerting the kind of effort required to move it through my immediate spatial environment. After all, in these sorts of cases, my body does not move by itself. Right?

(2) It seems to me that controlling the relevant motivational mental state (the act of making a decision, say) is a matter of performing a mental action. Perhaps you’ll discuss such issues in a future post?

(3a) As I understand it, the notion of effort is distinct from the notion of trying, especially as you outline it here. The exertion of effort is most clearly recognized by considering circumstances in which you *struggle* to perform an action with your body. When you are lifting a heavy object, for instance, you are using the relevant muscles and joints in your body to overcome the causal effect of gravity and other forces, and it is in cases like this that your exertion of effort is most salient, and its causal powers most clearly manifest. Trying, as you outline it here, is a very different kind of phenomenon.

(3b) Indeed, I have a few thoughts about whether current best practices in the neurosciences of action *should* constrain philosophical reflection on action-theoretic issues. Very briefly put, I think that current best practices in the neurosciences of action should be constrained by philosophical reflection on action-theoretic issues, not vice versa. So, for instance, if philosophical reflection suggests that there is good reason to think that the exertion of effort plays a crucial role in the causal genesis of action, then our current best practices in the relevant areas of neuroscience had better incorporate that into their explanatory and predictive frameworks. This thought is related to the last topic that you mentioned, agentive experience – which I, too, would count among my favourite action-theoretic issues.

(4) About agentive experience: my interest in agentive experience is in getting clear on the myriad forms that such experience seems to take, and then modifying the scientific and philosophical conception(s) of the underlying mechanisms or processes in the light thereof. But it seems you think that we should proceed in another way?

Hi Michael,

Great stuff. Some thoughts about your thoughts.

Your 1a]: Yeah, that’s the normal way to think of these things. But I think some supplementation is necessary to account for the sustaining and guiding functions of an intention (or even of whatever mental capacities – attention, inhibition, etc. – are important here). This could still fit within an event-causal framework, it’s just that intentions will not persist through time in any kind of simplistic way.

Your 1b]: Right, at least not always – I agree that sometimes there is an experience of trying or exerting effort to move the body in various ways.

Your 2]: I think mental actions are important for control over bodily action, but I wouldn’t want to make mental action necessary. We sometimes acquire mental states in a non-actional way, and some exercises of control seem to me to be largely a case of this.

Your 3a]: On the difference between an experience of trying and an experience of effort. Maybe. We might disagree here. These kinds of struggle might be thought of as proprioceptive experience, or as non-sensory (i.e., sui generis) agentive experiences. I suppose I’d locate the experience of bodily effort on the proprioceptive end. I do think there is an experience of mental effort, and giving a good characterization of this is difficult (and not blog-friendly). Maybe we’d ultimately agree about mental effort, although my thoughts on this are not fully settled.

Your 3b]: I wouldn’t counsel any kind of unquestioning adoption of what a scientist says. Data constrains philosophical theorizing here, for me, just in the sense that our theories should try to explain the data we have, and if the best explanations require rejection of some view that introspection favors, we have to take the possibility of rejecting that view very seriously.

Your 4]: I think I agree that this is the way to proceed. I might just emphasize more of a back and forth between our understanding of the phenomenology and our theories of the mechanisms that subserve the phenomenology. I think it is sometimes the case that we can be deceived about bits of our phenomenology, or at least attribute elements to it that aren’t really there. Sometimes good science can help clear things up for us regarding our phenomenology. However, much more attention needs to be paid to our phenomenology than is often the case in the agentive experience literature, and I think much more creative and illuminating science could be done if scientists would be willing to pay such attention.
