
Comments


Complexity does not change the underlying laws of nature. Though we are understandably awed, and often pleasantly surprised, by the things Homo sapiens brains can cook up, there is no meaningful difference between them and any other mechanistic system. Input-Output. That's it. Haven't we debunked this sort of anthropocentrism?

Thanks for this; I look forward to your response on the previous thread. But I think Thomas has me down for February--otherwise I'd better get writing sooner than I expected!

What Brent said.

Oh, and I'm on next month, or so I think.

Brent, it all depends on what you mean by 'meaningful' difference. Is there no meaningful (real) difference between a yellow flower and a purple flower, even though the difference in color can be explained in terms of low-level organization of parts or mechanisms? No meaningful difference between this blog and Leiter Reports, even though the information on the blog is composed of 1s and 0s and mechanistic relations between them? No meaningful difference between my understanding of free will and yours, even though both are "just" input-output relations in our mechanistic brains? Reductionism and mechanism are only bad words if we let them be...

Meanwhile, Peter, I've loved your posts, especially these last ones on imagination and free will, an under-explored connection. I have a chapter coming out in the Oxford Handbook on Freedom (ed. by David Schmidtz) that discusses the relationship between imagination and free will. And Chandra Sripada is also working on this issue. Oisin Deery has a paper coming out (I think in Phil Studies) on similar issues. Sorry I haven't been posting more comments--the holidays and all--but I've been reading and learning. Thank you for a great month.

Peter, I'll add my thanks here too for all your energetic and interesting posts this month. I remain unconvinced, however, that libertarian freedom adds to the control and responsibility which exist for deterministic agents. True, harnessing randomness might add to the range of imagined possibilities for a choice, but that harnessing is a deterministic function of the extent it had selective advantage for us: “High variability affords creativity and exploration, but comes at the cost of efficiency and speed (Brembs, 2011).”

Whenever a choice or action is justly attributable to an agent, you'll find a secure causal linkage between character, motive and the choice. None of the attributions of agency you bring up elsewhere turn on randomness: “Agency is involved in invoking imagination...in evaluating what comes out of the bag of tricks... in rejecting a possibility or endorsing it as ours, and...in implementing an imagined possibility.” Rather, all these actions involve the deterministic application of selection criteria. Although chance inserted into otherwise deterministic processes might in some cases confer behavioral advantages by increasing the number and type of options considered, it can't increase your *responsibility* for the selection among them, which is what libertarians think is necessary to make us *really* responsible and deserving.

You say: "And even if we do not cultivate one of the best selves that lay within our innate potential, that was a choice too, because we could have done otherwise and therefore played a role in our not turning out otherwise."

I take it you mean that, given the exact conditions in an actual situation, a different choice could have arisen, where that choice was unambiguously the agent's, not a function of chance. But chance is the only thing that could make it the case that the exact conditions produced a different choice, whether of self-formation or setting future parameters for later choice options. The causal regress has been broken, but the alternative that could have resulted would not have been determined by the agent's reasons, character, or any other stable characteristic by which we ordinarily individuate persons. So I don't see how indeterminism makes us more responsible and deserving, for instance of retributive punishment, which is why, apart from getting the facts straight about human agency, all this matters.

Eddy, in the grand scheme of things, Homo sapiens brains and chimpanzee brains are remarkably similar. It seems that the onus should be on compatibilists to explain just why the Homo sapiens brain can be held MR while the chimpanzee brain cannot.

And if "second order desires", "true selves", or other similarly hazy psychological mumbo jumbo are the proposed differences, I remain unimpressed.

It really is a warning sign of many compatibilist accounts that they often operate on, and employ the terminology of, pet psychological models. On what grounds do compatibilists feel justified in throwing around different-order desires, true selves, or "character"? These are notoriously fuzzy concepts to begin with.

Brent

“Homo sapiens brains and chimpanzee brains are remarkably similar.”

Not sure what you mean here exactly. One could argue that they are remarkably different. Chimps lack language, lack theory of mind, and appear to lack abstraction. Read the review:

Penn DC, Holyoak KJ, Povinelli DJ (2008). Darwin's mistake: explaining the discontinuity between human and nonhuman minds. Behav Brain Sci 31(2):109-130.

And you will be amazed at just how vastly different our brains, minds and behavior are.

I don’t think the idea of "second order desires" is “hazy psychological mumbo jumbo.” The idea of hot (emotional, desire-driven) and cool (rational, deliberative) systems came out of psychology long before it was adopted by Frankfurt and recast by him as first- and second-order desires. A good review of the history of these ideas in psychology is here:

http://www.wisebrain.org/media/Papers/MetcalfeMischel1999.pdf

And neuroscience has generally backed up that there are neural circuits subserving emotions (limbic, amygdala), reward (ventral tegmentum, nucleus accumbens), pain (the pain matrix), various desires, and then primarily prefrontal circuits involved in regulating the former set.

Thanks for your kind words, Eddy. Please send me your Sci. Am. piece and the chapter you mentioned when you can. The internal mental workspace that we have is truly remarkable. My group has been trying to figure out how it works. It seems that representations, and operations over representations, involve analog, body-like operations, like mental rotation, which takes longer the more you have to rotate a shape. There seems to be no limit to what can be imagined in this internal virtual reality. We can imagine any shape we want, say, a man with a lion's head, and then go carve it, making it real and causal in the world. I became interested in looking at the fossil and artifact record to see when the earliest such chimeras appeared, since this would be an indication of when our brains changed so as to be able to imagine such things. Here is an example from 13k years ago:

http://en.wikipedia.org/wiki/The_Sorcerer_%28cave_art%29

and this one is from 32k years ago, from what is now southern Germany:

http://archaeologynewsnetwork.blogspot.com/2011/04/missing-parts-of-32000-years-old.html#.VKSdz8kfjJU

Brent

“Complexity does not change the underlying laws of nature. Though we are understandably awed, and often pleasantly surprised, by the things Homo sapiens brains can cook up, there is no meaningful difference between them and any other mechanistic system. Input-Output. That's it. Haven't we debunked this sort of anthropocentrism?”

A meat grinder is an input-output system, as is a factory, as is a computer. But they are radically different in the operations they carry out and in the kinds of things they operate on. Moreover, it would be a mistake to think the brain functions like any of these systems. For example, the brain is manifestly not at all like a computer. There is no software/hardware distinction in the brain, there is no consciousness in a computer, and computers do not rewire themselves daily, let alone on the millisecond timescale of the brain. To say there is no meaningful difference between input-output systems is to ignore all the differences between them.

A brain is unlike any known object in the world, so there will be no good metaphors to help us deeply understand it. The metaphor of an input-output machine that seems to govern your thinking is a bad metaphor. Be careful of misleading metaphors. They are sometimes useful in that they can help us understand something new in terms of something we already understand. But they can also lead us astray if we don't notice all the ways they fail. If someone said a neuron functions just like a toilet, because there is a threshold, a point of no return, and a long pipe down which things are sent, we would want to correct their metaphorical overreach. Thinking of a neuron as a little input-output toilet and saying, 'see, no difference!' would obviously lead us to some pretty serious misconceptions about neurons.

Finally, 99% of the operations in a sensory input area like V1 are internal to V1, not driven by the input. A more accurate view of perceptual processing is that it is constrained by input, not that it is simply a transformation of detected input. The information is not simply received and transformed; information is added and constructed on the basis of many assumptions and Gestalt or mid-level operations. Perception is more like veridical hallucination than anything a simple input-output system does. Moreover, there are many cases, like dreaming and hallucination, where there is no input whatsoever. Think of your conscious experience as veridical hallucination rather than as input detected by something like a camera.

Not sure what you mean by “anthropocentrism.” It is an empirical question how different our brains/minds are from those of other animals. The evidence is overwhelming that our brains are fundamentally different from those of a chimp or dog or any other animal. Of course, we also have a great deal in common with the neural processing of other animals. For example, a chimp may well have visual experience that is similar to ours. But needless to say, no chimp brain would ever have come up with an airplane or a work of art or a piece of music or even a true sentence.

Tom,
You raised many points, and I want to answer them in detail. I appreciate your enduring skepticism; it helps clarify what is at stake. From my point of view, you are still conceiving of decisions as made at an instant, and not yet conceiving of them as cybernetic, durational processes. This difference makes all the difference for MR, as I hope to explain below.

“I remain unconvinced, however, that libertarian freedom adds to the control and responsibility which exist for deterministic agents”

As you know, I do not believe that we are deterministic agents. Being an incompatibilist about FW and MR on the basis of the exclusion argument that I discussed a few posts back, I would argue that a deterministic agent has zero agent-level control over his choices and actions, and zero agent-level responsibility for their consequences, because SUFFICIENT causes of his actions and thoughts existed before he was even born. How can someone be held responsible for outcomes that were sufficiently caused 14 billion years ago, at the time of the big bang? Given that there is zero MR and zero strong FW if determinism is true, assuming the exclusion argument holds, libertarian freedom “adds to the control and responsibility which exist for deterministic agents” if it adds any freedom or responsibility at all. I know you will disagree that MR and FW are ruled out by the exclusion argument, but if you think that, please explain to me how you get around that argument. If you have not read my post on that argument, please do, and let me know where the logic fails and how compatibilism can be saved.

We agree that “harnessing randomness might add to the range of imagined possibilities for a choice.” But we disagree “that harnessing is [solely] a deterministic function of the extent it had selective advantage for us.” And we disagree about what you wrote here:

“Whenever a choice or action is justly attributable to an agent, you'll find a secure causal linkage between character, motive and the choice. None of the attributions of agency you bring up elsewhere turn on randomness: “Agency is involved in invoking imagination...in evaluating what comes out of the bag of tricks... in rejecting a possibility or endorsing it as ours, and...in implementing an imagined possibility.” Rather, all these actions involve the deterministic application of selection criteria.”

First, here is where I think you are right, if we think of a choice as happening at an instant: when some possibility is evaluated by existing criteria, it either passes the threshold for a match, or for adequate fulfillment of the criteria, or it does not. So if we think of a decision as made solely at that instant, as you seem to, when the threshold is crossed, then the harnessing of randomness is a deterministic function of the extent to which the criteria preset by preceding information processing were met. This is true at the level of single neurons, in the sense that they either fire or not, depending on whether the potential at the axon hillock surpasses the threshold for firing. It is also true of harnessing randomness at the higher level of human memory recall. If I am trying to think of some animal that can fly, a bat might pass the threshold first, or a dragonfly might, but the criterion ‘is an animal that can fly’ will either be met or not. Your point holds, too, at the much higher information-processing level at which randomness is harnessed when human imagination is invoked to generate possibilities for consideration. Say a criterion has been set that ‘I need to conceive of some new chimera that no one is likely to have thought of before.’ If the first thing that comes along (through the random recombination of high-level conceptual representations that underlies the imaginative generation of possibilities constrained by preset specs) is an image or thought of a bat with a dragonfly’s head, the threshold will be met and I will probably go with that, unless it fails to meet other specs that had not constrained possibility generation. Say I realize that the people I am trying to imagine a new chimera for are afraid of insects. I will nix the dragonfly-head idea. Then new specs can lead to the generation of a new chimera that does not involve insects.
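To make this decision-at-an-instant picture concrete, here is a minimal Python sketch (the function, criteria, and threshold are my own illustrative assumptions, not a model anyone has proposed): a candidate either satisfies enough of the preset criteria to cross the threshold, or it does not, and nothing further happens after the crossing.

```python
# Hypothetical sketch of threshold-crossing criterial selection at an instant.

def crosses_threshold(candidate, criteria, threshold):
    """Tally how many preset criteria the candidate satisfies; 'fire'
    exactly when the tally reaches threshold, loosely analogous to a
    neuron firing when potential at the axon hillock passes threshold."""
    score = sum(1 for criterion in criteria if criterion(candidate))
    return score >= threshold

# Criterion: 'is an animal that can fly'. The first candidate to cross wins.
criteria = [lambda c: c["is_animal"], lambda c: c["can_fly"]]
bat = {"is_animal": True, "can_fly": True}
penguin = {"is_animal": True, "can_fly": False}

print(crosses_threshold(bat, criteria, threshold=2))      # True: passes at this instant
print(crosses_threshold(penguin, criteria, threshold=2))  # False: criteria unmet
```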

But again, and this is where I think you are not right, what is missing from your decision-at-a-threshold-crossing-at-an-instant picture is the cybernetic, durational aspect of decision-making. A decision does not involve only the reflexive passing of a threshold followed by a ballistic commitment to that outcome, because once something has passed threshold, it gives rise to a cybernetic comparison between the present state and the goal. If the error is too large, the candidate will be rejected, the specs constraining future (in part random) possibility generation can be changed, and the whole imaginative cycle of possibility generation and evaluation begins again. Next time the chimera imagined might be a bat with the head of an iguana. And if that is rejected because of an error signal relative to highest-level goal knowledge, and to knowledge that contextualizes the appropriateness of an outcome, the process can begin again. Say iguanas are considered taboo in the religion of one of the children in the classroom we are trying to think of a chimera for, as part of a school art project on chimeras. Then we can reject that option, generate new possibilities under new constraints, and so forth. This can happen in an open-ended way that eventually evolves toward a good solution.

This is analogous to the evolutionary process imagined by Darwin, where multiple possibilities can be generated (phenotypes that differ because of amplified microscopic randomness in the form of mutations) and a second stage selects from among these possibilities, which then seed the next generation and so on. This process affords creativity without a creator, in that animals will come to be optimally or at least adequately adapted to their niche. In the case of the cybernetic cycle I am discussing, randomness generates various ideas, which are evaluated on the basis of criteria set by the agent. The agent in effect allows ideas to evolve toward an adequate solution to the problem at hand, say, imagining a chimera.
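To show how this differs from the single-threshold picture, here is a minimal Python sketch of the two-stage, generate-and-test cycle just described (the function names, the toy numeric goal, and the spec-revision rule are all my own illustrative assumptions): candidates are generated partly by chance under the current specs, compared to the goal, and either endorsed or rejected, with rejection able to revise the specs that constrain the next round of generation.

```python
import random

def cybernetic_choice(generate, error, revise_specs, specs, tolerance, max_cycles=1000):
    """Durational decision: loop over generation (partly chancy), evaluation
    against the goal, and revision of the constraints on future generation,
    until an option is good enough to endorse."""
    for _ in range(max_cycles):
        candidate = generate(specs)             # chancy generation under current specs
        if error(candidate) <= tolerance:       # compare present state to the goal
            return candidate                    # endorse and commit
        specs = revise_specs(specs, candidate)  # reject; tighten specs and cycle again
    return None                                 # no acceptable option within the budget

# Toy usage: let guesses 'evolve' toward a goal of 42 by shrinking the search bounds.
goal = 42
chosen = cybernetic_choice(
    generate=lambda bounds: random.randint(*bounds),
    error=lambda c: abs(c - goal),
    revise_specs=lambda bounds, c: (c, bounds[1]) if c < goal else (bounds[0], c),
    specs=(0, 100),
    tolerance=0,
)
print(chosen)  # 42: reached through repeated generation, evaluation, and spec revision
```

The point of the sketch is structural: responsibility would attach not to a single threshold crossing but to the many evaluations, rejections, and spec revisions performed along the way.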

“Although chance inserted into otherwise deterministic processes might in some cases confer behavioral advantages by increasing the number and type of options considered, it can't increase your *responsibility* for the selection among them, which is what libertarians think is necessary to make us *really* responsible and deserving.”

Let’s assume that you are right and I am wrong, and that there can be MR even under determinism. How does the cybernetic conception of decision-making that I am advocating increase responsibility for the selection process? There is constant evaluation, correction, and, in the end, endorsement of a solution. There is enhanced responsibility because a choice is not made once, at an instant, as under your account, but made many times, as an evolving choice is again and again compared to criteria at multiple levels. The highest-level comparison occurs at the level of consciousness, where possibly unconsciously generated possibilities are evaluated in light of everything a person knows and stands for. There are multiple opportunities to stop the process, to reject a bad option, or to fine-tune the specs for generating future possibilities. Once a solution has been optimized under this cybernetic process of deliberation, the agent commits to it and endorses it as adequately representative of what he wants or intends. Under the deterministic, instantaneous picture of decision-making you seem to advocate, none of this occurs. A decision, under your picture, is a sort of reflex that happens once a threshold is passed. Under your picture, there are no subsequent stages that allow the decider to back out, veto, or modify his or her decision, or the criteria under which decisions are made. So under your picture, I would say there is less responsibility, just as I would have less responsibility for a reflex action (say, kicking someone because they hit my knee and triggered that reflex) than for a considered action that follows long deliberation over pros and cons, as well as consideration of the consequences of kicking someone.

“You say: 'And even if we do not cultivate one of the best selves that lay within our innate potential, that was a choice too, because we could have done otherwise and therefore played a role in our not turning out otherwise.' I take it you mean that, given the exact conditions in an actual situation, a different choice could have arisen, where that choice was unambiguously the agent's, not a function of chance. But chance is the only thing that could make it the case that the exact conditions produced a different choice, whether of self-formation or setting future parameters for later choice options.”

Again, Tom, you are conceiving of a decision, in this case a self-forming decision, as happening at an instant. If that were the case, then assuming an agent is identical and has an identical history up to the instant of choice, the only difference that could lead to choosing this versus that self-forming action would be chance. But a self-forming decision is not something that happens at an instant. It evolves cybernetically, toward a perhaps shifting goal, over perhaps long durations. The agent has many opportunities to correct his criteria, generate new options, and endorse or reject options. There is more responsibility here because one bears more responsibility for a thousand decisions made along the way and endorsed as one’s own than for a single reflexive decision made at an instant just by chance.

“The causal regress has been broken, but the alternative that could have resulted would not have been determined by the agent's reasons, character, or any other stable characteristic by which we ordinarily individuate persons. So I don't see how indeterminism makes us more responsible and deserving, for instance of retributive punishment, which is why, apart from getting the facts straight about human agency, all this matters.”

I think this is wrong because you are ignoring the multiple comparisons, chances for changing one’s mind, changing the grounds of future decisions, and ultimate endorsement of a decision as one’s own, fully reflective of one’s commitments.

About retributive punishment: that is an entirely different kettle of fish. I can imagine LFW accounts that either support or reject retributive punishment. Personally, I think retribution is not a sound reason for punishment. Removing criminals from society to prevent harm to others, reforming them through character-reforming training, and providing means for achieving penance are all reasons for punishment that have nothing to do with revenge and that a libertarian could endorse.

Finally, of course, a central difference between a deterministic and a libertarian account is that, under a libertarian account, decisions could really have turned out differently than they did, whereas under a compatibilist (deterministic) account they could not.


Thanks Peter for your detailed reply here, and again for your indefatigable posting and follow-up overall!

I don’t in the least dispute the existence of a durational, multi-step, recursive decision-making process with all the bells and whistles that make us such sophisticated choosers. I’m only questioning whether randomness, should it play a role in that process, adds to what a deterministic process accomplishes in a way that makes the agent more in control and more responsible. In your descriptions of that process, you don’t advert to randomness at all – it all sounds pretty algorithmic, albeit fiendishly complex. A deterministic agent would gain more (local, deliberative) control to the extent it instantiates the processes you describe, instead of acting on reflex.

You say that if there are sufficient causes for a current decision contained in a causal regress that stretches back to the Big Bang, then the agent has zero control and responsibility. This is to say that local deterministic control doesn’t exist, or if it does, it doesn’t count toward responsibility. But your account appeals to that kind of control all the time, for instance in making selections between options (either at an instant or reiteratively) or in setting criteria for later selections. It’s only agent-level control because chance *doesn’t* play a significant role. So I don’t see how chance makes us *really* in control, or *really* responsible, either in the long or short term.

You ask “How can someone be held responsible for outcomes that were sufficiently caused 14 billion years ago at the time of the big bang?” Well, we can and must hold each other responsible as a means of guiding goodness and reinforcing moral norms, even if it turns out we’re not *ultimately* responsible, as on the compatibilist view. That all this might be deterministic doesn’t undercut its value or necessity for us. Nihilism doesn't follow from determinism, or lack of LFW.

As for the exclusion argument, I’ll look again at your post on that. My psycho-physical parallelism take on mental causation puts the worry about epiphenomenalism to rest since I see the phenomenal and physical as causally non-interacting domains, each with their own true (enough) causal story. See the first section of my chapter in Gregg Caruso’s book, http://www.naturalism.org/Experautonomy.htm#defuse

Lastly, I’m glad you don’t endorse retribution. As far as I can tell, the main reason LFW is worth wanting on your view is that things could have turned out differently, not that agents bear ultimate responsibility for self-formation (an impossibility under determinism) which then makes them blameworthy and deserving of punishment for who they’ve become, as distinct from punishment for consequentialist reasons such as deterrence.

Tom, thanks for your continued skepticism. You are a worthy intellectual foil!

“You say that if there are sufficient causes for a current decision contained in a causal regress that stretches back to the Big Bang, then the agent has zero control and responsibility. This is to say that local deterministic control doesn’t exist, or if it does, it doesn’t count toward responsibility.”

Yes, I have argued, on the basis of the exclusion argument, that if determinism is the case there can be no downward causation, which implies that there can be no downward mental causation, no (mental agent-level) free will, and no (mental agent-level) moral responsibility. (Please read that post and explain to me why you think the exclusion argument does not exclude compatibilism.) Assuming Kim’s argument is correct, then under determinism an agent, meaning a causer at the level of mental events, cannot be in control or morally responsible. Under determinism, all events were sufficiently caused at the microscopic scale before the agent was even born. If it can be shown that information can be downwardly causal if indeterminism is true, however, and certain other specific facts about the nervous system are also true, then mental events can be causal and agentic, and there can be FW and MR. So I think the introduction of randomness buys us the very possibility of FW and MR, whereas determinism rules it out. So my answer to your question:

“I’m only questioning whether randomness, should it play a role in that process, adds to what a deterministic process accomplishes in a way that makes the agent more in control and more responsible.”

is a resounding yes, insofar as indeterminism allows the agent to have FW and MR at all, and determinism rules them out.

“In your descriptions of that process, you don’t advert to randomness at all – it all sounds pretty algorithmic…. A deterministic agent would gain more (local, deliberative) control to the extent it instantiates the processes you describe, instead of acting on reflex.”

This may be a bit of a tangent, but I must wholly disagree that I am talking about something algorithmic. By ‘a deterministic process’ I assume you mean an algorithmic process. What I am talking about, criterial decision-making, is not algorithmic, whether at the neuronal level or at the level of informational causation in the imaginative-deliberative cybernetic cycle. In my book I argued that it might be tempting to think of a neuronal criterion as a physically realized “if-then” statement, namely: if such-and-such conditions are met by input, then fire. But the criterial if-then relationship realized in a neuron is fundamentally different from an if-then statement coded in software and implemented on any present-day computer. To understand why neurons and computers are fundamentally different, we must bear in mind that modern computers truly are algorithmic, whereas the brain and its neurons are not.
What defines an algorithm? An algorithm is a series of decisions, operations, or instructions that are carried out sequentially, one at a time; when one instruction is finished, the next step in the algorithm is executed, its output goes to the next step, and so on deterministically. In a computer, this series is carried out in a sequential processor known as a central processing unit (CPU) or a core. A “thread” on a computer is one such linear sequence of decisions. Even multithread processing on multiple cores in a single computer, or on a computing grid of many computers, is algorithmic, because each thread is still a linear sequence of instructions executed one at a time. Multithread or grid processing might be parallel in the sense that subroutines run concurrently, but each processor still carries out sequential algorithmic operations. Despite the speed of CPUs, this algorithmic parallelism can at best mimic the nonalgorithmic parallelism found in the brain, where each neuron is essentially an independent but slow processor.

Several attributes distinguish algorithmic processing, as done by single- or multithread CPUs (or their abstraction, the universal Turing machine), from nonalgorithmic processing, as done by neurons: (1) an algorithmic step handles one input at a time, whereas a neuron or other nonalgorithmic, parallel processor can handle multiple inputs at a time; (2) an algorithmic step sends output to a single subsequent step, whereas a neuron can send output to many other neuronal processors simultaneously; and (3) because an algorithmic step operates on only a single input, it is by definition not an instance of criterial causal processing, which must be able to react to and equate multiple types of input. Whereas an if-then statement in a computer program is one step in a sequential algorithm and not an instance of criterial causation, biological neural networks are not algorithmic and do enact criterial causation. Whereas a step in an algorithm converts a single input to a single output, a parallel processor such as a neuron can be thought of as transforming a vector of inputs (weighted by synaptic strengths) into a vector of outputs (action potentials to other neurons). Individual neurons do not carry out algorithmic operations sequentially, on one input at a time.

Criterial causation realized among neurons involves the operation of criterial satisfaction and a threshold that determines to what extent the criteria have been met. The essence of neuronal processing may involve chained sequences of criterial pattern-matching on the perceptual side and pattern execution on the motoric side—nothing like numerical or algorithmic computation. If a spatiotemporal pattern of input matches the criteria, the neuron will be driven above threshold and will tell other, very specific neurons that its criteria—namely, the pattern to which it responds—have been satisfied, by sending them an action-potential-modulated signal. At no stage need anything be computed using an algorithm; there is nothing like software or a computer program, and nothing is represented explicitly as a number. The brain may be as far from an algorithmic computational device as, say, the digestive system; just because a system processes input does not mean it is carrying out algorithmic computations.
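Here is a minimal Python sketch of that contrast (all names and numbers are my own illustrative assumptions, not anything from the book): an algorithmic step maps one input to one output and hands it to the next step, whereas a neuron-like criterial processor weighs a whole vector of inputs at once, tests whether the pattern satisfies its criteria, and, if so, signals many downstream targets simultaneously.

```python
# Illustrative contrast only; a real neuron is vastly more complex.

def algorithmic_step(x):
    """One step of a sequential algorithm: single input, one if-then
    test, single output passed to the next step in the sequence."""
    return x + 1 if x > 0 else 0

def criterial_neuron(inputs, weights, threshold, n_targets=5):
    """A neuron-like parallel processor: it transforms a vector of inputs
    (weighted by synaptic strength) into output, firing exactly when the
    input pattern satisfies its criteria (drive crosses threshold), and
    fanning the same signal out to many downstream neurons at once."""
    drive = sum(w * x for w, x in zip(weights, inputs))
    fired = drive >= threshold
    return [fired] * n_targets  # fan-out to many targets, not one next step

print(algorithmic_step(3))                                # 4: one input, one output
print(criterial_neuron([1, 0, 1], [0.5, 0.9, 0.6], 1.0))  # fires to all five targets
```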


“But your account appeals to that kind of control all the time, for instance in making selections between options (either at an instant or reiteratively) or in setting criteria for later selections. It’s only agent-level control because chance *doesn’t* play a significant role. So I don’t see how chance makes us *really* in control, or *really* responsible, either in the long or short term.”

Chance is central, on my view, in permitting what we call ‘imagination’ in ordinary language to be truly creative, generative, and novel, in the sense of not being sufficiently caused by past processing. Imagination involves chancy recombinations of high-level representations, or operations on representations, from within the agent’s own brain/mind. The agent can then endorse an imagined outcome, which is endorsable as his or her own because it involves recombinations of the agent’s own high-level representations and operations. Chance plays an indispensable role in generating options de novo from the agent’s own repertoire of representations in memory, using the agent’s own mental operators or capabilities. Yes, agency is involved in endorsing an imagined option. But it is an option that was generated in a chancy way among parts of the agent’s own mind, so it was the agent’s idea, not anyone else's, and not some utterly random idea that had nothing to do with the agent's mind or character. So there is enhanced agent-level control here because chance *does* play a significant role in expanding the agent’s generativity, creativity, and novelty of thought and consequent action, relative to the deterministic case, where there is only one possible option, sufficiently caused before the agent even existed. In a deterministic universe there can be no de novo generation of representations to act upon. The agent becomes responsible for an imagined outcome, goal, or commitment when he endorses it as his own and decides to try to realize it in reality through concrete enactment. It is not that chance makes us more responsible because we were able to do otherwise in the world; it makes us more responsible because we were able to do otherwise in our minds before we committed to enacting one option over all the possible others. There is no capacity to do otherwise in either sense (in the world or in imagination) for a deterministic agent. But then again, I would reject that there is MR for a deterministic agent on the other grounds discussed at the beginning of this reply.

“As far as I can tell, the main reason LFW is worth wanting on your view is that things could have turned out differently, not that agents bear ultimate responsibility for self-formation (an impossibility under determinism) which then makes them blameworthy and deserving of punishment for who they’ve become, as distinct from punishment for consequentialist reasons such as deterrence.”

I think we can separate questions of responsibility for who we have become from the question of whether our past voluntary decisions to attain this kind of self versus that kind of self, and our efforts to realize our envisioned self, have played a necessary causal role in who we have become. We do not have to be 100 percent self-forming. In fact, elsewhere on this thread I wrote that even a God could not be that. But even if we are only somewhat self-causing, we have somewhat agentically caused being the being we are now. If that is true, I would say that counts as a non-zero degree of ultimate responsibility.

