Seeing as the month is rapidly drawing to a close, this will unfortunately have to be my final substantive post as Guest Author. In my previous post, I explained how I believe my Libertarian Compatibilist theory of free will addresses the Consequence Argument against free will. In today's post, I would like to briefly sketch how I believe my theory addresses several other notable arguments against free will: (1) the Mind Argument, (2) the Luck Argument, (3) the Assimilation Argument, (4) the Rollback Argument, and (5) the Disappearing Agent Argument.
1. Libertarian Compatibilism and the Mind Argument
The Mind Argument, in a nutshell, holds that free will does not exist because our actions are caused by combinations of beliefs and desires, and it is not up to us whether the beliefs and desires that cause our actions occur.
Because it provides a nice, concise overview, I'd like to quote Seth Shabo's presentation of the argument. Shabo writes (pp. 293-4):
To set up the relevant version of the Mind Argument, van Inwagen presents this scenario:
Let us consider the case of a hardened thief who, as our story begins, is in the act of lifting the lid of the poor-box in a little country church. He sneers and curses when he sees what a pathetically small sum it contains. Still, business is business: he reaches for the money. Suddenly there flashes before his mind's eye a picture of the face of his dying mother and he remembers the promise he made to her on her deathbed to always be honest and upright... [So] he thinks the matter over carefully and decides not to take the money. (1983, pp. 127-128)
Following van Inwagen, let 'DB' stand for the occurrence of the thief's desire to honor his promise to his mother, together with his belief that repenting and refraining from robbing the poor-box is the way to do this. And let 'R' stand for the thief's act of repenting and deciding to leave empty-handed...
On the assumption that DB alone is causally relevant to the occurrence of R, [van Inwagen] argues that N(R occurred) [i.e. the thief has no choice about R] can be derived from two premises. Thus, we have (p. 147):
(M1) R was caused but not determined by DB, and nothing besides DB was causally relevant to the occurrence of R. [assumption for conditional proof]
(M2) N(DB occurred). [premise]
(M3) If M1 is true, then N(DB occurred ⊃ R occurred). [premise]
(M4) N(R occurred). [from M1–M3, by Rule Beta and sentential logic]
(M5) If R was caused but not determined by DB, and nothing besides DB was causally relevant to R's occurring, the thief had no choice about whether R occurred. [from M1–M4, by conditional proof]
Here is a sketch of how I want to respond to the Mind Argument. (Here, 'Rule Beta' is van Inwagen's familiar transfer principle: from N(p) and N(p ⊃ q), infer N(q).) I believe that my Libertarian Compatibilist theory of free will entails that (M1) and (M2) are both false.
First, (M1) is false if Libertarian Compatibilism is true because, on this theory, our beliefs and desires are not the only factors causally relevant to whether R occurs. According to Libertarian Compatibilism, the beliefs and desires we experience are -- much as Kant said -- experiences of physical information outside of us. Although our beliefs and desires incline us to act in various ways, on Libertarian Compatibilism we nevertheless have a brute capacity (outside of the physical world of information) to reflect on our beliefs and desires and decide whether to act upon them.
Second, (M2) is false on Libertarian Compatibilism because, on this theory, the beliefs and desires we have now are partly the result of libertarian choices we made in the past (the thief is tempted to steal now in part because he made lots of bad libertarian decisions in the past, and his memory of his promise to his mother flashes before his eyes now because of past libertarian decisions that developed -- at least to some small extent -- feelings of conscience).
2. Libertarian Compatibilism and the Luck Argument
Let us now consider the Luck Argument. As Christopher Evan Franklin (p. 201) puts it, we can summarize the Luck Argument as follows:
It is a general assumption of libertarianism that at least some free actions must be undetermined...[and] the core of this problem can be characterized by the following two claims:
(i) If an action is undetermined, then it is a matter of luck.
(ii) If an action is a matter of luck, then it is not free.
Here is a sketch of how I want to respond to this argument. I wish to contest its first assumption: that libertarianism identifies free actions with undetermined ones. On my Libertarian Compatibilist theory, our actions are undetermined by physical law -- but they are not undetermined simpliciter. Our actions, rather, are determined by us, where we are understood as brute, noumenal Kantian pure practical wills. On my account (much as on Kant's own account), we possess the brute capacity to hold principles of action (i.e. maxims) before our conscious minds, and will ourselves to act upon them. There is determination here, and there's even a sense in which it is determination by laws. But, as Kant himself put it, the laws here are -- in a brute, noumenal way -- laws of our own willing.
3. Libertarian Compatibilism and the Assimilation Argument
In two recent papers, Seth Shabo argues that the Luck and Mind Arguments are unsound, but then defends a new argument against libertarian free will: the Assimilation Argument. The Assimilation Argument, in a nutshell, holds that libertarians cannot "explain how ostensible exercises of free will are relevantly different from randomized outcomes that nobody would count as exercises of free will." (Shabo 2013, p. 301)
Shabo develops his argument by first having us imagine a subatomic particle with a .5 probability of swerving in one direction rather than another. Clearly, Shabo contends, this is not an instance of free will: it is a randomized outcome. Now, however, move the subatomic particle into someone's (Alice's) brain, where Alice makes what a libertarian wishes to call a free decision, one not determined by physical law or mere randomness. Shabo contends, in essence, that libertarians can point to no reason to think that this is ultimately anything more than the kind of randomized, unfree process that occurs with subatomic particles outside of brains. So, Shabo says, libertarians about free will "cannot plausibly distinguish supposed exercises of free will from random outcomes that nobody would count as exercises of free will." (Shabo 2014, p. 152)
Here is a sketch of how I want to reply to this argument. My Libertarian Compatibilist theory of free will entails that we should observe violations of the normal quantum wave-function in human brains that do not appear to be random outcomes, but rather outcomes reflective of non-random exercises of libertarian free will in a higher reference-frame. According to Libertarian Compatibilism, each person's brain should instantiate its own unique violations of the Schrödinger equation. My brain will instantiate a unique "Marcus Arvan wave-function", Seth Shabo's brain will instantiate a unique "Seth Shabo wave-function", etc. -- and not only that. As I explained here, just as people playing an online simulation can dramatically change their playing strategies mid-game -- giving observers in the game the impression that their character is suddenly obeying new laws of behavior -- Libertarian Compatibilism predicts that when people in our world adopt very new patterns of behavior, their brain's personal wave-function should change...once again reflecting not randomness, but libertarian choice in a higher reference-frame.
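To give a feel for how one might, in principle, tell patterned deviation apart from mere chance, here is a minimal sketch in Python. It is purely my own illustration, not part of the P2P model or anything Shabo discusses: the 5% switching rate and the choice of a Wald-Wolfowitz runs test are assumptions made just to render the contrast vivid.

```python
import math
import random

def runs_test_z(bits):
    """Wald-Wolfowitz runs test: z-score for the number of runs in a
    binary sequence. |z| well above 2 suggests the sequence is not
    i.i.d. random noise."""
    n1, n0 = bits.count(1), bits.count(0)
    n = n1 + n0
    runs = 1 + sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    mu = 2 * n1 * n0 / n + 1             # expected runs under randomness
    var = (mu - 1) * (mu - 2) / (n - 1)  # variance under randomness
    return (runs - mu) / math.sqrt(var)

random.seed(0)
# "Mere randomness": independent fair-coin outcomes.
noise = [random.randint(0, 1) for _ in range(10_000)]
# "Patterned deviation": a streaky sequence standing in for a stable,
# agent-like signature (hypothetical 5% chance of switching per step).
patterned, state = [], 0
for _ in range(10_000):
    if random.random() < 0.05:
        state = 1 - state
    patterned.append(state)

print(f"noise:     z = {runs_test_z(noise):+.2f}")      # near 0
print(f"patterned: z = {runs_test_z(patterned):+.2f}")  # far below 0
```

The streaky sequence produces far fewer runs than chance predicts, so its z-score comes out deeply negative, while genuine coin-flip noise stays near zero. Something like this statistical daylight between randomness and pattern is what my reply to the Assimilation Argument appeals to.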
4. Libertarian Compatibilism and the Rollback Argument
The Rollback Argument is an argument by van Inwagen similar in some ways to Shabo's Assimilation Argument. In brief, van Inwagen suggests that if we were to roll back an indeterministic universe to its beginning a large number of times, we would observe patterns of actions that appear merely probabilistic, not reflective of libertarian freedom. So, for instance, suppose an indeterministic law of physics entails that Alice has a .5 probability of choosing A, and a .5 probability of choosing B. If this were the case and the Universe were rewound and replayed from the Big Bang 32,054 times, Alice would choose A in roughly half of the replayings (around 16,027) and B in the rest -- all of which, again, looks purely probabilistic, not like libertarian freedom.
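Before sketching my reply, a minimal simulation may help make the pattern vivid. This is just a toy sketch of the scenario described above (the replay count and the .5 probability come from the example; the seed is arbitrary), showing the stable statistical signature an indeterministic law would leave across replays:

```python
import random

random.seed(42)
REPLAYS = 32_054
P_CHOOSE_A = 0.5  # the probability the indeterministic law assigns to choice A

# Replay the universe from the Big Bang REPLAYS times; on each replay,
# Alice's "choice" is drawn from the same fixed probabilistic law.
a_count = sum(random.random() < P_CHOOSE_A for _ in range(REPLAYS))

print(f"A chosen in {a_count} of {REPLAYS} replays "
      f"({a_count / REPLAYS:.3f} of the time)")
# Whatever seed is used, the ratio hovers very near 0.500: a stable,
# law-given frequency that looks like chance, not agency.
```

On Libertarian Compatibilism, as I explain next, no such stable law-given frequency is guaranteed in the first place.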
It should be fairly obvious at this point what my sketched reply to this argument will look like. According to my Libertarian Compatibilist theory of free will, for any given number of times we roll back the Universe, we will not be able to predict, even in principle, how many times Alice will choose A or B. This is because Alice's decisions are not determined by any law of nature in the universe, even indeterministic ones. They are settled by Alice's consciousness in a higher reference-frame, in a manner such that, each time the universe is "rolled back", it is completely up to Alice -- and not to any law, probabilistic or otherwise -- to decide what she does next.
5. Libertarian Compatibilism and the Disappearing Agent Argument
My responses to all of these arguments (including my earlier response to the Consequence Argument) bring us, however, to one final argument against libertarian free will: the Disappearing Agent Argument.
The Disappearing Agent Argument raises, I believe, the most difficult worry of all for libertarianism. Very roughly, the argument is that libertarianism cannot make any sense of how it is agents who make free choices.
To see what the problem is supposed to be, consider how we ordinarily understand intentional action. When we act, we act for reasons. We have beliefs, desires, and intentions. These things seem to comprise our agency. But, according to libertarianism, none of the brain/functional states that comprise our beliefs, desires, or intentions comprise free will. Free will is, according to libertarianism, a brute, unanalysable power X to choose one thing over another. But in that case free will doesn't look like agency at all.
Here is a rough sketch of how I want to respond to this argument. I want to begin by noting how this very problem would appear to observers trapped within an ordinary online videogame (which, again, my P2P Theory of Reality entails our world is functionally identical to). Observers in an online videogame would attribute intentional states to agents in their world (they would ascribe beliefs, desires, and intentions to one another). And yet...some of them -- the libertarians in their midst -- would have the nagging suspicion that the choices agents make in their world are not fully explainable in those terms, but somehow emerge from a kind of libertarian power not explainable in terms of their world's physics. And they would be right. After all, as we well know as the game's users, it is our decisions outside of the game -- in our reference-frame -- which are ultimately responsible for our characters' actions within the game.
Here, I think, is the first thing this shows: what appears to be a "disappearing agent" from one reference-frame (to observers within a simulation, "libertarian free will" will appear to be a brute power, unattached to any coherent agential system) can be the result of complex agency in another reference-frame inaccessible from the first.
Obviously, since on my theory our world is functionally identical to a peer-to-peer simulation, this is what I want to say: that, for all we know, the "disappearing agents" of libertarianism are genuine, complex agents in a higher reference-frame to which we have no direct observational access.
Now, of course, the most natural objection to raise at this point is that this move simply bumps up the free will problem from one reference-frame (ours in this world) to a higher reference-frame. Won't whatever agents exist in the higher reference-frame in turn have to obey psychophysical laws in their frame (much as we appear to in our reference-frame outside of the simulations that we play)?
I raised and briefly addressed this issue in "A New Theory of Free Will", and I'm still inclined to stick by what I wrote there. In essence, I suggested that because we do not have direct observational access to the higher reference-frame that partly comprises our reality, there are two possibilities, and we can probably never have evidence favoring one over the other:
1. Our minds/choices are determined by laws in the higher reference-frame, or
2. Our minds are genuine causa sui: genuine uncaused causes (Cartesian "thinking things") whose agency -- beliefs, desires, intentions -- is self-generated/self-comprised.
Obviously, only (2) is genuine libertarian free will. Alternative (1) is just a kind of nested determinism (determinism in our reference-frame brought about by determinism in a higher reference-frame, etc.). Moreover, alternative (2) is admittedly crazy. It is very hard to wrap one's head around -- or take seriously -- the idea that we are ultimately causa sui, Cartesian souls or some such. And I think it is right to say that this is crazy. As one person recently put it,
"There are thousands of ways to be a compatibilist about free will, but there's only one way to be a libertarian, and it's the crazy way."
Be that as it may, I do not think it is our role as philosophical or scientific inquirers to decide from on high that crazy things aren't true.
First, the more science progresses, the more our world does look absolutely crazy. Quantum mechanics and relativity are both really, really crazy. Second, if my P2P Model of Reality's predictions are verified, we will have real evidence that quantum and relativistic craziness are the result of a far deeper -- and crazier -- kind of craziness: namely, our reality being the functional equivalent of a massive peer-to-peer networked computer simulation.
As philosophical and scientific inquirers, we should follow our evidence where it leads. And, I want to say, if my model's predictions are verified, the most that we could ever have empirical evidence for is the disjunction between alternatives (1) and (2): namely, the proposition that either our actions are determined by laws not of our choosing in a higher reference-frame, or we are genuine, libertarian causa sui in that reference-frame (i.e. self-organized, self-caused causes) whose agency (i.e. conscious beliefs, intentions) is self-organized and self-determined.
In any case, even if some (or many) readers aren't willing to take this crazy disjunction seriously, I would like to suggest that my theory sheds important new light on several things.
First, if the arguments I have sketched are correct, the Consequence, Mind, Luck, Assimilation, Rollback, and Disappearing Agent Arguments can be resolved for our reference-frame (the world we experience) by virtue of phenomena in a higher reference-frame.
Secondly, however, the arguments I have given suggest that the ultimate worry that critics have about libertarianism can never be solved merely at the level of reference-frames. There is, ultimately, only one way to be a true libertarian about free will, and it is indeed "the crazy way."
Is the crazy way too crazy to take seriously, even as a disjunctive possibility? Again, maybe -- but, as I said before, scientific (and philosophical) progress increasingly seems to me to indicate that the world is truly crazy, and thus, that the crazy way might well be true. If my model's predictions are verified, we live in the functional equivalent of a massive peer-to-peer networked simulation. That is crazy...and if the world is that crazy, it might well be even crazier.
Hi Marcus,
I'd like to suggest variants of the Luck Argument and the Rollback Argument:
4. The Rollback Argument:
This is not an objection, but a question:
What if we (or some other, more powerful and knowledgeable agent) were to roll back not only the universe, but all of reality -- physical, non-physical, or whatever there is -- including the higher reference frame?
Would such an agent be able to make accurate probabilistic predictions (with probability 1 in the case of determinism, and less than 1 otherwise)?
In order for libertarianism of the kind you propose to exist (and assuming other objections, like the one I will sketch below, all fail), it seems to me such probabilistic predictions should not even be possible in the higher frame (or even in a frame higher with respect to that one, if there is a yet higher frame, etc.), but I'm not entirely sure of that in light of our brief exchange regarding divine foreknowledge. So, I'd like to ask what your reply to an "All Reference Frames Rollback" argument would be.
2. The Luck Argument:
The usual Luck Argument does not seem to work in my view, because at least one of the premises is [probably] an overgeneralization (so, it's false):
For example, let's say that the states of an intelligent agent (not necessarily human) before it makes a decision causally determine that decision *except* when those states give two or more different courses of action exactly the same expected value (given by the agent's evaluative function and probabilistic assessment; in humans, different parts of the brain may have different evaluative functions, complicating matters a bit), and no other course of action has a greater expected value. In those specific cases, any of the courses with the same maximum expected value may happen, but that is not causally determined -- what is causally determined is that all other courses of action are ruled out.
Is the agent free in those cases?
I'm inclined to say it is (there are other examples), barring other problems not related to determination, and I'm also inclined to say that the choice is not *entirely* a matter of luck - even if underdetermined -, and the part that is a matter of luck is not the important one when it comes to freedom.
So, I think "(i) If an action is undetermined, then it is a matter of luck." is an overgeneralization if it's understood as "entirely a matter of luck", whereas if actions that are partially but not entirely matters of luck are included, then "(ii) If an action is a matter of luck, then it is not free." is an overgeneralization. So, at least one premise is probably false, in my view.
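To make the tie case concrete, here is a minimal sketch in Python (a toy model of my own; the course-of-action labels and values are invented for illustration) of an agent whose decision is causally determined except when expected values tie at the maximum:

```python
import random

def decide(courses, expected_value):
    # The agent's evaluations determine the decision, EXCEPT when two or
    # more courses tie for the maximum expected value: then any of the
    # tied courses may occur. What *is* causally determined is only that
    # every lower-valued course is ruled out.
    values = {c: expected_value(c) for c in courses}
    best = max(values.values())
    tied = [c for c, v in values.items() if v == best]
    return random.choice(tied)  # undetermined only among the tied maxima

# Hypothetical courses of action with a tie at the top:
ev = {"C1": 1.0, "C2": 1.0, "C3": -5.0}.get
print(decide(["C1", "C2", "C3"], ev))  # prints C1 or C2, never C3
```

Everything below the maximum is deterministically ruled out; the residual indetermination is confined to the tie, which is the part I'm suggesting is not the important one for freedom.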
However, I don't think we need these generalizations to make a "luck" argument; it's enough to consider specific cases of indetermination -- of the wrong sort, so to speak (though perhaps this sort of objection would work better in the context of the "disappearing agent" sort of arguments; I would need to modify it, and I'm not sure, but I will give the argument in this context).
Let us say, for example, that the previous states of agents A and B up to decisions D(A) and D(B) (say, S1(A) and S1(B) are all of those states) give some courses of action C(1,A), ..., C(n,A) (and the same for B: C(1,B), etc.) some expected values V(1,A) > V(2,A) > ... > V(n,A), where V(n,A) is a large negative value and V(1,A) is a large positive value (and the same for B) -- if needed, we may even posit that the agents consciously assign numerical values, though this is not required, I think. No other courses of action have been evaluated.
Yet, in spite of that, states S1(A) do not determine (causally or in any way, if there possibly is another way) that C(1,A) will happen. They do not even determine that one of the C(j,A) will happen, for some j. The same goes for agent B. Those previous states might make it probable that C(1,A) (respectively, C(1,B)) will happen, if there is some non-epistemic sense of probability (a matter on which I take no stance), but they do not determine it.
However, as it happens, in the first mental state after S1(A) (say, S2(A)), A intends to implement C(n,A) (if time and mental states are not discrete but continuous, S2(A) will approach S1(A) indefinitely; still, to the extent that we can talk about determined or non-determined states, the point of the scenario is that something will change in a way that goes strongly against the values up to that point), whereas in the respective case for B, in S2(B) she intends to implement a course of action that is not one of the C(j,B), and hadn't been considered before.
In those cases, my assessment is that neither agent made a free choice. In fact, it seems to me that from the description of the events, they made no choice at all - rather, the "choice" happened to them.
That's independent of whether or not there are souls. If humans or other agents have souls, I would still say we are not souls but souls and some other stuff; but regardless of what we are, the previous states I'm considering in S1(A) and S1(B) include any previous states of the soul, if there is one, in any higher reference frame or whatever reference frame there is.
The upshot seems to be: [probably] while we do not need determination by previous states in order to have freedom, we do need causation by previous states, and causation of a specific sort. Lack of the proper sort of causation results in either non-free (or less free, depending on the case) choices, or events that are not choices in the first place.
Posted by: Angra Mainyu | 08/31/2014 at 02:08 PM
Hi Marcus,
I have two questions about the "disappearing agent" argument. Consider the characters in the P2P videogame. They kinda-sorta have beliefs and desires; at the very least, they say things to each other which are naturally interpreted as attributions of beliefs and desires to self and others. But at the same time, we, the players in our higher reference frame, also have beliefs and desires. Would you agree that our beliefs and desires are *more relevant* to explaining the actions of characters in the game than the beliefs and desires (or "beliefs" and "desires") of the characters?
Second, I ask you to say more about the relation between higher-reference-frame beliefs and desires and higher-reference-frame decisions. Do the beliefs and desires constrain the decisions at all? Do they make decisions more probable when the beliefs and desires favor them? Because if not, it's enough to give one a serious case of (Sartrean) vertigo.
Posted by: Paul Torek | 09/01/2014 at 02:41 PM