
Comments


Hi Marcus,

I'd like to suggest variants of the Luck Argument and the Rollback Argument:


4. The Rollback Argument:
This is not an objection, but a question:
What if we (or some other, more powerful and knowledgeable agent) were to roll back not only the universe, but all of reality (physical, non-physical, or whatever there is), including the higher reference frame?
Would such an agent be able to make accurate probabilistic predictions (with probability 1 in the case of determinism, and less than 1 otherwise)?
In order for libertarianism of the kind you propose to exist (and assuming other objections, like the one I will sketch below, all fail), it seems to me that such probabilistic predictions should not even be possible in the higher frame (or in a frame higher than that one, if there is a yet higher frame, and so on), but I'm not entirely sure of that in light of our brief exchange regarding divine foreknowledge. So, I'd like to ask what your reply to an "All Reference Frames Rollback" argument would be.

2. The Luck Argument:

The usual Luck Argument does not seem to work, in my view, because at least one of its premises is probably an overgeneralization (and so false):

For example, let's say that the states of an intelligent agent (not necessarily human) before it makes a decision causally determine that decision, *except* when those states give two or more different courses of action exactly the same expected value (given by the agent's evaluative function and probabilistic assessment; in humans, different parts of the brain may have different evaluative functions, complicating matters a bit), and no other course of action has a greater expected value. In those specific cases, any of the courses of action with the same maximum expected value may happen, but which one happens is not causally determined; what is causally determined is that all other courses of action are ruled out.
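To make the structure of that case concrete, here is a minimal Python sketch of my own (the function decide, the option names, and the numbers are purely illustrative, not anything from the post): a decision rule that is deterministic except when two or more options tie for the maximum expected value.

import random

def decide(options, expected_value):
    # Deterministic whenever a single option has the strictly highest
    # expected value; "undetermined" (modeled here with random.choice)
    # only when two or more options tie for the maximum.
    best = max(expected_value(o) for o in options)
    top = [o for o in options if expected_value(o) == best]
    if len(top) == 1:
        return top[0]             # fixed by the agent's evaluations
    return random.choice(top)     # any maximal option may happen

# Example: two options tie at the top; the third is ruled out.
values = {"A": 1.0, "B": 1.0, "C": -5.0}
print(decide(list(values), values.get))   # prints "A" or "B", never "C"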

Is the agent free in those cases?

I'm inclined to say it is (there are other examples), barring other problems not related to determination. I'm also inclined to say that the choice is not *entirely* a matter of luck, even if it is underdetermined, and that the part that is a matter of luck is not the important one when it comes to freedom.

So, I think premise (i), "If an action is undetermined, then it is a matter of luck," is an overgeneralization if "a matter of luck" is understood as "entirely a matter of luck"; whereas if actions that are partially but not entirely matters of luck are included, then premise (ii), "If an action is a matter of luck, then it is not free," is an overgeneralization. Either way, at least one premise is probably false, in my view.

However, I don't think we need those generalizations to make a "luck" argument; it's enough to consider specific cases of indetermination of the wrong sort, so to speak. (Perhaps this sort of objection would work better against the "disappearing agent" kind of argument, though I would need to modify it; I'm not sure, but I will give it in this context.)

Let us say, for example, that the previous states of agents A and B up to decisions D(A) and D(B) (say, S1(A) and S1(B) are all of those states) give some courses of action C(1,A), ..., C(n,A) (and the same for B: C(1,B), etc.) expected values V(1,A) > V(2,A) > ... > V(n,A), where V(1,A) is a large positive value and V(n,A) is a large negative value (and the same for B). If needed, we may even posit that the agents consciously assign numerical values, though I don't think this is required. No other courses of action have been evaluated.

Yet, in spite of that, states S1(A) do not determine (causally or in any other way, if there is one) that C(1,A) will happen. They do not even determine that one of the C(j,A) will happen, for some j. The same goes for agent B. Those previous states might make it probable that C(1,A) (respectively, C(1,B)) will happen, if there is some non-epistemic sense of probability (a matter on which I take no stance), but they do not determine it.

However, as it happens, in the first mental state after S1(A) (say, S2(A)), A intends to implement C(n,A). (If time and mental states are continuous rather than discrete, S2(A) will approach S1(A) indefinitely; but to the extent that we can talk about determined or undetermined states, the point of the scenario is that something changes in a way that goes strongly against the agent's evaluations up to that point.) In the respective case for B, in S2(B) she intends to implement a course of action that is not one of the C(j,B) and had not been considered before.
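For concreteness, here is a small sketch of my own of the mismatch the scenario stipulates between the evaluations in S1 and the intentions in S2 (the names and numbers are made up for illustration only):

# Courses of action evaluated in S1, with strictly ranked expected values
# V(1) > V(2) > ... > V(n): V(1) large and positive, V(n) large and negative.
values_A = {"C1_A": 100.0, "C2_A": 40.0, "Cn_A": -100.0}
values_B = {"C1_B": 100.0, "C2_B": 40.0, "Cn_B": -100.0}

# What the scenario stipulates the agents intend in S2, despite those evaluations:
intended_A = "Cn_A"      # A intends the worst-valued evaluated option
intended_B = "C_new_B"   # B intends an option never evaluated in S1(B)

assert intended_A == min(values_A, key=values_A.get)   # runs against all of A's evaluations
assert intended_B not in values_B                      # not among B's evaluated options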

In those cases, my assessment is that neither agent made a free choice. In fact, it seems to me that, going by the description of the events, they made no choice at all; rather, the "choice" happened to them.
That's independent of whether or not there are souls. If humans or other agents have souls, I would still say we are not souls but souls plus some other stuff; but regardless of what we are, the previous states I'm considering in S1(A) and S1(B) include any previous states of the soul, if there is one, in any higher reference frame or whatever reference frame there is.

The upshot seems to be that, probably, while we do not need determination by previous states in order to have freedom, we do need causation by previous states, and causation of a specific sort. Lack of the proper sort of causation results either in choices that are not free (or are less free, depending on the case) or in events that are not choices in the first place.

Hi Marcus,

I have two questions about the "disappearing agent" argument. First, consider the characters in the P2P videogame. They kinda-sorta have beliefs and desires; at the very least, they say things to each other that are naturally interpreted as attributions of beliefs and desires to self and others. But at the same time, we, the players in our higher reference frame, also have beliefs and desires. Would you agree that our beliefs and desires are *more relevant* to explaining the actions of the characters in the game than the beliefs and desires (or "beliefs" and "desires") of the characters themselves?

Second, I'd like to ask you to say more about the relation between higher-reference-frame beliefs and desires and higher-reference-frame decisions. Do the beliefs and desires constrain the decisions at all? Do they make decisions more probable when they favor them? If not, that's enough to give one a serious case of (Sartrean) vertigo.
