

08/03/2015

Comments


Sarah (if I may),
Fascinating question! I'd like to bring up a different angle - tell me if this is off-topic. Many beliefs are self-fulfilling prophecies to varying degrees. Say I have to give a talk. The beliefs "I'll be fine" and "I'll choke" could be anywhere from completely causally independent of the outcome to completely determinative, self-fulfilling prophecies with respect to whether I give a good talk or choke.

I'd say that, for example, if P(fine|predicted-fine) = P(choke|predicted-choke) = 0.99, then it's both possible and rationally permissible to voluntarily believe I'll be fine.
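To put toy numbers on that last claim, here's a rough sketch in Python (the probabilities, names, and trial counts are just my illustrative assumptions, nothing more):

```python
import random

# Toy, strongly self-fulfilling conditional probabilities
# (the 0.99 figures from the paragraph above; purely illustrative).
P_FINE_GIVEN_BELIEVE_FINE = 0.99
P_CHOKE_GIVEN_BELIEVE_CHOKE = 0.99

def simulate_talks(belief, trials=100_000):
    """Return the fraction of talks that go fine under a given belief."""
    fine = 0
    for _ in range(trials):
        if belief == "fine":
            fine += random.random() < P_FINE_GIVEN_BELIEVE_FINE
        else:  # belief == "choke"
            fine += random.random() > P_CHOKE_GIVEN_BELIEVE_CHOKE
    return fine / trials

print("P(fine | believe fine)  ~", simulate_talks("fine"))   # about 0.99
print("P(fine | believe choke) ~", simulate_talks("choke"))  # about 0.01
```

If the two conditional probabilities really come apart like that, the practical case for voluntarily adopting the optimistic belief is just the size of that gap.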

Hi Sarah!

I’m thinking that there’s only one truth, and the truth isn’t a function of what any individual human believes.

People have freedom to discover the truth, but they don’t have freedom to determine the truth.

As each of us moves closer towards understanding the truth, our beliefs are subject to continual change, and there’s nothing wrong with that.

Sarah,

Some comments:

1. Talk about freedom and beliefs reminds me of Galen Strawson's classic book Freedom and Belief. It's most famous for his Basic Argument (which receives lots of attention here at Flickers), but most of the chapters are directed to questions about belief, like your question here.

2. I think there are many ways to parse your example, and these details might be outcome-determinative. In that sense, it is potentially under-developed (but still fascinating and worthwhile to consider).

The best and most natural interpretation, I think, is analogous to a chess engine. Chess is a game of perfect information. So the chess engine (I use Stockfish) has all of the evidence (the board position and state information) it needs. The evidence does not change, but the engine's belief about the best move can certainly change! After one minute, the engine says that the best move is pawn takes pawn. After two minutes, the engine says that the best move is rook takes pawn. Etc. In some positions, a number of moves can be roughly equal in strength, and the engine can oscillate between them. Sometimes the best move doesn't switch for 10 minutes of searching, 20 moves deep (roughly speaking), etc.
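To make that concrete, here's a rough sketch using the python-chess library and a local Stockfish binary (the executable path and time limits are my own assumptions; any UCI engine would do):

```python
import chess
import chess.engine

# Assumes the python-chess package and a Stockfish binary on the PATH;
# adjust the executable name/path for your system.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

board = chess.Board()  # or load any middlegame position from a FEN string

# Same position (same "evidence") throughout; only the amount of
# deliberation changes, and the recommended move can change with it.
for seconds in (1, 10, 60):
    info = engine.analyse(board, chess.engine.Limit(time=seconds))
    best_move = info["pv"][0]           # first move of the principal variation
    evaluation = info["score"].white()  # evaluation from White's point of view
    print(f"{seconds:>2}s of search: best move {best_move}, eval {evaluation}")

engine.quit()
```

Nothing about the position changes between the one-second answer and the one-minute answer; only the amount of search does.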

I think this example provides a good analogy for the best interpretation of your conference scenario. The evidence hasn't changed, but the amount of deliberation has. Perhaps it was unconscious or subconscious deliberation. Perhaps an event triggered you to recognize the comparative importance of one item of evidence over another (without you necessarily being aware that this happened). In any case, as time goes on, we can change our views on what is best, just like the chess engine, and this is natural and to be expected without violating dominant views in the philosophy of action/belief. The contrary view is based, I think, on a mistaken picture of the human brain as something Cartesian, infinite, or perfect, rather than a spongy kludge that takes time to think and, for any problem of sufficient complexity and ambiguity, never reaches a perfect answer - just like the chess engine.

Thanks for the comments! James, your remarks are really helpful in allowing me to say a bit more about the problem. One thing that's worth noting is that with respect to many propositions, we actually are in a position to determine the truth. For those actions that are up to me, for example, I can determine the truth about whether or not they are performed. To borrow an example from Richard Feldman, if I want to believe that the light is on, I can accomplish this by turning the light on. So the claim that we have no choice about what to believe has to be understood as excluding the possibility of influencing our beliefs by way of performing some action.

I think you touch on an important implicit motivation for denying doxastic voluntarism when you note that "there is only one truth." This does seem like a deep contrast with the sphere of action, where (plausibly) there are many distinct and incomparable values, and thus different kinds of considerations that appropriately bear on the question of what to do. This seems to leave open a kind of latitude for an agent to determine for herself which values she will pursue. But if belief is governed by the univocal consideration of truth, such that there is only one kind of consideration that appropriately bears on whether to believe that P, it seems to follow that there is no comparable latitude for a thinker to determine for herself whether to believe.

I think this last step doesn't follow, though (not that I'm attributing it to you). I think it implicitly assumes a "time-slice" conception of belief, on which a thinker's beliefs are -- or at least rationally ought to be -- a function of whatever she takes to be true or best supported by the evidence she has *at a particular time.* The cases I'm interested in bring out certain diachronic patterns that aren't visible when you assess the relationship between a thinker's beliefs and her take on her evidence only at a particular time. Just to foreshadow where I'm going with this, I think these diachronic dynamics open up the psychological and conceptual space for the thinker to exert a genuine kind of control over her beliefs -- space that doesn't exist for merely synchronic, time-slice believers.

Kip, thanks for the chess example. I think it's compatible with everything I want to say about these cases to claim that there must have been further deliberation in the interim between forming a belief and later feeling inclined to have a different take on the same evidence, although I personally wouldn't want to commit to that claim. But what I do want to deny is that in every such case, the additional deliberation constitutes or makes available *new evidence* that bears on the question of whether P. This can certainly happen, but it need not; often, it's just a further attempt to assess what the evidence supports that may or may not be more accurate than your first attempt. And specifically when it's a result of what I'm calling "epistemic temptation," caused by corrupting influences like emotion, peer pressure, priming effects, etc., it's likely to be less accurate.

Sarah, right, I agree. I think the chess example makes clear that you wouldn't normally call the new deliberation new evidence. :)

Sarah,

Thanks for your reply. I agree with you that it's possible for an individual human to help determine the truth. But I'm also thinking that we need to keep things in perspective.

If a person performs an action (e.g., turns on a light), she effectively helps to determine the truth. However, in a bigger picture sense, her actions only affect an infinitesimally small portion of the truth (i.e., 1/n), and the entire balance of the truth (i.e., 99.99999…%) is something which she must discover – she doesn’t determine it. So with that perspective in mind, I’m thinking that it’s fair to say that people generally don’t determine the truth; they must discover it.

Perhaps the following is a reasonable definition for the term “the truth”: The set of all ideas that accurately represent reality for a given moment of time (including objective statements, but not subjective judgments). Does that sound about right to you?

Hi Sarah,

Thanks for the great post!

Near the end you wonder “whether it is psychologically and conceptually possible to maintain [your] belief directly”.

I’d like to request a little clarification of what you take yourself to be asking here. For instance, are you asking what it would be for you to maintain direct control over your belief? If so, are you asking what it would be to directly control the *content* of your belief, or the *attitude* that you adopt towards such content, or perhaps something else? And should we think that when you are maintaining direct control over (the content of?) your belief, that this is a causal relation?

I'm sure you'll address these sorts of issues in your next post, but prior to having a better sense of what it would be for a thinker to hold, maintain, or directly control her belief, it's difficult to answer your question about the relevant psychological and conceptual possibilities.

Thanks for writing such an interesting post. As a psychological matter, I find the traditional "indirect view" - that belief formation and retention is always the result of an unmediated connection to evidential concerns - incredibly implausible. But I was wondering whether the phenomenon of doxastic self-control is an example for the "direct" view? I ask because the link between wavering in confidence about X and belief in the example seemed closely mediated by how the agent was evaluating her evidence that it was true, which makes it sound like the "indirect" view after all. On the other hand, it's pretty hard to assess the philosophical significance of the direct/indirect contrast with respect to doxastic voluntarism in the first place given, as you point out, what we know about how we collect, evaluate, accept, and reject evidence - processes that are themselves highly sensitive to volition, among other things. In other words, if the agent in your case is making a judgment to retain or reject X based on really subjective evaluations of evidence that are themselves disconnected from whether it's actually true, then what does it matter whether "evidence" relates to belief judgments in the "direct" way insisted on by people in philosophy of mind?

I was also wondering whether there are some aspects of the example that may be biasing and that it would be interesting to control for. For example, conferences (though I don't know about the APA) are places associated with people being receptive to new viewpoints and data, which often leads to belief change. There's also a quasi-normative aspect here: we don't want high-status jerks intimidating people in our field in the way you described, and that resonates with us and could influence intuitions about the two questions you raised.

Hi Michael! I suppose I had in mind both the attitude and the content. Holding fixed whatever your favorite view is about the conditions under which a thinker counts as believing that P at time t1, is it in any interesting sense *up to her* whether she believes that P at time t2? I'm assuming that there is no room for anything worth calling 'volition' if her belief is entirely determined by what seems to her in the moment to be true in light of her evidence, either as a matter of psychological fact or as a conceptual truth. I'm also setting aside the sense in which it's up to her to influence her belief by performing some action, such as manipulating herself or the evidence. But I think it's an open question -- and a really interesting question -- whether any more direct form of volitional control over one's beliefs would need to be causal or not. Maybe not? It might end up depending on your theory of what it means to talk of 'volition'. Put it this way: whatever volition is, could volition help make it the case or explain the fact that the thinker believes that P at t2, except by way of some intermediating action?

Hi Sarah,

Thanks for the helpful reply!

It seems that you’re asking whether, once a thinker has genuinely come to believe some proposition to be true, her continuing belief in the truth of that proposition is under her direct volitional control, especially in the sorts of circumstances that you described above.

Assuming I've understood you, I think my worry might remain. My worry is that, so long as (what seems to me to be) the more fundamental question of whether direct volitional control over your beliefs is causal or not remains open, we will not be in a position to answer your question about the sense in which it is up to the thinker whether she *continues* believing that proposition over time. That is, to answer your question we need to know in the first place whether direct volitional control over belief is causal or not.

But, perhaps you think that we do not need an account of direct volitional control over belief prior to addressing your question?

Hi Sarah (if I may)--

What an interesting post--I like (and, as I relate below, identify with) the scenario you present. I wonder if in fact you're steering in the direction of examining rational individual belief in light of the recent discussions of epistemic peerage and the like.

It's fascinating how seemingly stray comments and even body language can influence our assessments of our own beliefs. At one conference where I presented a paper I felt fairly good about, the reception was so cool that I had doubts afterward about the force of anything I said. But one prominent senior philosopher (junior to me in years) said "It was extremely well-written and clear," which buoyed my spirits and my confidence even though s/he said nothing that evidentially boosted my confidence in its content. So we are talking about attitudes toward our beliefs that may irrationally twist even our credences in those very beliefs?

One reason I think this is important is the "psychology of the die-hard adherent," not peculiar to philosophy but evident time and again in the discipline. That is, someone quite unlike the individual you describe, who stakes out claims (usually early in the career path), sticks to them through thick and thin (usually on the strength of a series of prominent publications), and evinces a confidence not susceptible at all to the slightest epistemic temptations you describe. So might epistemic temptation be a function of an individual personality prone to self-doubt (arising from a multitude of factors) rather than some more objectified phenomenon like epistemic humility or charity or respect for epistemic peers?

Well I'm way out of my subdiscipline paygrade here (which, as a fellow faculty member of UW System, you know means less than it might for both of us!), so forgive me if I'm off the page. Looking forward to your next post.

Isn't there an uneasy mixture of psychology and epistemology here? If you were presenting at a scientific or a mathematics meeting or before a court, I could imagine a similar range of emotional states, but a different underpinning in terms of priors as to whether something is (or will be) actually knowable to you or to anyone else.

People are raising great questions and worries about the example I gave. David, I'm not certain I understood you, but is your worry that because of the subject matter, one should not actually believe that any philosophical view is true? Note that I didn't say in the example that I *knew* or took myself to know that my view is true, but maybe the idea is that in philosophy, one never even has sufficient reason to all-out believe in a view as opposed to merely assigning it some probability of being right. And if it was unreasonable of me to have formed the belief in the first place, this might matter for the question of whether it could be reasonable for me to maintain it in the face of conference-induced doubt.

I'm inclined to think that we are justified in forming all-out beliefs about philosophical questions, although I haven't said anything to defend that claim here. I think beliefs play an important cognitive role that credences are unsuited to play; they are attitudes that allow us to treat theoretical questions as settled, so that we can move on to other investigations and act on the basis of our conclusions (Richard Holton has a great paper called "Intention as a Model for Belief" where he sketches a view of belief like this that I'm very sympathetic to). But even if we're not justified in forming beliefs about philosophical questions, I think my question still stands: once a belief is formed, reasonably or not, is there anything to be said in favor of the stability of that belief that is independent of whether or not one's assessment of the evidence remains stable?

Alan, I think you're exactly right that vulnerability to what I'm calling "epistemic temptation" will vary from person to person, and that some people are more inclined to the counterpart phenomenon of dogmatism. One reason I'm interested in this question is that I think that there are structural and cultural forces that can tend to make disadvantaged groups more subject to bouts of underconfidence and advantaged groups more subject to overconfidence. Philosophical theories of epistemic rationality tend to be skewed toward preventing dogmatism, but this might be exactly what, say, women in male-dominated professions should not do -- perhaps they need to guard against bouts of underconfidence that would otherwise lead them to abandon those professions at higher rates. I'm generalizing pretty wildly, of course, but the thought is that perhaps we shouldn't think of rational requirements as applying objectively to everyone in the same way, regardless of that person's environment, psychology, or other factors.

Wesley, you're definitely right that the example has a lot of confounding factors going on. In an earlier paper, I used an example that didn't involve all of the social and professional aspects that a conference brings with it. This was during the 2012 presidential election, and so I described myself evaluating all of the evidence about whether Obama or Romney would win, forming the belief that Obama would win, but then repeatedly abandoning my belief and redeliberating about the same evidence out of neurotic insecurity and strong emotions about the possible outcomes. This example has the advantage of taking other people out of the equation, as well as the "appropriate conference mindset," although I'm sure it has other confounds.

Interesting post! I'm looking forward to the next one!

When I think about your example, I'm wondering whether it might be useful to distinguish between what one believes and what one "maintains", in the sense of explicitly thinking or saying. One can certainly "maintain" the belief that X in the latter sense, but I'm not sure one can in the former sense, especially if one defines belief as a set of dispositions of a certain sort. But I'm not sure--the example is great, and it definitely hits home for me.

Hi, Sarah,

Great questions.
I'm tentatively inclined to say it's conceptually possible to regain your degree of belief (though I'm not sure that's holding on to your belief, rather than changing back to the probabilistic assessment you made before the epistemic temptation; epistemic temptation looks like a lower probabilistic assessment to me, and it might indicate your belief has already changed), though it appears psychologically impossible to me - but it may be psychologically possible to many others; I don't know. I also would be inclined to say it's epistemically irrational to do so when it's psychologically possible.

That said, I wonder whether my [tentative] disagreement with your assessments on the matter is linked to the question of the relation between beliefs and epistemic probabilistic assessments.

On that note, I would like to ask you whether you would reply in the same manner to parallel questions about probabilistic assessments, such as:
1. Do you think there is direct control over our epistemic probabilistic assessments too?
2. Do you think it's sometimes epistemically rational to use that direct control?

Framing the matter in terms of probabilistic assessments, the "epistemic temptation" scenario you describe seems to be (if I got it right) one in which your probabilistic assessment has already changed, and not by choice: you now find yourself assigning a much lower probability to the hypothesis that View X is true than you assigned before.

So, two (I think relevant) questions would be whether you have the power (or freedom, or sort of direct control, etc.) to make a choice and modify your probabilistic assessment of that hypothesis, changing it back to what it was before your change in probabilistic assessments happened, and whether it would be epistemically rational of you to do so.

When it comes to probabilistic assessments, I think it's conceptually possible, psychologically impossible at least for some of us but maybe not for others, and epistemically irrational when psychologically possible (if ever) to change our probabilistic assessments in that fashion.

That parallels my answers in the case of belief questions, but I'm more confident in the answers in the probabilistic assessment case. In the belief case, I have some doubts related to the relation between belief and probabilistic assessments.

Angra--and I do not wish to interlope here, Sarah--what if probabilistic assessments are subjective expressions (a kind of expressivism) of conditional probability, à la Jonathan Bennett's account of conditionals? Since such expressions are potentially influenced by many factors--especially emotional ones--there may not be any objective basis for criticizing such changes as described by an account of epistemic temptation. I don't mean to say that an account of temptation needs to subscribe to a Bennett-style account of subjective expressions of conditional probability, but it seems to me that it may, and then beliefs based on such subjective expressions of probability would collapse together into one account. So the rationality (as it were) of changing assessments would be a simple matter of changed subjective probabilities. (I should say that though I am a fan of Bennett's ingenious account, I'm not an adherent. That statement alone might be counted more in Angra's favor than what I have to say here!)

Alan,

I'm not sure how you construe "subjective" and "objective" in this context, so I don't know if the following counterexample works (please let me know if I'm getting this wrong), but how about the following case?

Let's say Bob reckons that Bush will probably be the next American President (for whatever reasons). However, he then reads that some astrologers reckon, on astrological "grounds", that Clinton probably will be (source: http://abcnews.go.com/Politics/OTUS/stars-point-hillary-clinton-2016-run/story?id=16280688). Bob updates his assessment on that basis, and reckons that Clinton will probably be the next President.

It seems clear to me that Bob is changing his assessment in an epistemically improper fashion (if needed, let's assume he's between 30 and 50 years old, has lived most of his life in large American cities, and there are no extraordinarily odd circumstances in his life), regardless of whether Bob's original assessment that Bush probably will win was proper or not.

A real life example would be the astrologers cited on that page themselves, who are making (and updating) their probabilistic assessments on astrological grounds. Those are epistemically improper grounds for making probabilistic assessments - and also for changing them.

More generally, making or updating probabilistic assessments (which is also a way of making them) on the basis of methods such as tea leaves, astrology, biblical interpretation, palm reading, etc., is epistemically improper (I mean the sort of assessments in the example above; if one is assessing the probability that some astrologers will predict P, factoring in astrology makes perfect sense).
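One way to make the impropriety precise, on the assumption (mine, for illustration) that astrological predictions are probabilistically independent of who actually wins, is that Bayes' theorem then leaves Bob's credence exactly where it was:

$$
P(\text{Clinton wins} \mid \text{astrologers' prediction})
= \frac{P(\text{astrologers' prediction} \mid \text{Clinton wins}) \cdot P(\text{Clinton wins})}{P(\text{astrologers' prediction})}
= P(\text{Clinton wins})
$$

since independence makes the likelihood ratio equal to 1. Shifting one's assessment on that basis isn't really updating on evidence at all; it treats evidentially inert information as if its likelihood ratio were far from 1.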

Before I go on with examples, I'd like to ask for clarification, because I suspect I may be misinterpreting your objection here, but as far as I can tell, a philosophical theory that holds that there are no epistemically improper probabilistic assessments and/or probabilistic updates (or that we have no basis to conclude that some assessments or updates are improper) is not true.

I've not much to say in reply Angra, because I agree with you! What I was trying to do is to come up with some account that would preserve as much of the account of temptation as possible but also offer an out to rational-updating doubters. At least in terms of conditional probabilities it struck me that Bennett's account might do the trick. As an account of material implication it is non-truth-functional and in terms of subjunctives/counterfactuals it also "drops truth". Maybe a case of just thinking out loud I should have kept to myself? If so, my apologies to Sarah, and to you too.

Thank you for the fascinating post. I have a question about how you are thinking of the situation. From my perspective, if I am understanding correctly, the phenomenon is real and important, and, as you point out, both philosophically and politically (not that the two can be easily taken apart). Am I right to take it to be a phenomenon connected to ones familiar from feminist philosophy? As Gaile Pohlhaus writes in "Relational Knowing and Epistemic Injustice", "feminist standpoint theorists have analyzed how power operates within and around particular social positions in ways that are epistemically significant. For example, if a person’s social position makes her vulnerable to particular others, she must know what will be expected, noticed by, and of concern to those in relation to whom she is vulnerable, whereas the reverse is not true."

If this is the phenomenon, there are various different accounts of how it could lead to the undermining of belief, and different accounts lead to different responses to your questions (pragmatic encroachment and contextualism both come to mind). But one can also, of course, just imagine that being in the situation you describe simply leads to losing subjective confidence. And this leads to my question: are you thinking of the situation as a loss of subjective confidence that leads to a loss of belief, and hence as one in which it is epistemically rational to continue to retain the belief?

If you are thinking of it this way, then I would tend to think you are right that it is epistemically rational to hold on to the belief. But I would tend to disagree that we have direct control over this. That is, I would argue retaining the belief is only possible in unusual situations, or something like that. I mean, if it were too easily possible, then we could hold socially disadvantaged people morally responsible for their disadvantage - morally responsible for not retaining their beliefs in the face of the kinds of high-stakes situations that the disadvantaged face under conditions of injustice. And that seems to me a bad result. But I may be misinterpreting you?

Alan,

Thanks for the clarification and no need to apologize - sorry I didn't get your point earlier.

Hi Jason! Thanks for your comment. The general phenomenon I'm interested in falls more under your latter description: losing subjective confidence in a belief one has. I hesitate to put it that way only because I'm not sure what subjective confidence is, but that is the intuitive way to put it. I think this is very much a political issue, but I also think there are cases unlike my original example in which the factors affecting the thinker's confidence are less obviously due to social power structures (maybe my 2012 election case is like this). So I'm hoping to say something very general about the bare possibility of being more or less doxastically self-governing in all such cases, although I think it's important that this should complement the more specific work you mention from standpoint epistemology. One thing I really want to resist is a trend in thinking about the nature of belief that would condemn cases of relying on one's own past judgment as problematically "alienated." To me, alienation seems to be primarily an ethical notion, and I get uneasy when it's deployed without looking at other ethical features of the case, like whether relying on past judgment might actually be necessary for some people to overcome threats to their doxastic autonomy.

The question you raise has me worried, though -- I agree that it would be a bad consequence of my view if it turned out that we should hold socially disadvantaged people responsible for failing to exercise doxastic self-control. Actually, one reason I started thinking about this topic is that I think the recent emphasis on 'willpower' or 'grit' in the self-control literature in philosophy and psychology is politically problematic -- it feeds too well into neoliberal glorifications of self-reliance. Highlighting the role of unstable belief in cases that would otherwise get classified as exhibiting a lack of willpower (e.g. people in poverty who fail to meet long-term savings goals) can be helpful in counteracting that trend.

I'll have to think more about it, but if I'm right that it's possible to exercise direct self-control over our beliefs in such cases, I don't think it necessarily follows that we should hold people responsible whenever they fail to do so. For example, it might be that for most socially disadvantaged people, the enabling conditions of doxastic self-control aren't met (whatever those are), such that it's not really possible for them. Or it might be that it is possible, but that there are other exculpatory factors at work. Still, it's an important worry.

Hi Sarah. Super interesting! Correct me if I'm wrong, but it seems essential to your target phenomenon that not only does the believer's evidence (that p) remain the same but also that she has no reason to believe now that she will deliberate (about whether p) any better than when she last settled on the belief that p. This way, the believer is not re-opening deliberation because she believes she'll do a better job this time. If I got this right, (A) I wonder how common this phenomenon really is? The more common phenomenon, very close to yours, is where the believer re-opens deliberation because she believes that her earlier deliberation might have been flawed or that she is in a better position now. (B) In your example, the fact that the evidence now strikes her differently (new information) could be a reason for her to think one of the two things I just mentioned, making this an example of the more common phenomenon and not yours. So I worry that your target phenomenon is not very common. How bad would it be for your view if it were not common?

Hey Devlin! You're right in your characterization of my target phenomenon, except that I don't think I'm committed to the strong claim that the thinker must have *no* reason to believe that she would now deliberate better than she did in the past. As you point out, the fact that the evidence now strikes her differently might lend some support to the proposition that she made a mistake in her previous deliberation. I think all I need is the weaker claim that the thinker mustn't have *sufficient* reason to believe that her past deliberation was mistaken, or that she would do a better job if she redeliberated now. After all, the mere fact that the evidence now strikes her differently equally well supports the proposition that her current perspective is corrupted. As long as her higher-order evidence doesn't conclusively tell her which perspective on the evidence is (or is very likely to be) corrupted, I think there is space for something worth calling a choice about which way to go.

Still, you might be right that this kind of case is not common -- I'm genuinely not sure. But I think it might still be an important kind of case even if it's uncommon. If we do face moments like these at least sometimes, and if we are in a position to determine for ourselves how to respond, then I think this would ground the possibility of a kind of "doxastic autonomy" or self-governance that isn't just a matter of having one's beliefs be determined by the evidence as one sees it. I find this idea exciting, and a needed rebuttal to the pretty common claim that belief is "transparent" to evidence, or that we are "passive in response to" the evidence, or "at the mercy" of it.

But also, it might be that these cases are infrequent but especially ethically and politically significant, for the reasons mentioned above in conversation with Jason and Alan. I'm inclined to think (although it's largely an empirical question) that moments of instability in our beliefs about our own abilities, or whether our actions are likely to succeed given the circumstances, play a key role in causing people to abandon difficult long-term projects (Beri Marušić has great work on this). And so if there are even infrequent moments where the capacity I'm talking about could be the difference between finishing a PhD or not, sticking to a long-term savings goal, staying on the sports team, etc., then that would be cool!

