In my last post, I tried to draw attention to the way in which a thinker’s ability to assess what her evidence supports can fluctuate over time. This is in some ways similar to the effect temptation can have on an agent’s evaluative judgment, causing temporary distortions in what she perceives as desirable or worth doing. We can identify these episodes as bouts of temptation by reference to their typical causes – strong emotion, certain environmental stimuli – and by their temporal profile, which tends to be relatively brief and followed by regret.
The question I raised concerned whether it is possible to weather epistemic temptation without manipulating oneself. If I formed the belief that P in the past, having taken my epistemic reasons sufficiently to support that conclusion, can I continue to believe it even if those same reasons now strike me as inadequate and P no longer seems to me to be true? Specifically, is it up to me whether to continue to believe that P in a way that doesn't involve inducing belief by performing some intermediary action like refusing to think about it, going to sleep, and so forth?
In the comments, John Fischer rightly pointed out that we should distinguish between actually continuing to believe that P in such cases and merely “maintaining” that P, which would involve asserting that P and even thinking to oneself that P, but which would fall short of genuinely believing. This certainly does seem like something we can do to get through bouts of insecurity, and should perhaps be treated as the null hypothesis here. So my question is, can we do more than merely maintain that P – is it up to us genuinely to continue to believe, without some mediating action? Can we exercise doxastic self-control with respect to our beliefs over time?
I’ll argue, in a roundabout way, that we can: by attempting to diagnose why many philosophers have thought that it’s impossible, and explaining why I think we should reject this line of thought. I will call the line of thought the “Transparency View,” and although I don’t claim that any one philosopher has explicitly advocated for the view quite as I’ll present it, I think it has wide appeal.
It starts with the deep and important insight that belief is an attitude that embodies a thinker’s take on what is true. The Transparency View holds that this fact about belief is centrally manifested by a conceptual constraint on doxastic deliberation: that the deliberative question of whether to believe that P is transparent to the question whether P, in that the former question can only be settled by considerations bearing on the factual question of whether P is true. As Nishi Shah characterizes the thesis, “… as long as one is considering the deliberative question of what to believe, these two questions must be considered to be answered by, and answerable to, the same set of considerations. The seamless shift from belief to truth is not a quirky feature of human psychology, but something that is demanded by the nature of first-personal doxastic deliberation.”
A second context in which transparency is often said to be manifested is in a thinker’s capacity to know what she believes. The idea here, influentially advocated by Richard Moran, is that a rational thinker need not rack her brain or theorize about herself in order to know whether she believes that P. Rather, because a rational thinker’s beliefs are settled by what she takes to be true, she can come to know that she believes that P at any point in time simply by reflecting on her reasons for thinking P is true and avowing her conclusion as her belief. That is, the question ‘do I believe that P?’ is held to be transparent to the question ‘is P true?’
Perhaps surprisingly, it’s meant to be consistent with both of these appeals to transparency (if not exactly highlighted) that most of us are in fact woefully irrational thinkers. We all believe many false things, and our beliefs frequently do not comport with our evidence. Knowing this, how can I be justified in taking my beliefs to be settled by my evidence, let alone true? The answer, I think, is that transparency only holds from the first-person perspective. If I adopt a “theoretical,” third-personal point of view on myself, conceiving of myself as any other person who is subject to cognitive error and frequently mistaken, the possibility comes into view that my beliefs are false or unsupported by my evidence. But when I adopt the first-personal perspective, my potentially fallible mind is rendered invisible to me; from this perspective, I cannot entertain the possibility that my beliefs might deviate from what fact and evidence require. As Bernard Williams famously argued, reflection about what to believe makes no essential use of the concept of the first person: “When I think about the world, and try to decide the truth about it, I think about the world, and I make statements, or ask questions, which are about it and not about me.”
The problem with adopting a third-personal perspective on yourself is that it exhibits alienation or estrangement. It is possible to do, and sometimes necessary; if my belief that P is recalcitrantly resistant to my take on whether P, I will have to theorize about myself or pay an analyst to do it for me in order to know about it. And I can certainly reflect in general on my cognitive failings and devise strategies for improving my epistemic success. But with respect to any particular belief, adopting the third-personal perspective is a symptom of a rational breakdown, in that my judgment has failed to determine what I believe in the way it ought to. Transparency is thus understood as a rational ideal for belief as well as a conceptual constraint on doxastic deliberation.
The latter constraint is what purports to rule out the possibility of believing at will. To believe at will, the belief would need to be the conclusion of a process explicitly aimed at issuing in a belief, but one that didn’t appeal only to evidence. For if the thinker does appeal only to questions bearing on the truth of P, then she’s simply oriented toward her belief in the ordinary epistemic way; this would not count as believing at will. But if she appeals to non-evidential considerations in her effort to form the belief, such as that it would be worthwhile to believe that P, then she has violated a conceptual requirement for being engaged in a process aimed at issuing in a belief. By her own lights, the considerations she has mustered do not settle the question that must be positively answered if she is to believe that P – namely, whether P (as Pamela Hieronymi has emphasized). Whatever issues from this process would explicitly fail to meet the correctness norm on belief, and thus would not count as a belief at all.
Finally, advocates of transparency have held that treating a past judgment as settling for oneself what one should believe now exhibits a similarly problematic form of alienation, more akin to programming oneself than being a rational thinker. As Matt Boyle writes, “I do not recall what I believe about whether P unless I recall what now looks to me to be the truth as to whether P. What I call to mind must be not merely my past assessment of it, but my present assessment of it – the assessment that currently strikes me as correct.” At each point in time, the thinker is unalienated from her belief that P only if she now actively holds P to be true on the basis of some set of reasons deemed sufficient. If she only believes P because of some earlier judgment, the objection is again that she must adopt a third-personal perspective on herself, seeing her judgment as a kind of intervention on herself to bring it about that she has a certain belief.
To summarize, if the Transparency View is correct, then it’s impossible to weather epistemic temptation in a way that’s not self-manipulative, epistemically irrational, or problematically alienated. For one thing, it would require my taking a past judgment to settle for me now what to believe, and this would involve third-personal estrangement from myself. Even worse, this isn’t something I could possibly choose to do. That would require asking the deliberative question of whether to continue believing that P, and this question would have to be treated as transparent to the evidence as I now (in my corrupted state) see it.
Okay: so what’s wrong with this view? This post is getting long, so I’m again going to have to quickly say something provocative and leave a more thorough defense for the comment thread and for my next post. I think the arguments we have (briefly) considered rest on a mistaken conception of what the first-personal perspective must be. The mistake is to presuppose that the first-personal perspective is necessarily synchronic, encompassing only the present moment. Although it is never made explicit, the ideal of transparency in fact demands this presupposition. If it were allowed that multiple, conflicting answers to the question “Is P true?” are first-personally available, the relation between truth and belief could not be conceptually or metaphysically unmediated. And yet, memory does give us access to appearances of the truth that can differ from what now appears to be true. It may even be possible to have psychological access to a future take on what is the case. If these past and future stances were not implicitly ruled out as part of a thinker’s first-personal perspective, transparency would fail; the capacity to avow the belief that P on the basis of reflection on evidence would require the mediation of a further answer to the question ‘When?’
I think we should reject the idea that we are rationally limited to occupying a synchronic, present-directed perspective – that the first person is essentially indexed to ‘now’. It is open to me to conceive of myself as occupying a genuinely diachronic first-personal perspective that encompasses past, present, and even future assessments of the truth as potentially my own. I don’t think a diachronic self-conception is required in order to be a believer, or to possess the concept of belief, but I don’t think it is ruled out either. Let us grant that possessing the concept of belief involves accepting a norm of correctness to the effect that a belief is correct only if it is true. It does not follow from accepting this norm that I am compelled to believe whatever my evidence now seems to me to require. As a reflective creature, I am in a position to recognize that my capacity to evaluate what is true vacillates over time. I can therefore see that the best way of satisfying the norm of believing P only if it is true may not be always to let my present perspective determine what I believe. A legitimately epistemic concern for the truth does not compel me to conceive of past and future judgments as akin to the testimony of another person, something that could never settle for me what to believe unless I now take the further step of deeming it to be sufficient first-order evidence. Rather, I can consider all of them mine, and therefore candidates for constituting my considered stance on what is true.
If this is right, then when I face a conflict between what the evidence seemed to support in the past and what it seems to support now, I have a choice about which way to go. I am not required to take my present perspective to speak for me, since I conceive of the past perspective as equally mine. I can elect to continue to believe that P on the basis of my past perspective, even though it conflicts with how things seem to me right now, without going against my own best judgment. Rather, I take my best judgment to consist in my earlier rather than my current perspective. The deliberative question about whether to form the belief that P might necessarily be transparent to one's current take on the evidence for P, but for the diachronic thinker, the deliberative question about whether to continue to believe need not be transparent in this way.
So you'll get rid of knowledge as stable belief? Those models suggest that a belief that changes with new facts is not knowledge, but in your example beliefs are changing without any new input - "no cognitive possession that can be rendered doubtful seems fit to be called ‘knowledge’".
People do differentiate between strength of a belief and strength of an epistemic position, which might fit in with the scientific ideal: "I am quite attached to X and will defend it vigorously, but would change to Y if presented with Z". I don't see that as alienating, but as reasonable.
Posted by: David Duffy | 08/07/2015 at 02:55 AM
Right -- I don't think knowledge is just stable belief (for one thing, I think the belief has to be true). But I don't think anything I've said so far has direct implications for our understanding of knowledge. I do think it's sometimes rationally permissible to change one's beliefs without a change in evidence (and so, for example, I don't accept the Bayesian Conditionalization principle in an unqualified form). Claiming that a belief is rationally permissible doesn't amount to saying that it counts as knowledge, though.
Posted by: Sarah Paul | 08/08/2015 at 02:13 PM
Hi again. This stuff is fascinating! Okay, what I'm about to say is half-baked but I hope not incoherent. I wonder whether the mere fact that a first-person perspective is genuinely diachronic suffices to avoid the problem of self-manipulation, epistemic irrationality, and alienation. A person is a genuinely diachronic thing, yet I can fail to identify with my past persons (so to speak). I am the same person now as I was when I was 11 years old, but I’m not now that boy in some other way—some other way that is more really me. Analogously, a first-person perspective may be a genuinely diachronic thing, yet I can fail to identify with my past perspectives in the very same way that I don’t identify with my past persons: my past perspectives and my present one are the same perspective—mine—despite changes, yet I might not identify with all or all parts of them. It seems like when the evidence strikes me differently than it once did (in times of epistemic temptation), I become distanced from the relevant past perspective. Why, on your view, do I continue to identify with that past perspective? I can, of course, admit that the relevant past perspective and my present perspective are the same perspective—mine—but why do I continue to identify with this perspective despite the difference in the way the evidence strikes me now? I realize that this is an unfair question, since I haven’t characterized what this deeper kind of identification is, but it can’t merely be the first-person perspective itself, on your view, since the same first-person perspective can undergo changes of ‘appearances of the truth', some of which I will no longer identify with. I guess I’m really just asking you to say more about what a genuinely diachronic first-person perspective consists in.
Posted by: Devlin Russell | 08/09/2015 at 06:07 PM
Never mind. I can see now how the choice to go with how things once struck me and continue to believe can identify me with that previous perspective. But how can it be the best way of satisfying the norm of believing P only if it is true to go with my past perspective, since I can't tell which perspective is in a better position to determine the truth?
Posted by: Devlin Russell | 08/10/2015 at 06:07 AM
@Devlin, yeah, these are really good questions. I ultimately do want to say more about what the first-personal perspective is, but it's a difficult task. But I think you're right that it's not enough to explain temporal identification just to point out that it's possible to adopt a diachronic first-person perspective. It's a necessary condition, but not sufficient.
What I want to say about the choice to identify with one perspective over another is that you don't need a reason. I don't think it can be done for a pragmatic reason, but I also don't think any further epistemic reason is needed. You have to see it as consistent with the norm of believing P only if true, but you don't need further evidence to tell you that your past perspective is the correct one. Since by hypothesis, that evidence isn't accessible to the thinker in the cases at issue, the only alternative would be that she is required to withhold belief. But this seems to me too severe; it would often require that the thinker abandon a belief that was in fact correct. It would be an expression of valuing the avoidance of error above the pursuit of significant truths, and I don't think this is required.
Just to be clear, it's not that I think you end up believing that P for no reason. Your reasons just are the first-order reasons you took to be sufficient at the earlier time. It's just that I think you need no further reason to guide your choice to either retain your belief or give it up in these situations. It's a kind of radical choice, I guess. What do you think?
Posted by: Sarah Paul | 08/10/2015 at 11:17 AM
I think the answer to 'what is a diachronic first-person perspective (on the world)?' will help to explain and justify an answer to 'what is the believer required to do when she is faced with epistemic temptation?'. For instance, if what unifies a perspective over time is something like a pursuit of significant truths, then we could see why the believer would side with her previous perspective (as you would like). Does that make sense? I'm just spit-balling now. Ultimately, I'm just very intrigued by this idea of a diachronic believer or knowledge-pursuer.
Posted by: Devlin Russell | 08/10/2015 at 11:56 AM
Right, that makes sense to me. It might even depend on a particular thinker's epistemic goals, such that someone who prioritizes the avoidance of error above all else really should just withhold belief in our cases, whereas someone with different epistemic goals is permitted to continue believing. I like the idea that these goals might play a role in structuring and unifying a believer's perspective over time.
Posted by: Sarah Paul | 08/10/2015 at 12:07 PM