
06/20/2010

Comments


Jennifer Nagel

Hi Angel,

Lovely paper.

I like the "evidence-seeking" experiments, but I wonder whether they support IRI as directly as you think. On one classic formulation of the IRI vs. intellectualism debate, what separates the two sides is whether stakes themselves matter to whether one's true belief constitutes knowledge. You've found that high-stakes subjects are expected to collect more evidence than their low-stakes counterparts in order to count as knowing; I wonder whether high-stakes subjects are expected to collect more evidence than their low-stakes counterparts even just to make up their minds (= to have an outright belief on the issue at all).

I'm a bit puzzled to see you cite my 2008 paper as arguing that our intuitions on the Bank cases are the products of cognitive error; actually, I argued that these intuitions (for Stanley's version of the cases) are completely correct, but do not support IRI over traditional intellectualism exactly because we see High-Stakes as still seeking evidence where Low-Stakes has already made up her mind. The subjects we are evaluating in these cases are not evenly matched for evidence and degree of outright belief.

Yes, DeRose stipulates in his version of the case that High has equal confidence, but it's not clear that this stipulation is consistent with the other narrative content in the case (after all, High says he feels the need to go into the bank and "make sure", which suggests he feels that it is an open question whether the bank will be open). We could read him as having equal confidence in the sense of assigning the same subjective probability to the proposition that the bank will be open, but it is not clear that equal confidence in that sense equally guarantees the presence of outright belief across high and low-stakes (as Weatherson has argued). I noted that it's possible for High Stakes subjects to close on an outright belief on the basis of skimpy evidence, but to get them to close one needs to put them under conditions like haste or distraction (Kruglanski and Webster 1996: "Motivated Closing of the Mind"). But because such conditions compromise accuracy, we may be responding to that shift in accuracy when we deny knowledge, rather than responding to stakes as such. In any event, it's worth keeping an eye on the relationships between stakes, confidence and accuracy here: even if certain patterns of response correlate to shifting stakes, it is possible that those patterns are really tracking variations in confidence and accuracy ordinarily produced by changes in stakes. (Or so the intellectualist will argue.)

Your discussion of contextualism at 26ff is interesting. I think DeRose has been stressing lately that speakers who are trying to predict or explain the behavior of other agents will naturally select epistemic standards appropriate to those agents -- "the *speakers' own conversational purposes* call for such subject-appropriate standards" (2009, 240). He might say that the purposes we'd naturally have in evaluating your (**) on page 26 really do shift between John and Peter, where it's not so natural to shift purposes in the more abstract context of evaluating those statements about hands and BIVs on your page 28. Just a thought.

Best, JN

Angel Pinillos

Thank you for the comments, Jennifer. Yes, I thought that the confidence issue you bring up could be addressed in the ignorant high stakes case. For in that case, the protagonist should be as confident as the protagonist in the low stakes case. But there was a statistically significant difference between the two. So I am tempted to think that the high stakes subjects are not thinking that more is needed to believe. Can you suggest another manipulation on these experiments that could further alleviate your worry?
Also, yes I messed up that footnote about you. I should just say that you thought that the intuitions do not support IRI (I should not have added that you thought the intuitions themselves are mistaken).
And about contextualism and the juxtaposed cases, it is possible for contextualists to drop the "salience of error" account and adopt a view where the speaker takes the standards of the agents they are talking about (even two different ones in one breath, as would be required given my results). But it is not clear to me why that couldn't also happen when talking about people in skeptical and normal settings (in one breath). Why would the abstract nature of the topic make a difference? I guess I would like to see that view worked out. My point in the paper was just to say that IRI has a really elegant explanation of this and contextualism does not. Thanks again for your comments!!

Jennifer Nagel

Hi Angel,

Yes, there's some difference in your results between Low-Stakes (median=2) and Ignorant High-Stakes (median=3), but you should be worried that there is a much bigger difference going up to Perceived High-Stakes (median=5). If the stakes are the same, why are Ignorant and Perceived High-Stakes so far apart?

I've argued that the hindsight bias is playing a role in those Ignorant High-Stakes cases; once we know what John has to lose, we'll have some tendency to evaluate his epistemic behavior as though he also knew his stakes (even when we are explicitly aware that he does not). You can test this by looking at manipulations that augment and diminish that bias.

You might also want to test Ignorant Low-Stakes: what happens when subjects are described as thinking of themselves that they are in high-stakes, but nonetheless reason/collect evidence as though they were in low? Prediction: we won't like that.

And incidentally I agree that contextualism has a big job to do in explaining just what circumstances make various epistemic standards appropriate, and IRI may be better positioned there, at least for some cases -- although there are other cases that are more problematic for IRI.

Angel Pinillos

Hi Jennifer. Yes, the medians are as reported, but there is some danger in taking them to mean more than they do. There is still no statistically significant difference between Ignorant High-Stakes and High-Stakes (p=.099). But there are statistically significant differences between Ignorant High-Stakes and Low-Stakes, and also between High-Stakes and Low-Stakes. However, I do agree that the issues you raise are probably having *some* effect. And that if you run Ignorant Low-Stakes (as you suggested) you will be able to see them more clearly. The issue, however, is whether these considerations can fully account for the data. If they could, then I think the numbers should have come out different. My view is that IRI still gives us a very elegant explanation of the data (and it has independent plausibility). Yet, I agree that further studies would be needed to fully rule out other hypotheses. Also, about the hindsight bias, do you expect that more reflective agents will be less likely to succumb to it? I can run a CRT (cognitive reflection test) on subjects and see if there is a connection between intelligence/reflectiveness and responses. I did this for low/high stakes and found no effect.

Jennifer Nagel

I should say I'm using "hindsight" as a label for a broader category that could be better labelled "epistemic egocentrism", the tendency to project one's own privileged concerns or beliefs onto more naive agents. (Hindsight strictly speaking involves a skewed assessment of one particular more naive agent -- one's own past self.)

There's a relationship between cognitive ability and certain egocentrism problems (Stanovich & West 1998) but not others (Stanovich & West 2008). I think the effect you are looking at is more like the between-subjects positive-outcome egocentrism problem of the 2008 paper, which is not sensitive to cognitive ability. But you may find something more direct if you dig into the literature on this.

harvey brockman

Hi Ángel,
Thanks for posting your work; just my first observation concerning the relationship between your intro. and conclusion:
The thesis with which you introduce your paper -- "Whether an agent knows P may depend on the practical costs of her being wrong about P" -- is certainly not the same thesis as "Ordinary people's attributions of knowledge may be sensitive to practical interests". Your paper's conclusion regards "evidence" in support of this second ("Ordinary people's attributions...") thesis. And it also contains the suggestion that this evidence supports the "Whether an agent knows P..." thesis. But "Whether an agent knows P..." is certainly no empirical thesis; I would say that the business of experimentation and evidence gathering is not relevant to its defense (or rejection). You don't mean to connect the two theses in this way do you?: the thesis "Whether an agent knows P..." implies the thesis "Ordinary people's attributions..."
Best Regards,
Harvey

Angel Pinillos

Hi Harvey. Thanks for your comment. I discuss the issue you raise in my paper. It is possible to think that whether knowledge is sensitive to stakes is something that is pretty much independent of evidence from the folk. But this is not required. It is possible to think that evidence from the folk is highly relevant. One way of working this out is by saying that sensitivity to stakes reflects semantic or conceptual competence (so folk judgments would be pertinent). This is a perfectly coherent thesis and I think it is one that is supported by the data that I present here. In the paper, I also discuss other ways of working out the idea that folk data is relevant to establishing the thesis that knowledge is sensitive to stakes. Another thing to keep in mind is that I think that defenders of IRI often appeal to folk attributions of knowledge to defend their thesis (not that it's the most important thing), so I think that the assumption is not exotic. Yet another thing to keep in mind (which is not directly relevant to your comment) is that IRI entails the denial of contextualism. Evidence from the folk is relevant for assessing contextualism, so trivially, it is relevant for assessing IRI.

harvey brockman

Hi Ángel,
If we say our epistemological question is "What is knowledge?", we can't exactly begin our investigation with the hypothesis "Semantically/Conceptually competent subjects will display 'sensitive-to-stakes' attributions of 'knowledge'", can we? After all, we won't have criteria for what we're going to count as "competent" unless we already have criteria for what we're going to count as "knowledge". And that -- "What is knowledge?" -- is our question.
Perhaps what I don't understand yet is how you are proposing to LINK the questions "What is knowledge?" and "How do the folk attribute knowledge?". Of course, if the thesis "Knowledge is sensitive to stakes" IS the thesis "When the folk use 'knowledge', they use it in a way that is sensitive to stakes", then gathering folk-use-of-'knowledge' data is called for. But "When the folk use 'knowledge'...." isn't, I would say, an "epistemological" thesis in the sense that "Knowledge IS sensitive to stakes" is an epistemological thesis.
Regards,
Harvey

Angel Pinillos

Hi Harvey,
Thank you for your comment. I don't see that I "begin our investigation with the hypothesis that Semantically/Conceptually competent subjects will display sensitive-to-stakes attributions of knowledge". I begin by considering the thesis, supported by standard philosophical arguments, that knowledge is sensitive to stakes. I then note that ordinary speakers seem to behave as if it is true (anecdotal evidence). Informed by this, I form the hypothesis that sensitivity to stakes is something that falls out of the meaning of 'knows'. I am certainly free to form such a hypothesis and to go out and test it. Since ordinary speakers will know the meaning of 'know', their behavior will be relevant to assessing the hypothesis. But now if the hypothesis is true, then we get that knowledge is sensitive to stakes (an epistemic thesis). The reasoning I just went through is one way to get the link between folk behavior and facts about knowledge (you were wondering about this link). I don't want to say that folk behavior is relevant to all theses in epistemology or anything like that. It depends on the case. Certainly once we start talking about meaning, folk behavior matters.

harvey brockman

Hi Ángel,
Let me try this re-statement of your argument as I understand it from your last post:

1. Ordinary speakers know the meaning of "knowledge".
Therefore, 2. If ordinary speakers say that knowledge is sensitive to stakes, then "sensitivity to stakes" is part of the meaning of "knowledge".
3. If "sensitivity to stakes" is part of the meaning of "knowledge", then knowledge is sensitive to stakes.
4. Ordinary speakers say that knowledge is sensitive to stakes.
Therefore, 5. Knowledge is sensitive to stakes.

It seems to me that there must be great gaps between some of the steps here (and/or underlying, unarticulated assumptions), and so this re-statement can't be right, but I am struggling to do any better at the moment...

Regards,
Harvey

Angel Pinillos

Hi Harvey,

I would not accept that formulation (and I do not know anyone that does). It is way too strong. Perhaps the closest thing to your reconstruction that I can endorse can be gotten by replacing (2) with the following:

(2)* If ordinary people behave as if knowledge is sensitive to stakes in the way that the results from the paper reveal, then this is some new evidence that sensitivity to stakes is constitutive of the meaning of 'knows'.

(modifying the other stuff accordingly) The conclusion will then have to be changed to this:

(5)* there is some new evidence that knowledge is sensitive to stakes.

In order to fully show that knowledge is sensitive to stakes you also need to do other things. A lot of this work has been done by other philosophers in very skillful ways. In my paper I say that my task is to provide some new evidence that knowledge is sensitive to stakes (and that this evidence is less well explained by traditional contextualism).

harvey brockman

Hi Ángel,
Let me ask this: Why does (2)* NOT read "If ordinary people behave as if knowledge is sensitive to stakes in the way that the results from the paper reveal, then this is some new evidence that the ordinary person's use of 'knowledge' includes a 'sensitive to stakes' component"? And so, why does (5)* NOT read "There is some new evidence that the ordinary person's use of 'knowledge' includes a 'sensitive to stakes' component"?
I was supposing that you were perhaps using "Knowledge is sensitive to stakes" as a shorthand for "What the ordinary person means by 'knowledge' includes a 'sensitive to stakes' component", and that the philosophical issue did not -- could not? -- go beyond that. But that doesn't seem to be the case, from what you say in your latest reply. So what are the SORTS of things one needs to do "in order to *fully* show that knowledge is sensitive to stakes"? This "fully showing" must be, I imagine, the "philosophical" -- as distinct from the "science" -- part of the experimental philosopher's two-part activity. (I take this characterization of experimental philosophy from a James Beebe post of a bit back...)

Angel Pinillos

Hi Harvey,
Let me try to answer your questions. First, those modifications you suggested would be acceptable. But they are not in competition with my versions of 2* and 5*. I just need that my version of 2* be plausible (for the purposes of this blog thread).
Second, to fully show that knowledge is sensitive to stakes you need convincing evidence that it is. I am not fully satisfied with just relying on the experimental evidence we have up until now. More work should be done. Fortunately, there are other arguments out there which further support the idea that knowledge is sensitive to stakes. When I put it all together, the case for the thesis looks pretty good.
Finally, I am not sure about the "two-part" activity point (science and philosophy). I do not want to claim that the thesis (knowledge is sensitive to stakes) CAN only be "fully" shown to be true if one uses, among other things, philosophical methods. Supposing there is something that answers to "philosophical method", this modal claim seems too strong for me to endorse.

harvey brockman

Hi Ángel,
Thanks for going through these inquiries with me; maybe now I've caught up with the argument of your paper. Can we say it is this?:
1. The behavior of ordinary people is evidence for determining what a word means.
Therefore, 2. If ordinary people behave as if knowledge is sensitive to stakes in the way that the results from the paper reveal, then this is some new evidence that sensitivity to stakes is constitutive of the meaning of the word "knows".
3. The ordinary person does behave as if knowledge is sensitive to stakes.
Therefore, 4. There is some new evidence that sensitivity to stakes is constitutive of the meaning of the word "knows".
Regards,
Harvey

Josh May

Very interesting paper! It's great to see the use of new methods for approaching the issue of folk sensitivity to stakes. Here are some thoughts, for what they’re worth.

(1) Previous Studies

Your main complaint about our previous failures to find significant folk sensitivity to stakes is that asking subjects about a sincere assertion of knowledge (or denial) creates a bias for subjects to just agree with the protagonist, via e.g. the rule of accommodation (p. 9). But Feltz & Zarpentine used Stanley's exact cases, which include a sincere *denial* in High Stakes; yet they found that subjects tended to slightly agree on average (see p. 41, n. 6). Their mean for High Stakes with the denial of knowledge was *reverse-scored* (see n. 4), yielding a mean of 4.26. So presumably it would have been around 3.74 originally, which is on the *disagreement* side of the midpoint. Yet the 4.26 reverse-score is extremely close to what we got for the same sort of case with the ascription rather than denial: 4.6 (see Table 1, p. 270). The sample sizes might not be large enough to really tell a great deal here. But this is some evidence that people tend to *disagree* with the protagonist when she denies herself knowledge in just the same way as they *agree* when she attributes it. This is at least some positive evidence that the rule of accommodation is not creating a bias here. (Wesley's new study, posted recently on this blog, might be relevant here as well.) Or am I missing something?

You also register, in the first paragraph of sect. 4 (p. 10), another worry about the previous studies. You worry about whether between-subjects studies can get each group of participants thinking the protagonist has the *same epistemic position*. You say this is especially difficult since subjects are usually only given one case. But we did a within-subjects design and found no difference. There was an order effect, but the juxtaposition in general didn’t change things. So I’m not sure about your raising this as an issue at all.

(2) Performance Error Explanation

Like Jennifer, I worry about your basic idea, i.e. about whether the "evidence-seeking" experiments clearly support IRI over other views, especially intellectualist ones. I'm not sure that enough is done to rule out some sort of performance-error or pragmatic explanation of all the results. For example, it could plausibly be, I think, that the subjects tend to glide over the knowledge part of the scenario and focus more on the practical situation instead. Although only one protagonist has high practical interests, they both face salient practical problems.

Yet the only measure taken to make sure people weren't just reading it this way seems to be your comparison in Study 3 of the group’s results with the subset who did well on the CRT (n. 37, p. 22). But I’m not sure this rules out all contrary explanations. Your characterization of the potential objection in that footnote seems a bit stronger than it need be. You suggest the rival explanation has to say subjects gave an "improper reading" of the knowledge-prompt and ignored half of it. As you say, that's pretty implausible. But couldn't the objector say more plausibly that subjects were simply picking up on the more salient features of the scenario?

Just to throw it out there, the context to me seems like one in which it would be rather natural to just read "How many times should Peter proofread before he knows there are no typos?" as "How many times should Peter proofread for typos?" My off-the-cuff suggestion is that there might be a plausible way to construe a kind of performance "error" model that even the high CRT-scorers might be susceptible to. I put "error" in quotes because this model holds that it's a natural pragmatic phenomenon (or something like that), which might not be much of a reprehensible error. After all, some pragmatic phenomena needn't really be construed so strongly. Perhaps a way to test this would be to separate the issues out a bit. You could ask subjects (in the first experiment, e.g.), "Peter [John] proofread his paper 2 [5] times. Does he now know that there are no typos?" The number of times corresponds to what the subjects previously said he should do. So if that then matches up with a question that is explicitly just about knowledge, that might be more solid evidence that they are connecting the two.

Much of this involves empirical claims. But they're worries nonetheless, though perhaps not devastating or anything. In any event, you've gathered some very interesting data. Thanks for posting the paper!

Angel Pinillos

Hi Josh, thanks for the comments.

About (a), from Wesley's comments on the Certain Doubts blog, it looks like there is no reverse scoring on his studies. So subjects are simply agreeing with whatever the protagonist says there across the eight conditions. I think this is pretty convincing evidence that assertions or denials of knowledge in the vignettes themselves are affecting responses (which is not supposed to be part of the design of these experiments, or at least it should not be until we really know how they are affecting responses; also, their predicted effects will depend on the background theory (IRI, contextualism, etc.)). I think we should try experiments without them (I did, and I got stake sensitivity).

About the F+Z paper, I looked at it again. I am not sure how the reverse scoring is supposed to work. Is 4.26 really the mean (for High Stakes), or did they actually get 3.74 (and report 4.26 in the paper)? In Appendix II, the description for High Stakes has "7" as "strongly disagree" just as it does for the other ones, so there is no reversal in the scales found in the surveys. If the response really is 4.26, then people disagree with the denial, which is presumably interpreted to mean they accept that there is knowledge (supporting your point, although we can question this). But then where is the reverse scoring? I'm not getting this. At any rate, if you are right about F+Z, then this result goes directly against Wesley's results, so it's a wash.

Finally, I am not sure that running a within subjects experiment ensures that the evidence perceived is the same across scenarios. What is the evidence that this is the effect of doing a within subjects experiment? Maybe it does better but I don't see that it adequately alleviates the problem.

(b) Yes, it is possible that there is some natural pragmatic phenomenon that can explain why people replace the "knowledge" question for another one (that is not about knowledge). I would like to hear a specific proposal about this so we can test it. I proposed one in the paper and I tested it. I think people are relying on some (perhaps implicit) principle connecting knowledge and the rationality of action. But this just supports the thesis that folk attributions of knowledge are indeed sensitive to stakes.

Lastly, I have thought about your suggestion of saying "Peter has proofread 5 times, does he know?". It is an excellent suggestion!!

Josh May

Hi Angel,

On (1):
Good point about Wesley's new data. However, it's very new and many of the details aren't out there; he just has the short write-up. So I'm not sure how much weight to put on it just yet.

You're also right that F&Z aren't as clear about that reverse scoring as one would hope. I think my comment wasn't very clear either! I just noticed a couple of confusing typos. Sorry about that. But I think it came across well enough. As you point out, it just depends on whether I'm reading F&Z correctly. Maybe I'll email one of them directly and ask.

Josh May

Angel,

Adam Feltz confirmed that their original, non-reverse-scored mean for their High Stakes case was 3.74. Also, I didn't mention previously that it's a bit difficult to navigate this issue because their scale is the reverse of what we used! They have 7 as strongly DISAGREE while we had it as strongly AGREE. So, F&Z have 3.74 in High Stakes where Hannah denies herself knowledge and 3.68 in Low Stakes where she attributes it to herself, which are both around slightly *agree*. So it seems their subjects were on average slightly agreeing with both the self-ascription and denial. Furthermore, their mean for High Stakes then is not quite like ours since it was on our agree side of her attributing knowledge (4.6). So, to be clear, our subjects tended to agree with High Stakes Hannah's ascription of knowledge to herself while F&Z's subjects tended to agree with High Stakes Hannah's denial of knowledge to herself.
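[Editorial aside: the score conversions in this exchange are easy to lose track of. On a 1-to-7 Likert scale, reverse scoring maps a response x to 8 − x, and the same mapping converts a mean between two scales that run in opposite directions (F&Z's 7 = strongly disagree vs. May & Sinnott-Armstrong's 7 = strongly agree). A minimal sketch of that arithmetic; the helper name is ours, not from either paper:]

```python
def reverse_score(x, scale_max=7):
    """Reverse-score a response on a 1..scale_max Likert scale.

    The identical mapping also converts a mean between two scales
    anchored in opposite directions (7 = agree vs. 7 = disagree).
    """
    return (scale_max + 1) - x

# F&Z's reverse-scored High-Stakes mean of 4.26 corresponds to an
# original (non-reverse-scored) mean of 3.74, as confirmed above:
print(round(reverse_score(4.26), 2))  # → 3.74
```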

So my attempted rebuttal to your objection was a failure! It would be really interesting to test this hypothesis out in a bit more detail. For example, a 2 x 2 study varying Stakes and Attribution-Valence. This could be really important for the epistemology debates but also for x-phi generally.

Of course, I would think that the epistemic views on offer would at least initially predict the relevant responses regardless of the attribution's valence. That is, if stakes-sensitivity really is this obvious, widespread, common phenomenon that philosophers thought was clearly exemplified in these cases, then the predictions should have at least *initially* been independent of whether it was attributed or denied. Maybe not for DeRose, since he fairly explicitly thinks this matters. But not for Stanley and some others. It's not that they can't slightly modify the relevant argument for their views now in light of this. I just think it's still important and interesting that we didn't get the drastic change in judgments that arguably many expected.

On that note, that's something that I find somewhat lacking in your method of testing for stakes-sensitivity (via evidence-seeking cases). We can't really see *directly* whether stakes changes people's ordinary *judgments* about knowledge. At least, it's not so clear. Seeing it affect their practice in some way or other is one thing; an explicit change in judgment about a case based on varying simply the stakes is another.

Anyway, just some thoughts.

Angel Pinillos

Hi Josh, just a few comments in response to your post. (1) I think my probes do in fact show an "explicit change in judgment about a case based on varying simply the stakes". The two conditions vary only with stakes, and people tend to make different (incompatible) judgments concerning knowledge (by inserting different numbers for the evidence required). (2) Anyways, I think if we are testing an invariantist theory we don't need an assertion or denial in the vignettes themselves at all. Why not test the bank vignettes without them? (3) I think there are ways that the IRI person can account for the different effects of denials and assertions in the vignettes: a sincere report (denial or assertion) in the vignette gives the subject a reason to think the knowledge claim is true, which in turn gives them a reason to think the stakes at issue are different from what they initially thought (since knowledge is sensitive to stakes). Hence, assertions or denials in the vignettes affect the perceived stakes (for IRI). This is an unintended effect of this test design. (4) I think, however, these criticisms of the x-phi studies also apply to the thought experiments of traditional philosophers. I think traditional philosophers should be more careful when they say that such and such response to a thought experiment is widely held or "natural". I think this is one of the lessons I have learned from your studies.
