
Comments


Eddy Nahmias

Hi Wesley, since no one is weighing in, I'll just say that I really like these experiments. You get very clear results using what seem to me to be good materials to test the hypothesis. Did you use more than just this scenario? I ask just because this one includes a non-standard feature--you have an organization (FBI) as the subject of the mental states. Do you have others with individual persons as the subject? (Yes, I know Mitt, corporations are people too.)

Wesley Buckwalter

Hey Eddy, thanks for weighing in! So in the paper I do four different types of scenarios. And the scenarios are tied very closely to the work presented by Hazlett (2010) and discussed by Turri (2011). The general thinking was that knowledge sentences in those specific cases constituted the evidence that a defender of factivity needs to explain. But I totally agree with you that there's a worry in this case that talk of 'the FBI' generally might cue participants not to give strict literal readings. So I ran a follow-up experiment to make sure the results for 'know' replicated when people were asked about a specific person and not just a group:

Officer Ted asks the Police Sergeant, “Is there any information from your contact at the FBI about how the bomb was constructed?” The Sergeant told him, “Yes. Agent Smith told me that, from his investigation, he Φs the bomb was homemade.”

This was followed by the same true/false complement clauses as before, and then people were asked whether 'Agent Smith thought he Φ the bomb was homemade' or 'Agent Smith Φ that the bomb was homemade'. And with these changes in place the results seemed to clean up a little more. So for instance, 92% of those in the false 'know' condition gave the projectionist answer ('thought he Φ'). But alternatively 93% gave non-factive answers ('Agent Smith Φ-ed') for false 'believe', 83% for true 'know', and 94% for true 'believe'. So it could be that people's tendency to favor projectionist readings is even more pronounced the more explicitly the particular protagonist being projected into is described.

Dave Maier

Here's another data point, of fairly limited significance, but it has stuck in my mind. When I was about 10 or 11, I had a conversation with a peer which went something like this:

Me: You can't *know* something unless it's *true*.

Him: I can *know* that 2 + 2 = 5.

Me: No you can't, because 2 + 2 = 5 is false. You can't *know* it unless it's *true*.

[repeat until frustrated and bored]

In any case I don't think it has that much philosophical significance that philosophers often use words in particularly circumscribed senses, while others sometimes use them differently in other contexts. Words have different senses; you just have to keep them straight.

Still, I agree with my 11-year-old self that there's no point in using "know" that way (i.e. non-factively) when we have a perfectly good word for that already (= "believe"). But it seems like some people do.

Eddy Nahmias

Similar to Dave's case, I'd say about 10% of my undergrad students seem to use 'know' in the sense of 'justified in believing', since they agree with (and strongly defend) statements like: "People living thousands of years ago *knew* that the earth was flat and that the sun revolved around the earth." I feel the pull of this intuition, perhaps in part because people back then seem at least as justified in believing those things as we are justified in believing many of the things we seemingly know. (It might also be that some people are using 'know' to mean 'believe really strongly', which would also explain their response to 'knowledge' about the earth being flat, not to mention "I know that God exists," etc.)

This will betray my ignorance of contextualism, but can someone tell me why it is (as I think it is) that contextualists do not say that those people living long ago *knew* the earth was flat?

Wesley Buckwalter

So the precise details about how factivity (in the sense that no false things can be known) is implemented might depend on the particular epistemic theory considered. But in many cases, I think a lot of contextualist (/non-contextualist) theories just end up stipulating it. There is a nice discussion of this question in the first chapter of DeRose's book. When considering just how low the low epistemic standards can go, say, past requiring just a true belief, the question arises: "Why stop there? Why, for instance, think that even the truth of p is always necessary for the truth of 'S knows that p'?" DeRose ends up appealing to ordinary language and PP here when setting the floor for 'knows'. Though one could defend a contextualist theory with standards that low if one wanted!

Robert Hvistendahl

Sorry, I don't think this is relevant to this thread, but I've written up my opinion on knowing truths.

Does something need to be true for one to know it?

Does it depend on the way we define know?

Let's not forget that we produced the word 'know'; we created it for our own use, and we defined it. So we need to be careful that we are not arguing that the definition is one such thing or another, as we choose its definition. And we can choose for it to mean that something has to be true in order for one to know it; but should we? Or do we? Very interesting questions, and I love your experiments. After thinking about it for a while, I realized that in order to reach a conclusion about whether we do or should, one would actually have to define truth, but I haven't given that a go for a number of years. So I'll talk about what goes on in one's head when he 'knows'.

It's important to realize that talking about knowing is talking about a process of the mind, so it's very relevant to think about the way people feel when they think they 'know' something.

If you tell me the FBI knows the bomb is homemade, and later we find out that it is not, then either:

1. The FBI only thought they knew. Here, knowing implies truth; in this case, you lied to me, because you should have said that the FBI thinks they know the bomb was homemade.

2. The FBI really did know. Here, knowing does not imply truth and people can know false things. You did not lie to me.

I wouldn't agree with an FBI agent who said 'I know the bomb is homemade' in any case; he may strongly suspect it was, but he shouldn't claim to know such things. Maybe my overall conclusion here is that we have no way of understanding what knowing is, so I guess claiming to know is the closest we can get, in which case one can claim to know something that is not true.

We can't map our thoughts; we don't have an encyclopaedia with different types of thoughts in it. Both the specific physical process by which electrical impulses cross synapses and so on (I'm not a neuroscientist, so I don't know exactly how it works) in the particular part of the brain that is excited when we feel that something is true (if this mechanism in the brain exists), and the sensations we experience when we feel something is true (I use the term 'sensation' very loosely, just to mean any experience we have, including our thoughts, feelings, emotions, touch, any information that is available for one's interpretation), are too complex and not within our grasp to reproduce, unlike words we can list in a dictionary. I can't give you a thought, and you can't feel what I feel. This is a hindrance in our quest to understand 'knowing' and truth.

Were we able to distinguish between what goes through the head of an FBI agent when they *know* the bomb was homemade in a case where it actually was and a case where it was not, we would immediately be able to define two different words, one meaning 'to know because it is true' and the other 'to know even though it is not'. If there were a difference between the two cases, this would be strange, as the brain would have to know the real truth at the same time the FBI agent did not. There cannot be a difference in these two processes, as the brain does not know everything (I assume it doesn't).

If there were a particular neuron in the brain that was excited every time we experienced a truth, and we could see into somebody's brain and see when this neuron was excited, we would know whether things were true.

So in the end it does come down to how we define 'know'; maybe it's that simple.
What are we doing when we try to define this concept? We are trying to put a label on a particular process, a persistent process that occurs every day in our lives. But since we are unable to find any grassroots properties that are common to each time we know, it may be fruitless to define such a word. For example, I feel a certain way when I see my red curtain and think, 'this is red', and I feel a certain way when I think, 'it is blue'. I can't experiment with these two different thoughts, because I don't have magic experimental tools and the thoughts are not like material objects in front of me. If I could, I would dissect them with a scalpel, or boil them in a beaker to find their boiling point, or bounce alpha radiation off of them, to try to understand what the differences between the two are.

Warren Olson

There is of course the old argument here that people used to KNOW the world was flat!
My interest in the subject focuses on how epistemology differs between cultures.
I found that a project quoted in Nisbett's Geography of Thought (and no doubt elsewhere) was very enlightening: it involved placing Asian and then US students in front of an aquarium.
Around 90% of the Asian contingent commented on the overall picture, range of colours, etc., whereas a similar percentage of the US students commented on the one especially large fish in the tank.

Sorry if I have strayed a little from the subject, but any comments/similar research along these lines would be much appreciated.

Jennifer Nagel

Wes, thanks for the paper.

Warren, here's another line on the Nisbett program: http://languagelog.ldc.upenn.edu/nll/?p=478

Allan Hazlett

Very cool experiments. Experiment 1 (from the paper) is probably the most interesting, for my part, because it involves an alleged non-factive use of "knows." The other experiments don't, or at least don't obviously, involve uses of "knows" that look non-factive. (I don't think we can expect to get intuitive non-factive uses by taking an intuitively factive use of "knows" and appending to the story an ending that says the proposition known was false, as in the Crab cases and the FBI cases in the paper.)

I'm not sure that all this ends up supporting the truth condition on knowledge. It seems consistent with the idea -- this is what I want to say in the new paper -- that when "A" is factive, and when "S A's p" factively presupposes p, it isn't because "S A's p" entails p. We can get factive and non-factive uses of "learns" (so "learns" can't entail the truth of its complement), yet the factive uses still presuppose it; this means that factive presupposition isn't in general down to entailment. (If the connection between "S knows p" and p's truth is a matter of factive presupposition, then we don't, so I argue, have support from that connection for the truth condition.)

It would be really interesting (for epistemology) to see how people respond with some classic factives like "perceives," "remembers," and "sees." My hunch is that these are going to fall in with "learns" and "realizes."

Given the option of (A) "Everyone thought they knew" and (B) "Everyone really did know," the correct answer is obviously (A). (Experiment may confirm that, but one could have seen it coming.) It's interesting to think about what people would say in response to an open-ended question, like "In the story, did people know that stress caused ulcers?" Given the contrast between "really knowing" and "merely thinking they knew," the answer is obvious. Given the (A)/(B) choice, the contrast for "really" is made explicit. There are other possibilities, though. As in:

(1) Did Zinn really learn that World War II made the world safe for democracy? Or did he just use that as an example of educational corruption?

(2) Did Zinn really learn that World War II made the world safe for democracy? Or was his “learning” really ideological misinformation?

Question (1) is a question about Zinn's personal history; question (2) is a question about global military history. The answer to (2) is: Zinn didn't really learn. The answer to (1), we can imagine, is: Yes, Zinn really did learn that.

Upshot: it's not surprising that people say that people didn't really know stress caused ulcers, when the contrast for the "really" is such that the question, of whether people really knew, is a question about ulcers. The answer then is obvious: they didn't really know (i.e. stress doesn't cause ulcers). But it's also possible to imagine contexts in which the contrast for the "really" is such that the question is about people's mental state. E.g. "Did they really know that stress causes ulcers, or were they just pretending to think that, to get funding for their anti-stress drug?" I'm not sure what people's answer would be there, but the hunch is that you could get more people saying that they "really" knew (i.e. they weren't pretending), when the contrast is that they were pretending to believe.

Wesley Buckwalter

Hey Allan, thanks for these comments! I think you're totally right that nothing in these experiments claims to comment on the issue of presupposition and semantic entailment (so there's no evidence here against the hypothesis that uses of 'know' presuppose, rather than entail, that the propositional complement is true). That surely remains an alternative explanation of these data that the non-factive person could invoke. Though given the strong revisionist aims here, the burden of proof seems to be on the supporter of the existence of a non-factive folk concept to (i) provide some solid evidence that holds up experimentally that non-philosophers (well, in my studies English-speaking, 35 y/o Americans, at least!) have a concept of knowledge that is non-factive (say, in the face of protagonist projection concerns or whatever else), and then, once that is done, do the further thing of (ii) showing it's at the level of presupposition. It's not clear that there's been sufficient empirical evidence for these claims.

The latter thing seems more difficult to test, though at least one quick thing we could try is to see whether it’s possible for someone to know propositions that couldn’t possibly be true.

About the contrast issue, I'm pretty interested to know the effect different answer choices in the DVs might be having on people's responses here. So I'll try going ahead and using qualitative measures like you suggest (this has also been suggested to me by Eric Schwitzgebel). My prediction FWIW is that we'll still get highly projectionist answers. In the paper (and above in response to Eddy), I also mention one follow-up experiment that uses slightly different choices ('Which do you think best describes Agent Smith?'…'Agent Smith thought he Φ the bomb was homemade' or 'Agent Smith Φ that the bomb was homemade'). The result of that study was that people ended up giving predominantly non-factive answers for 'believes' across true/false complements, as in the previous studies. So that's at least some evidence that the contrast, as far as using 'really' is concerned, has some effect here (though this particular one at least didn't seem to be something that was affecting people's projectionist judgments about 'know').

About one of your specific examples, I think people would tend to say that everyone 'really knew' (i.e. they weren't pretending) more than 'they were pretending' about the ulcers, simply for pragmatic reasons (i.e. it's really clear that whatever they were doing they weren't pretending). Just for fun, I'll mention that if you watch Marshall's Nobel talk on this (I actually heard some of this talk at OSU last year) you get a sense of just how genuinely convinced the medical community was that they were right, and ironically, how horrified Marshall would be if he read that people think that that's what counts as knowledge!

harvey brockman

Hi Wesley,

A few interesting phrases from sentences at the end of the first paragraph of your post, and then at the beginning of the second: "he was absolutely right"; "we now know"; "did the doctors working before Marshall really know" -- How do you mean the "really" to be functioning in this latter sentence? If its function is to mark a kind of historically-developing CERTAINTY (regarding, in this case, a causal explanation of peptic ulcer), would it be your suggestion that we say: "In the pre-Marshall past, doctors 'merely knew' (or just 'knew') that stress was the cause of peptic ulcer, whereas today we 'really know' the cause to be H. pylori"? And with "he was absolutely right", is it your further suggestion that Marshall's explanation has "passed over" (maybe there's a better phrase for this!) from the "empirically-true" (so potentially revisable) to the a priori "logically-true" realm (that is, not just that the Marshall explanation is unlikely to be overturned, but that it makes no sense to talk about its "being overturned")?
Cheers, hb

