Experimenter bias is the well-established tendency, throughout the behavioral sciences, for experimenters' own expectations to influence the results they obtain. Basically, there is a pretty powerful effect whereby people tend to find whatever they expected to find.
One really beautiful example comes from the work of Rosenthal and Fode. They took students from a psychology course and asked them to serve as experimenters in a study they were running on how rats navigate mazes. Some of the students were told that their rats were 'maze-bright'; others were told that their rats were 'maze-dull.' (The researchers cooked up some story about how the rats actually differed genetically.) All of the students were then asked to run an experiment on how well their rats navigated the maze, and when the results came back... it was clear that experimenter bias had struck again. The students who were told that their rats would be maze-bright actually reported faster maze-running times.
Now, in experimental philosophy, we don't usually interact one-on-one with our participants, so one might think that we would be immune from this kind of problem.
Not so fast. A new paper by psychologists Brent Strickland and Aysu Suben argues that the problem of experimenter bias can arise even in experimental philosophy. The difficulty, they suggest, lies not in the way we interact with our participants but rather in the way we construct our vignettes in the first place.
To illustrate this problem, Strickland and Suben ran a kind of meta-study. Participants were told that they were going to be helping out in the process of designing an experiment. But now comes the trick: Different participants got different information about what the hypothesis was. (Some were told that the hypothesis was that people see group agents as having non-feeling states but not feeling states; others were told that the hypothesis was that people see group agents as having feeling states but not non-feeling states.) Then all of the participants had a chance to come up with little vignettes that would be used in the study.
Once all of the experiments were designed, Strickland and Suben actually went ahead and ran them all. Then they did an analysis looking at the difference between the results of the different experiments. Here is how it came out:
As you can see, the experimenter bias effect was pretty pronounced. People who started out with different expectations ended up designing experiments that generated very different results (even when the experiments themselves were run in a completely fair and unbiased way).
I definitely see the problem here, but I'm not quite sure what we could do about it. Any thoughts?