Conducting our research
Managing research difficulties and experimenter bias
Prerequisites
Watch this video for a simple explanation of the difficulties of conducting psychological research.
Watch this video to help you understand experimenter bias.
Difficulties with research
Complexity: We collect data on a very complex human behavior: language.
Variation within individuals: The same person can give us a different measurement from one time to another. If we ask them to rate how grammatical a sentence is, and then ask them to rate the same sentence 30 trials later, they might provide a different response.
Variation between individuals: The way one person responds to something might be different from the way another person responds to the exact same thing. For example, in a reaction time task, children might respond significantly more slowly than adults, and even within a group, every child might have a different baseline response speed. In rating scale studies, each person might differ in the way they use the rating scale: one person might never rate anything lower than a 3, while another is happy to rate lots of things a 1.
Measuring changes people: Demand characteristics are aspects of experiments that influence how participants respond. In our experiments, we usually sit in the room with participants, which might make them feel pressure to perform or be “correct”; or they might try to read our reactions and adjust their responses based on what they think we’re thinking. In rating scale studies, we know participants make assumptions (for example, that there are the same number of correct and incorrect answers on the “test”), and these assumptions can alter the responses they make.
These problems are always there, but we can be aware of them and adjust our research studies to try to reduce their influence and get more accurate data. In our lab we do this by:
Measuring an individual’s response to a given stimulus multiple times, then taking the average of those responses.
Normalizing data after it is collected, before comparing across individuals. For example, instead of comparing participants’ raw reaction times, we normalize each participant’s reaction time data (using z-scores or similar methods) and then compare the normalized reaction times. A brief sketch of both of these steps appears after this list.
Being aware that the way we frame the experiment might change the way participants perform the task, so we keep the framing of the experiment the same for every individual participant.
Being aware that sitting in the room with the participant can influence the way they respond in the experiment, so we ensure that we keep this consistent across individuals and groups (e.g., if we sit in the room with kids, we should also sit in the room with adults).
Being aware that any changes we make to the experiment can have an influence on the data we collect. Before implementing any changes - no matter how small they seem - we run them by the entire research team.
Running multiple conditions of an experiment in exactly the same way, under exactly the same circumstances. This allows us to ensure that any changes we observe across conditions are likely due to the variable we are manipulating (and not the demand characteristics of the task).
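To make the first two points in the list above concrete, here is a minimal sketch of within-participant averaging and z-score normalization. The participant IDs, reaction times, and structure are invented for illustration; this is not our actual analysis pipeline, just the basic idea of putting each person’s responses on their own baseline before comparing across people.

```python
# Minimal sketch (illustrative only): averaging repeated measurements and
# z-score normalizing reaction times within each participant.
from statistics import mean, stdev

# Hypothetical raw reaction times (in ms) from repeated trials per participant
raw_rts = {
    "P01": [512, 498, 530, 641, 505],   # e.g., an adult with a fast baseline
    "P02": [902, 874, 910, 995, 880],   # e.g., a child with a slower baseline
}

# Step 1: average each individual's repeated responses to reduce trial-to-trial noise
mean_rt = {pid: mean(rts) for pid, rts in raw_rts.items()}

# Step 2: z-score each participant's data against their own mean and SD,
# so people with very different baselines can be compared on the same scale
def z_score(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

normalized = {pid: z_score(rts) for pid, rts in raw_rts.items()}

for pid in raw_rts:
    print(pid, round(mean_rt[pid]), [round(z, 2) for z in normalized[pid]])
```

Once reaction times are expressed as z-scores, a value of +1 means “one standard deviation slower than that person’s own average,” regardless of whether the person is a fast adult or a slower child.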
Experimenter Bias
There are a number of ways in which our experiments can be influenced by experimenter bias. For example:
We sit in the room with the participant. Just like with Clever Hans, if the experimenter knows what the participant should be doing (e.g., what results we are hypothesizing), she might be giving off some unconscious cues to the participant. Importantly, the experimenter may not even realize she is doing this.
We provide feedback to the participant. Because our participants are responding to questions or making decisions right in front of us, they often look to us to tell them whether or not they are correct. The experimenter might, without realizing it, offer feedback differently for responses they feel are “correct” and those they feel are “incorrect”.
We sometimes know what condition the participant is in. Sometimes, even though we try to avoid this, it is not possible for the experimenter to be “blind” to the condition the participant is in. Just like the “bright” and “dull” rats, the experimenter’s expectations about what will happen in each condition could cause them to (1) judge the child’s responses differently or (2) unknowingly send signals that influence what the participant does. The same is true for our transcribers and coders - their knowledge about the experiment’s hypotheses could influence the way they score the data.
What lab systems do we have to protect against bias and confounds?
To try to protect us from experimenter bias, we employ a number of lab systems whenever we can:
Randomization of subject assignment. We try to assign each participant to an experimental condition randomly, so you as the experimenter are not aware of the condition (you are “blind”). One way to set this up is sketched after this list.
Experimenters are kept as blind as possible. Even when we can’t fully randomize, we try to keep the experimenter as blind as possible to the language or pattern the participant is being exposed to. For example, we have the participant wear headphones so the experimenter cannot hear (or at least cannot hear clearly) what the stimuli are.
Consistent feedback. When verbal feedback is provided, we require experimenters to deliver the same verbal feedback on every trial. To double check this is the case, we have transcribers and coders explicitly listen and code for feedback so we can analyze whether or not differential feedback was given.
We instruct the experimenter to interfere as little as possible. Even though the experimenter is often in the room, we ask them to deliver the instructions and feedback as written in the protocol, and otherwise to interfere with the experiment as little as possible to reduce the chances of unintentional bias.
We have an explicit protocol to follow. For each experiment, we have an explicit protocol to follow to ensure that each participant is run in exactly the same way. We have checks in place so a supervisor can determine whether or not a protocol was followed.
Transcribers and coders are blind to condition. When possible, we keep our transcribers and coders blind to the experimental condition as well.
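As a concrete illustration of the randomization point above, here is a minimal sketch of pre-generating a counterbalanced, shuffled assignment list before testing begins, so the person running a session never chooses a condition themselves. The condition labels, sample size, and file name are assumptions for illustration, not our actual setup.

```python
# Minimal sketch (illustrative only): blind, counterbalanced condition assignment.
import csv
import random

conditions = ["A", "B"]   # hypothetical condition labels
n_participants = 40       # hypothetical sample size (a multiple of the number of conditions)

# Counterbalance (equal numbers per condition), then shuffle the order
assignments = conditions * (n_participants // len(conditions))
random.shuffle(assignments)

# Write the list out once, before any testing, so the experimenter simply runs
# "the next participant ID on the list" and never picks a condition themselves.
with open("condition_assignments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant_id", "condition"])
    for i, condition in enumerate(assignments, start=1):
        writer.writerow([f"P{i:02d}", condition])
```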
What are the consequences of experimenter bias in an experiment?
If we find that experimenter bias has influenced the data we’ve collected, there are a few things we will have to do:
First, we will have to re-evaluate all the data, resulting in substantial work for everyone on the research team (e.g., transcribers and coders may have to re-transcribe and re-code all of the data).
We might decide we have no choice but to exclude some or all of the data collected by an experimenter. This means we would have to collect new, unbiased data to replace it.
In extreme cases, if the bias is discovered after a paper has already been published, we may need to retract a published research paper.
In short: tell someone if you are at all concerned that data being collected or coded in our lab might be biased in some way. We are a team, and we want to help each other protect against these things. You will never get in trouble for pointing out a suspected bias, nor will the experimenter who may have introduced it. We are all human and we make mistakes; the goal is to catch mistakes quickly so we can fix the issue as soon as possible.