The Dirty Little Secret of Research Bias

Most marketing researchers would deny that any meaningful bias is introduced into the methodologies underlying the results the client receives. A common researcher response is, “How could there be any validity to our information if there were bias? How could we stay in business if we did not deliver the truth?” These statements refer, of course, to the normal way commercial market research is solicited, proposed, and conducted.

The standard research practice starts with interviewing potential research vendors and describing the behavior under study, any hypotheses about that behavior, what the report should look like, and any cost and timing constraints. Each vendor proposal is then closely scrutinized to determine whether the research design would allow the hypotheses to be confirmed or denied and whether the cost and timing constraints are met. When these criteria are satisfied, the vendor’s proposal is approved and the study begins. The vendor then writes the specific respondent screening requirements and the qualitative discussion guide or quantitative questionnaire and submits them for client approval. Again, the client evaluates these materials against one criterion: “Are the ‘right’ questions being asked of the ‘right’ respondents to tell us whether our hypotheses are ‘right’?” In fairness to most clients, once the proposal is approved, these reviews are the only ways to ensure that the needs of marketing management are being met by the vendor they helped select.

In other words, the client researcher, the research vendor, and marketing management have now agreed to look for the same set of results with the methodology most likely to obtain them. Is this a biasing research process? Isn’t the purpose of research to improve the effectiveness of marketing? Isn’t getting smarter how we do this? Isn’t that the goal of every researcher?

The answer to these questions lies in how the client behaves when unexpected results are delivered. Based on my nearly fifty years of experience, an unexpected result is more likely to produce an indictment of the vendor than an attaboy. The client researcher often shares the blame, charged with “insufficient vendor oversight.”

This suggests that the value of a study is highest when it confirms a conclusion reached before the results are delivered and lowest when it does not. That is the definition of confirmation bias in behavioral economics. It implies that the results of a study are “shaped” in advance to fit an expected outcome or set of outcomes so that maximum utility or value is obtained. If this is so, how does this shaping, or bias, occur? There are at least five sources for which we have evidence, as follows:

Direct Influence — Subtle hints such as “This is what we think is happening,” “We believe this to be the real issue,” “A lot of firms haven’t ‘worked out’ for us,” and “We all need to be in ‘alignment’” are examples we have heard. They are not-too-subtle variants of the direct message to deliver what is expected.

Respondent Qualifications — Client insistence in the form of “We always screen on this characteristic,” “Only these are our key targets,” and “This is who we always talk to” is a stratagem we have heard, designed to deliver confirmation of an expected outcome.

Question/Topic Content and Order — Using material that respondents could themselves adopt as the basis of the results to be confirmed is a sure way to make confirmation happen. For example, client suggestions such as “This is how we always ask this” and “These questions are the ones we always use” are dead giveaways that the client wants confirmation of something they already have in mind, thereby reducing the chance of an unexpected “surprise” result.

Analysis Regimen — Specifying the analytic tools is another indication that the results are being “shaped.” For example, we have heard, “Why don’t you apply regression here,” “Tests are what we usually use,” and “Demographics have never ‘worked’ for us.” Each of these is an attempt to narrow the range of possible results so that the likelihood of an unexpected result is minimized.

Limited Audience — Keeping the vendor away from those who will ultimately make decisions based on the data is the last resort for those who wish to “game” the study result. Feedback such as “There is no need for you (the vendor) to present” and “We have a special format for communicating results” keeps the vendor from “misspeaking” and suggesting an unexpected result.

If you would like to identify the bias-free and truly predictive variables that control demand for your product or service, contact us.

Tim Gohmann, Ph.D., Co-founder and Chief Science Officer
805.405.5420 | tim@behavioralsciencelab.com

BEHAVIORAL SCIENCE LAB, LLC 500 WEST SECOND STREET, 19TH FLOOR, AUSTIN, TEXAS 78701 USA www.behavioralsciencelab.com
