Positivist bias in surveys (and rotten questions)

In market research on February 25, 2013 by sdobney

Market research contains a subtle bias that is rarely mentioned. You will often hear talk of biased panels or biased questions, but rarely of this hidden bias. When someone gives us an opinion in answer to a question we have asked, we assume that opinion has some meaning or value to the respondent, when in fact it may concern something they really don't know about, have never thought about, or don't care about. The questionnaire thus has a positivist (not positive) bias: we make people answer questions about things they may not care about.

As more companies develop and run surveys, often using DIY survey tools, more people in the business are writing questions and questionnaires. Increasingly, we see questions written from the inside out: the company's internal view is reflected in the questions, rather than the questionnaire being written from the respondent's viewpoint. This introduces a hidden bias, because it assumes the respondent shares the same issues and viewpoint as the business, and therefore holds opinions about them.

When I was a junior researcher just starting out, the company I was working with was asked to carry out a survey on behalf of a paper-goods manufacturer. It was the era when environmental concerns were first becoming mainstream purchasing issues, and the company wanted to understand how issues like recycling and pollution would play out in the buying of kitchen towels.

Being diligent researchers, we constructed a questionnaire, deliberately overlong so it could be whittled down later, and then went out and piloted the survey. To properly understand people's reactions to the questionnaire, we conducted the pilot face-to-face, with an experienced professional market research interviewer asking the questions. All we had to do was watch, observe and make notes, and then ask follow-up questions at the end.

To make piloting worthwhile, it is best done with the typical kind of people who might be asked to take part when the survey goes live. So we were in among terraced streets in Sheffield. The second person who agreed to take part was a lady, a few years retired, very welcoming and willing to help. The interviewer started and I watched. The first question was a general warm-up of the form 'What are the most important issues we face today?', intended to get some spontaneous assessment of the importance of environmental issues. The second was the slightly more specific 'What environmental issues can you think of?' 'Environment? I don't understand what you mean by environment. I don't know anything about that.' Kerklunk.

Now, this was a pilot, so we could do something about it. But it is a classic example of positivist bias in surveys. This is not a bias towards positive answers (though that kind of yea-saying can happen too), but a bias towards believing that people have, or should have, opinions about things that are important to the company commissioning the research, and that those 'opinions' affect what people buy.

A second example from the same era was work we did for a major worldwide charity looking at why people donated. The charity wanted to include details of all its aid and campaigning programmes. Once again, piloting showed that the details that were important internally were not the factors driving donation.

When designing a questionnaire, there are obviously internal business needs and questions that must be addressed, but it is very easy to write the questionnaire as if the internal world view were also the world view of the buyers or consumers. It rarely is. Great care is needed to capture customers' views as the customer sees things, not as the internal product managers think.

The positivist bias leaks into surveys in subtle ways. It is common, for example, to use attitude batteries or image associations ('Which of these brands is...?'). A positivist bias forces the respondent to have an answer. Take 'Which of these brands is energetic?': you must choose an answer (you can choose none), but implicit is the assumption that you will have an opinion, even if the question has no value or importance to you and has never crossed your mind. Worse, some surveys use scales with no mid-point, so only a positive or negative answer is allowed: 'Is a company I can do business with' - Agree or Disagree. What if you've never tried?

There are research-practitioner reasons why you might do this. In particular, researchers don't like a stream of don't-knows or no-opinions: it leaves them nothing to hang their statistical analysis on, so areas like cluster analysis for segmentation become more problematic. But that shouldn't hide the fact that forcing someone to register an opinion on something they've never thought about is itself introducing a positivist bias. In particular, the views of those who do have an opinion get mixed up with those who don't, muddying rather than clarifying the data.
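The muddying effect is easy to show with a toy calculation (the ratings and the coding scheme below are invented purely for illustration). If don't-know respondents are forced onto the scale, they typically land on the mid-point and drag the average towards the middle; recording them separately keeps the base honest:

```python
# Hypothetical 1-5 agreement scale for a single attitude statement.
# 'raw' forces don't-knows onto the mid-point (3); 'responses' records
# them separately as None.
raw = [5, 5, 4, 3, 3, 3, 1, 2]
responses = [5, 5, 4, None, None, None, 1, 2]

def mean_score(scores):
    """Average only the respondents who actually expressed an opinion."""
    opinions = [s for s in scores if s is not None]
    return sum(opinions) / len(opinions)

forced_mean = sum(raw) / len(raw)   # 3.25: dragged towards the mid-point
honest_mean = mean_score(responses)  # 3.4: opinion-holders only, base of 5
```

The honest version also reports a smaller base size, which is itself a finding: three of eight respondents had no opinion to give.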

It would be better to redesign the questionnaire to include more things that are relevant to respondents, and also to give respondents the ability to choose what they answer. Which questions respondents choose to answer, and which they have opinions about, is part of what you are trying to research. There are techniques such as non-linear questionnaires, where what the respondent decides to answer, what they skip, and the order in which they answer are important parts of the research.
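As a sketch of the non-linear idea (the question IDs and answers are invented for illustration), the skips and the answering order become data in their own right, not just the answers:

```python
# Hypothetical non-linear questionnaire: the respondent is offered a set
# of questions and answers only the ones they choose, in their own order.
offered = ["q1_price", "q2_brand", "q3_advertising", "q4_recycling", "q5_packaging"]

answered = []  # (question_id, answer), in the order actually answered

def record(question_id, answer):
    """Log an answer as it happens, preserving the respondent's order."""
    answered.append((question_id, answer))

# This respondent picks three of the five offered questions:
record("q4_recycling", "Agree")
record("q1_price", "Very important")
record("q5_packaging", "Disagree")

# Skips and order are analysable outputs alongside the answers themselves.
answered_ids = {qid for qid, _ in answered}
skipped = [q for q in offered if q not in answered_ids]
answer_order = [qid for qid, _ in answered]
```

Here the respondent's choices tell you that recycling was top of mind while brand and advertising were not worth their attention, which is exactly the kind of signal a forced-answer grid destroys.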

