
Consumers aren’t computers

In customer insight, market research, market research fieldwork, questionnaire design on October 31, 2013 by sdobney

Looking through some data for an academic survey into American citizens' beliefs in conspiracy theories the other day, it was striking that the academic in question had failed to screen out some odd or strange answers, including some deep inconsistencies and some very dubious key-pressing patterns. With everything online, it's easy to forget that humans provide the answers.
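To make the idea of screening concrete, here is a minimal sketch (not the cleaning used in the study above) of how speeders and straight-liners might be flagged in survey data; the column names and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical data: one row per respondent, q1-q5 are 1-5 agree-disagree
# answers, duration_secs is the time taken to complete the survey.
df = pd.DataFrame({
    "q1": [4, 3, 3, 3],
    "q2": [5, 3, 2, 4],
    "q3": [4, 3, 5, 2],
    "q4": [2, 3, 4, 5],
    "q5": [4, 3, 1, 4],
    "duration_secs": [420, 95, 400, 380],
})
item_cols = ["q1", "q2", "q3", "q4", "q5"]

# Flag speeders: anyone finishing in well under the median completion time.
df["speeder"] = df["duration_secs"] < df["duration_secs"].median() / 3

# Flag straight-liners: the same answer given to every statement
# (zero variance across the bank).
df["straight_liner"] = df[item_cols].std(axis=1) == 0

# Flag rather than silently drop, so the effect of screening can be reported.
print(df[df["speeder"] | df["straight_liner"]])
```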

More surprisingly, the answers had been processed through SEM (Structural Equation Modelling) to produce relationships between different survey factors, without taking into account the possibility of erroneous consumer replies. And it didn't help that the SEM was based on a giant 40-item bank of agree-disagree statements with no 'don't know' option, and that the data was only shown as mean scores.

The first problem with this approach is that consumers aren't computers. A proportion of respondents misread questions, mis-punch answers or speed through, and they might not be as conscientious about completing the survey as the person who designed the study, particularly if they are getting bored or distracted.

And it's important to realise that a general population study should give all types of people the opportunity to answer the survey – that is, after all, the purpose of a random sample. So a survey designed by a university professor, with all his or her eloquence and verbal dexterity, will contain questions that will be answered by at least some people who left school without qualifications, or for whom English is not their first language. Unless great care is taken with the language used, some of that linguistic sophistication will fly over the heads of some respondents.

To give this added context, at present I'm learning a foreign language, and in the classroom, where we're tested on comprehension, it's surprising how easy it is to make mistakes. Small words like 'still', 'just' and 'always', complex negatives, the use of pronouns or strange word order easily trip learners up, and they can just as easily trip up people rushing through a survey.

One solution is to pilot well before a study is launched. But even in the best studies we have to anticipate some element of respondent error. We used to estimate that it is about 3-4% in a UK consumer study and possibly as high as 6-8% in the US, perhaps due to the higher proportion of non-native English speakers.

And obviously we want to capture what people really think, do and feel, and most respondents genuinely want to give us real information. For this reason, it becomes important that large attitude banks are split up to make them easier to answer. Some logic checks can be built in to catch errors and inconsistencies – good survey software will save both the error and the correction. We always use a generic open end at the end of the survey to capture any issues, and where possible we allow individuals to add comments to questions where something is less well understood. If something is critical, we might ask it twice.
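As a rough illustration of the kind of logic check meant here, the sketch below flags a contradiction between two related questions; the questions and data are made up, and real survey software would apply such rules during the interview rather than afterwards.

```python
import pandas as pd

# Hypothetical example: a screener question and a follow-up that should agree.
responses = pd.DataFrame({
    "resp_id": [1, 2, 3],
    "owns_car": ["yes", "no", "no"],
    "miles_driven_per_week": [120, 0, 45],  # respondent 3 contradicts themselves
})

# Flag, rather than discard, contradictory answers so both the error and any
# later correction can be kept.
responses["inconsistent"] = (
    (responses["owns_car"] == "no") & (responses["miles_driven_per_week"] > 0)
)
print(responses[responses["inconsistent"]])
```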

Generally I'm also skeptical of any survey that has an over-reliance on attitude scale questions. Work by Ehrenberg showed that although scores are stable across the sample and can be used for market-wide longitudinal measurement, the same person asked to repeat the same attitude bank some days later will often give different answers – not necessarily by a lot, but 'strongly' moves to 'slightly' and vice versa, and some views will flip-flop.

It's also the case that the use of scale points differs by respondent and by culture. For instance, scale points sometimes aren't used as equidistant points by respondents – e.g. 8, 9 and 10 on a 10-point scale are used differently to 3, 4 and 5. And on an agree-disagree scale, one respondent might tend to be 'strongly' on everything, while the next is 'slightly' on everything.
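One common way of adjusting for these differences in scale use (not something the study above did, so treat it as an assumption about practice) is to centre each respondent's ratings on their own mean, so a score expresses how an item stands relative to that person's typical answer:

```python
import pandas as pd

# Hypothetical ratings on a 1-5 agree-disagree scale: respondent A leans
# 'strongly' on everything, respondent B leans 'slightly'.
ratings = pd.DataFrame(
    {"item1": [5, 3], "item2": [5, 4], "item3": [4, 3]},
    index=["A", "B"],
)

# Subtract each respondent's own mean from their ratings (row-wise centring).
centred = ratings.sub(ratings.mean(axis=1), axis=0)
print(centred)
```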

The reporting of mean scores on scale banks is also problematic because it can be difficult for marketing managers to interpret and digest. The numbers also play tricks. On a 1-to-10 scale, for instance, the halfway point is 5.5. On a five-point scale from 1 to 5, it is easier to get from 3.0 to 3.1 than it is to get from 4.5 to 4.6. If you do want to report mean scores, then for those with lesser maths skills it becomes easier to interpret if the mean score is transformed to a scale from 0 to 100, rather than trying to play games with decimal points.
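A simple version of that transformation, assuming a 1-to-5 scale, is a straight linear rescaling of the mean onto a 0-100 range:

```python
def mean_to_100(mean_score, scale_min=1, scale_max=5):
    """Linearly rescale a mean score from its original scale to 0-100."""
    return (mean_score - scale_min) / (scale_max - scale_min) * 100

# e.g. a mean of 3.0 on a 1-5 scale becomes 50, and 4.6 becomes 90.
print(mean_to_100(3.0), mean_to_100(4.6))
```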

The mean score is also an artificial number. It doesn't show people – how many think x, how many will do y. In fact, the final failing of an attitude bank is that it is often not a very good predictor of behaviour.
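A people-based summary, such as a frequency count or a 'top-two-box' share, is one way to report how many think x rather than an abstract average; the numbers below are made up purely for illustration.

```python
import pandas as pd

# Hypothetical answers to a single 1-5 agreement statement.
answers = pd.Series([5, 4, 4, 2, 1, 5, 3, 4, 2, 5])

# Show people, not just an average: how many gave each answer,
# and what share 'agree' (top two boxes, i.e. 4 or 5).
print(answers.value_counts().sort_index())
top_two_box = (answers >= 4).mean() * 100
print(f"{top_two_box:.0f}% agree (top-two-box)")
```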

So despite serious academic use in a serious paper, it rather demonstrates that it can be too easy to jump into complex statistics without really thinking about the people involved and why their responses were what they were.

