
Musings from Sawtooth/Skim European Conjoint Conference

In analysis, conjoint analysis, market research, pricing, product design, questionnaire design on April 15, 2013 by sdobney

The Sawtooth/Skim European Conjoint Conference took place in Berlin last week, with papers and advice on new and best practice in conjoint analysis, hierarchical Bayes analysis and related techniques. Rather than trying to follow up on any single paper, these are more musings about how conjoint is changing (or needs to change) in a world of big-sample studies and changes to research in general. For instance, I was struck by the relative simplification of conjoint studies in terms of attributes and levels as presented (though not at a statistical level), while more surveys are being run with extremely large sample sizes. Obviously there are moves to newer forms of conjoint (ACBC, menu-based etc). But that left me wondering if some radical changes to the way we use conjoint are coming/needed…

The main papers, from the large research houses, brought a lot of focus to brand-price studies or shelf-based display-type studies, where the aim is to understand price elasticity and pricing effects for a number of SKUs (stock keeping units) in a category in order to optimise the bundle of products and prices in the market. As there is a great deal of retail audit-type data showing real behaviour, these conjoint models can be calibrated and tested against real purchasing behaviour. Consequently they require a great deal of statistical expertise in their design and in the subsequent analysis, but at an attribute level they are relatively simple. The core questions are often how to get more SKUs into the design and how to optimise across the category (eg TURF analysis), rather than how to look at more attributes or more interactions between attributes.

The relative simplicity of these designs – essentially a product attribute paired with a price attribute – is in contrast to some of the concerns and issues around conjoint from a decade or more ago, where the focus was more on how to manage conjoint with more attributes, more levels and more complicated arrangements of options. Obviously a conference isn't necessarily the best place to try to pick out market trends, but it seems that while more businesses are doing conjoint-based studies, the studies themselves, from what people were discussing, are becoming relatively simple in terms of attribute and level design. Menu-based designs may be a different direction, but I'm not sure how mainstream these are.

Some of the reason for the general simplification comes down to the design challenge for large attribute studies. There is a relatively large cost in time and effort to develop a good set of conjoint attributes and levels for complex products. Larger studies are also more complex to field and can require longer questionnaires, or more complex strategies such as partial-profile conjoints. Instead, it seems that these larger-scale conjoint designs are being replaced by max-diff type exercises, where the list of options is longer but no longer presented as a full profile-based trade-off. Given that ten or fifteen years ago we would spend a great deal of time finding and designing attributes and levels, it left me wondering if conjoint design is becoming relatively formulaic – for example, some pharma conjoint research boils down to the trade-off between efficacy, safety and dosing. However, this streamlining of the design runs against the potential benefit of the second trend covered at the conference – increasing sample sizes.

Whereas much conjoint work is done with regular sample sizes of a few hundred to a couple of thousand responses, large consumer-level studies were mentioned as running to 10,000 or 30,000 or even 1 million respondents. Large sample sizes are in vogue because the cost of fieldwork has come down dramatically with the advent of internet and mobile surveys. But as old research hands will point out, in terms of pure accuracy there is a diminishing return with size – the sample needs to be quadrupled to double the accuracy. So if a sample of 1,000 is accurate to, say, 4% for a particular measure, a sample of 4,000 would be accurate to 2% and a sample of 16,000 to 1% (though with other design considerations, such as how well a question reflects reality, the survey's accuracy may be limited by other things).
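As a rough illustration of that scaling (my own sketch using the standard margin-of-error formula for a proportion around 50%, so the exact percentages differ slightly from the rounded figures above):

import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p at sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 4000, 16000):
    print(f"n={n:>6}: +/- {margin_of_error(n):.1%}")

# Quadrupling the sample roughly halves the margin of error:
# n=  1000: +/- 3.1%
# n=  4000: +/- 1.5%
# n= 16000: +/- 0.8%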

So why would you want large sample sizes? There are really two big reasons. The first is the ability to drill down to smaller and more pertinent subgroups. In general terms (for discussion at least), a sample of 1,000 would split into about 5-10 subgroups for analysis – eg 10 regions, 5-7 age-bands or 10 purchase-volume deciles – each with say 100 respondents, enough to identify differences between groups. Obviously with larger sample sizes finer distinctions can be made between groups, or more groups can be compared.

As marketing becomes much more multi-media in nature, or more locally based through things like events or the use of specialist TV or internet channels, these larger sample sizes provide greater resolution for investigating the marketing effects. In pre- and post-testing with 1,000 respondents per wave, that might mean a significant change being measurable at +4% (say). If the population in question is 50 million people, though, this 4% represents 2 million people. In other words, to show a significant difference the marketing would have to be sufficiently powerful to affect at least 2 million people before it becomes measurable with the sample size used.
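A back-of-envelope version of that arithmetic (again my own sketch, assuming a simple two-wave comparison of proportions with 1,000 respondents per wave and a population of 50 million):

import math

def min_detectable_people(population, n_per_wave, p=0.5, z=1.96):
    """Rough count of people a campaign must shift before a pre/post
    difference of proportions becomes detectable at the 95% level."""
    detectable_shift = z * math.sqrt(2 * p * (1 - p) / n_per_wave)
    return population * detectable_shift

print(f"{min_detectable_people(50_000_000, 1_000):,.0f}")
# roughly 2.2 million people – the same ballpark as the 2 million above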

Large sample sizes, however, give greater resolution. They enable the effects of smaller, more tightly defined and less general campaigns to be isolated and identified. For instance, in a multimedia campaign, rather than just investigating the overall effect it may be possible to analyse people who have received poster+radio versus people who have received radio alone, or to look at more focused marketing such as event sponsorship, where audience levels are potentially too small to be picked up in a 1,000-person sample.

At the conference the example was given of identifying small groups of buyers of particular brands and models, so that their particular needs can be identified and analysed without getting lost in the wash of the overall average, or simply being impossible to resolve within a small sample.

However, there is a downside. In a sample of 1,000 we might expect to have 10 subgroups to analyse (or several lots of 5-10 subgroups); we can cut, cross-tab and correlate within each of those 10 groups and analytically it's relatively straightforward. But what happens when we have a sample size of 30,000? Using the same 10 subgroups per thousand, we potentially have the sample resolution to investigate 300 groups for cross-analysis and checking. How do we extract meaning from that level of cross-comparison? And how do we deal with the problem that, at a 95% confidence level, we would expect 15 (5%) of those 300 groups to look different just by chance?
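To put numbers on that multiple-comparison problem (a simple sketch; the Bonferroni adjustment shown is just one common correction, not something proposed at the conference):

n_groups = 300   # subgroups being compared
alpha = 0.05     # corresponding to a 95% confidence level

# Expected number of subgroups that look 'different' purely by chance
print(f"Expected false positives: {n_groups * alpha:.0f}")   # 15

# A Bonferroni-style correction shrinks the per-test threshold accordingly
print(f"Bonferroni per-test alpha: {alpha / n_groups:.5f}")   # 0.00017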

It's not just analysis either. Typically a 1,000 sample would be cut into, say, 5-6 clusters for a segmentation. Businesses can manage 5-6 groups – each with a different product, execution, promotional effort, targets and returns. How would you manage selling or marketing to 300 different groups or 300 different clusters? Do you have 300 market sector managers, one to cope with each? This points to some big changes in how analysis is carried out and reported, and how the analysis is tied back to action, to make use of these levels of drill-down. Obviously there will be those in the direct marketing and web-analytics worlds who are very comfortable with automated, response-based marketing, but for indirect sellers the challenges are bigger. How do you make sense of this type of data? There have always been large-sample studies around – TGI or the National Readership Survey, for example – but no-one would ever consider trying to present or analyse the whole dataset in one go.

The second thought about large sample sizes is that for an overall project – ie where we're mainly interested in the broad overall picture – the large sample size itself isn't giving us that much, particularly for a simple conjoint study of the kind discussed above. Why go with a bigger sample size if the objective is just an overall picture or measurement?

The answer is that larger sample sizes enable us to run bigger research questionnaires and bigger conjoint projects – for instance including more products, but then using consideration sets (another paper at the conference) to reduce an individual's set of choices so as to avoid overwhelming them. Obviously we can't design a big questionnaire and expect everyone to answer hundreds and hundreds of questions, but we can spread those questions across the sample base so each individual gets something short and to the point, while using the benefits of the larger sample size to link answers together across the sample as a whole.

Which brings us back to questionnaire design. With larger sample sizes we can afford larger, more sophisticated survey designs which are then 'spread' among the available sample – that is, individuals answer a subset of the questions, but the overall survey is bigger. We could build the full picture back via conjoint-type techniques such as hierarchical Bayes. And/or we could use a concept we call 'non-linear questionnaires', where the respondent can answer as much or as little of a questionnaire as they feel is relevant.
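As a minimal sketch of what 'spreading' a questionnaire might look like in practice (hypothetical module names and sizes, my own illustration rather than anything presented at the conference), each respondent could receive a short core block plus a reproducible random pick of modules, with the full design only covered across the sample as a whole:

import random

CORE_QUESTIONS = ["usage", "spend", "overall_satisfaction"]
MODULES = {f"module_{i}": [f"m{i}_q{j}" for j in range(1, 6)] for i in range(1, 9)}

def assign_questionnaire(respondent_id, modules_per_person=2):
    """Return the core block plus the questions from a random subset of modules."""
    rng = random.Random(respondent_id)  # seeded so the allocation is repeatable
    chosen = rng.sample(sorted(MODULES), modules_per_person)
    return CORE_QUESTIONS + [q for m in chosen for q in MODULES[m]]

print(assign_questionnaire(101))
# core questions followed by the five questions from each of two randomly chosen modules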

To place this in a customer satisfaction setting (another paper at the conference): rather than capturing brand impressions from everyone for all brands – and so having the weight of non-buyers' or marginally interested respondents' assessments dominate the satisfaction scores, making them difficult to use for picking out detailed issues or problems – we have a questionnaire where people answer the questions they find relevant and pertinent. Within a large sample there would still be sufficient, and more accurate, coverage of the key measurements, avoiding the positivist bias in surveys (ie forcing or demanding an opinion even where there isn't one, or failing to distinguish people with strong opinions from those whose opinions are only formed because we asked).

To follow this through, we would have the possibility of much larger and much richer conjoint designs. This always raises the question of how big is too big, and what consumers or buyers can deal with. If we look at, say, shopping aggregator sites for products as diverse as houses, cars or insurance, our conjoint designs have relatively few attributes and only ask for a selection from a small set of products (3-5 for product design, 15-20 for brand-price work). An aggregator will often show 15+ attributes (more if the click for more info is included), in lists of 25, 50 or even 100 products, but the aggregator adds tools such as search and filter to help guide decisions – which would also give analytical information about choices. So there is an argument for much bigger and richer conjoint-based designs – but…

…and the 'but' is the design cost of a complex conjoint project. Defining the attributes and levels with care and attention to detail is time-consuming even for a mid-sized conjoint with 7-8 attributes. This suggests there is value in focusing attention on how far attributes and levels can be generated automatically, even if that means they are fuzzier and less easy to separate (eg on colour: one level cream, another magnolia). We could then look at ways in which respondents can identify similarities and differences and, with a big attribute and level set pulled, say, from the aggregators or from search terms, cluster or combine similar levels post-field to fit into the analysis, rather than over-agonising at the start.
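One possible post-field approach (a hypothetical sketch with made-up part-worth utilities, and only one of several ways the clustering could be done) is to collapse levels whose estimated utilities are nearly indistinguishable, so that cream and magnolia end up treated as a single level in the analysis:

# Hypothetical part-worth utilities for a colour attribute (illustrative values only)
level_utilities = {
    "cream": 0.42, "magnolia": 0.40, "off-white": 0.38,
    "navy blue": 0.05, "bright red": -0.15,
}

def merge_close_levels(utilities, tol=0.05):
    """Group levels whose utilities chain together within a small tolerance."""
    groups = []
    for level, u in sorted(utilities.items(), key=lambda kv: kv[1]):
        if groups and abs(u - groups[-1][-1][1]) <= tol:
            groups[-1].append((level, u))   # close enough to the previous level: merge
        else:
            groups.append([(level, u)])     # otherwise start a new group
    return [[name for name, _ in group] for group in groups]

print(merge_close_levels(level_utilities))
# [['bright red'], ['navy blue'], ['off-white', 'magnolia', 'cream']]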
