How big is Big Data?

In Uncategorized on November 24, 2014 by sdobney

At a seminar on Big Data in Barcelona this week, a consultant from Ernst and Young (EY) Consulting surprised me by suggesting that 43% of Big Data projects involve fewer than 10,000 records, and that 71% involve fewer than 1 million records. This was a surprise, as my assumption had been that Big Data really meant multi-million-record projects.

Handling fewer than 30,000 records is something that can almost be done by hand, and is fairly basic, established database analysis (we often prep customer lists of this size, with some analysis, prior to sampling) – looking to find segments, value-based analysis and outcome-based modelling.

Larger than this and much more automation is required to prep the data, and certainly once you get into the millions of records, much more horsepower is needed for the analysis. But I know of analysis houses that were analysing a million-plus records in the late 1990s, so even one million records seems far from the Big Data that gets talked about.

Of course, one difference between now and the late 1990s is that the depth of information per customer has grown enormously. Customer journeys can be tracked, as can purchase histories and other forms of communication, which gives a much richer picture of what is helping win sales, as well as what’s not working. Propensity models can be built using potentially hundreds of variables, and there is an expectation that, given all this data, we must be able to predict and therefore influence customer behaviour – for instance through micro-segmentation and experience customisation.
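As a rough illustration of what such a propensity model looks like in practice, here is a minimal sketch using scikit-learn. The customer features, coefficients and outcome are invented placeholders – a real model would draw on the hundreds of tracked variables described above – but the shape of the exercise is the same: fit a classifier on past behaviour, then score each customer with a purchase probability.

```python
# A minimal propensity-model sketch (hypothetical data and features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic customer-journey features: site visits, days since last
# purchase, marketing emails opened, past spend.
X = np.column_stack([
    rng.poisson(5, n),
    rng.integers(1, 365, n),
    rng.poisson(2, n),
    rng.gamma(2.0, 50.0, n),
])

# Synthetic purchase outcome, loosely driven by the features.
logit = 0.3 * X[:, 0] - 0.01 * X[:, 1] + 0.4 * X[:, 2] + 0.005 * X[:, 3] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Propensity score = predicted probability of purchase per customer.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```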

However, another way of looking at Big Data is that it gives businesses a very fine test bed for running experiments. Modelling says what customers did in the past; an experiment tests how they respond to a change – in other words, the experiment is forward-looking and the modelling backward-looking. With Big Data, it becomes possible to run hundreds of small experiments to test what works before rolling the successful tests out more widely. This test-and-observe approach lends itself to algorithmic marketing – that is, using algorithms and automated learning in a marketing system to optimise a combination of marketing elements (e.g. colour, text, images, timing, format, price, discount) across a customer base.
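To make the test-and-observe idea concrete, here is a minimal epsilon-greedy sketch of the algorithmic-marketing loop. Each variant (a combination of colour, copy, price and so on) is an “arm”; most traffic goes to the best-performing variant while a small fraction keeps exploring. The variant names and response rates are invented for illustration.

```python
# Epsilon-greedy test-and-observe sketch (hypothetical variants/rates).
import random

variants = ["red/short-copy", "blue/long-copy", "green/discount"]
true_rates = [0.020, 0.035, 0.025]  # unknown to the algorithm
epsilon = 0.1                       # share of traffic used to explore

shown = [0] * len(variants)
converted = [0] * len(variants)

def observed_rate(i):
    return converted[i] / shown[i] if shown[i] else 0.0

random.seed(1)
for _ in range(100_000):
    if random.random() < epsilon:
        arm = random.randrange(len(variants))               # explore
    else:
        arm = max(range(len(variants)), key=observed_rate)  # exploit
    shown[arm] += 1
    if random.random() < true_rates[arm]:                   # simulated response
        converted[arm] += 1

for name, s, c in zip(variants, shown, converted):
    print(f"{name}: shown {s}, conversion {c / s:.3%}")
```

Run long enough, the loop concentrates traffic on the best variant while still noticing if another one starts to outperform – the automated version of running hundreds of small experiments.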

The risk with all these Big Data approaches, though, is that the data takes the place of the customer in corporate thinking. The customer has views, opinions, needs and desires, and these are rarely captured in data. The customer also responds to a human message and human-to-human communication. Some of the biggest, most data-literate companies in the world have seen sales and marketing setbacks despite all the data they hold, partly because they looked at the data rather than listening to the customer.

And sometimes the systems don’t work as planned, and end up annoying the customer and breaking a chain of trust. For example, Movistar, a large telecoms company, has my email address on its lists and started sending me offers with aggressive pricing. However, I’m already a Movistar customer, and the prices they were offering were substantially better than what I was paying. Obviously I’d want to switch to the lower tariff. Ah – but the new tariff is for new customers only, so I can’t switch. So before long I won’t be a customer of Movistar at all. Big Data, but poorly used.
