On the CBS show ‘SEAL Team’ an elite US Navy SEAL unit called ‘Team Bravo’ undertake dangerous missions in scary places taking down terrorists. Conveniently there seems to be one such mission every episode for the ‘Frogmen and Door-kickers’, with not too much hanging around. Think ‘Band of Brothers’ meets ‘Zero Dark Thirty’. Working hand in hand with the team are a military Field Intelligence Officer and a CIA agent. They don’t do much of the fighting, shooting and door-kicking, but they are on the ground with the team and provide Bravo with the intel they need to plan and execute the missions. This intel comes from the many sources they have at their disposal – databases of baddies, satellite reconnaissance, undercover operatives and lots more besides. They pull all this together under pressure, fast, in real time, and their intel invariably proves vital to the success of the mission, particularly when the unexpected happens and new plans and ‘exfils’ (getting the hell out) have to be created. They have Team Bravo’s back and there’s strong mutual respect and trust. They hang out together, go drinking together in between missions and – plot spoiler – one of the SEALs has a bit of a thing for the Field Intelligence Officer. They are close, tight.
And that’s where ‘Market Research’ is heading: not just supporting the operational teams from a distance, but integrated with them, in the fight.
Here is the briefest of history lessons. Market Research began with ‘mass observations’ of the type the US military carried out during the Second World War. They’d send people to watch the troops in the places they were stationed, observe what they were doing and how they were behaving, pull all the results together and draw conclusions about their fighting preparedness and morale. After the war someone had the bright idea to actually ask people rather than just observe and infer, either in small groups or as part of large surveys, using the results to inform commercial rather than military decisions, and the modern Market Research Industry was born.
I’ll skip very quickly through the next development, which was an explosion of techniques and modelling to overcome the problem that people don’t always make good witnesses to their own lives. The problem is you just can’t trust what people tell you, or even what you observe.
Picking through all this research was a job for experts and specialist market research agencies. Companies built up market research departments to translate the market information needs of the business into briefs for the agencies, and then help convert their debriefs into actionable insight. Insight was the goal, even though no-one ever succeeded in pinning down precisely the difference between ‘a finding’ and ‘an insight’. Lots tried: Procter & Gamble and Unilever, both heavy users of market research, developed several ring-binders full of best practice that sought to explain exactly what an insight was. Hell, even I had a go – discerning, actionable, inspiring, er… a bit like a good idea, you’ll know it when you find it, although the proof of the pudding is in the eating. Something like that. Despite this slight confusion about exactly how to turn market research into insight, so attractive was the idea of insight (or the insight idea) that market research departments and experts rebranded themselves as the “Customer/Consumer Insight Team”.
Next? The two D’s. The first is obvious – data, lots of data, big data, so big you need data warehouses and algorithms and all manner of human and artificial intelligence to turn it into information. The second D? “Do” – the advance of technology allows you to get lots of data on what people actually do (as well as where they go, where they get their information, etc.). With data on what they do, you don’t need to rely on what they say they do. But you still need insight for data to become useful, actionable information.
Step aside market research agencies, and welcome in management consultancies. They love data and are really comfortable using it, as long as the data is of the large numeric kind and requires little in the way of imagination to make sense of it, but lots in the way of systems and technology. They are not so comfortable with what we used to call ‘qualitative data’, the soft stuff. Not just focus groups (in fact please, not focus groups) but depth interviews, ethnography (the modern term for mass observation) and cultural insights.
A recent report on trends in market research concluded that qual research is making a big comeback, helped to a large degree by technology. Video platforms allow you to do a lot of qual, and to do it cost-effectively. So you kind of get qual in quantity, if that makes sense.
I am now going to play the first of two “I told you so” cards here, for anyone who cares. For the last 10 years, as ‘big data’ has exploded, I have been banging on about the need to use more ‘qual’ research. Put simply, numbers tell you what, they don’t tell you why. Qual tells you why, but with no indication of materiality. Ergo, you need both – something I learned as best practice at my alma mater, Unilever, many moons ago.
There was another very interesting conclusion in this report: two other trends were identified. Firstly, a trend to do more research/insight/qual/quant in-house. The new technologies often have such a good CI (customer interface) that they allow businesses to go DIY.
Secondly, the big new thing is apparently empathy. Businesses and brands need to build customer empathy. As someone who has always struggled to define the difference between sympathy and empathy, I’m not buying this. I can think of a lot of other ways to nail what is going on here without using the words ‘customer empathy’: understanding, intimacy, affinity, nous. ‘Customer empathy’ sounds a bit wokish to me, rather obtuse and a distraction from the real issue. For a long time, people have talked about the balance of power between ‘consumers’ (dreadful word) and brands shifting, for a whole bunch of reasons but essentially because of technology generally and social media specifically. We can all share our views, review the opinions of others, get the low-down on anything, anywhere, anytime, maybe even become influencers, the arbiters of taste and choice. The ‘consumers’ are people, and they are now empowered. If you already thought it was important to research what they do and think, it’s a whole lot more important now and – if you know what you’re doing – easier.
What does this tell us about the future of ‘Insight Departments’ and Market Research Agencies? They have no future in their current form.
The Bravo SEAL team don’t want just research, they don’t want just insight; they want intel, and they want it as an integrated part of their team, from people right there alongside them, using every valuable source of ‘data’ they can, interpreted with intelligence, speed and commitment to the mission in hand.
Time to play my second and final “I told you so” card. Whereas you only have my word for the first, I have documented proof for the second. Back at the end of 1999 the UK Marketing Society asked me to write a piece on the future of marketing, in which I made the prediction that Market Research would become Market Intelligence. It is fair to say that a) even a broken watch is right twice a day, and b) I did not entirely foresee the full impact of technology, but even back then the idea of intel made much more sense to me than Market Research or ‘Consumer Insight’.
What I did not foresee was that the line between qualitative and quantitative would blur. My model, as I have described above, was based on smaller-scale qualitative research complementing quantitative data by adding the ‘why’ to the ‘what’. I could see how video research platforms can reduce cost, extend reach and therefore allow us to do more qualitative research, with the tech also facilitating faster analysis and sharing of findings. But the constraint is that qualitative research has to be moderated – someone, hopefully someone smart, has to ask the questions and react immediately to the responses: ask a follow-up question, probe here, dig a bit more there.

But what if you could break that link by conducting interviews without a moderator? This is sometimes called ‘asynchronous’ research, because you do not have to synchronise a respondent with a moderator. The simple version – already used extensively in video recruitment interviews – is just to pop up written questions, some of which are ‘open-ended’ (have you got a degree in maths? = closed-ended question; why did you choose Finance? = open-ended question). Great: you can just share the link, the respondent or candidate can record their answers whenever they choose, and you can look at the results whenever you want. Without the need to schedule the interviews you can run as many as you want.

As a rough rule of thumb you need circa 50 people in a chosen sample group to be able to get statistical significance. Typically, if you were running a study you’d want 3-5 different sample groups (e.g. young/old/users/non-users). So a study of 150-250 interviews is going to take several moderators and/or a long time. With asynchronous interviews you could do it in a day or so. But… there’s always a but… there are three problems with this:
- People respond better (more honestly and more fully) to a person asking questions than to an impersonal written question.
- With no moderator you can’t change tack according to the responses – you can’t vary the question set, the order, or the follow-ups.
- Somebody has to look at all the interviews – it might take just a couple of days to run them, but you have to add on the time to analyse them, and remember, video cannot be searched unless it is transcribed into words.
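As an aside, the study-sizing arithmetic above is easy to sanity-check. A trivial sketch (the 50-per-group figure and the 3-5 groups are the rule-of-thumb assumptions stated earlier, not hard statistical requirements):

```python
# Back-of-envelope sizing for the kind of study described above.
RESPONDENTS_PER_GROUP = 50  # rough rule of thumb, per the text

for groups in (3, 4, 5):
    total = groups * RESPONDENTS_PER_GROUP
    print(f"{groups} sample groups -> {total} interviews")
```

Which is exactly where the 150-250 interview range comes from.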
All of this, I’m pleased to say, can be solved (to a large degree) by technology.
I’m not going to go into too much detail, because I’m currently working on bringing to market a platform that will do just this. But in headlines:-
- You can have humans (recorded) ask the questions
- Using AI, you can vary the question set according to the responses
- Using advanced sentiment analysis together with human oversight, you can process large volumes of video
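To make the last headline concrete: once video answers are transcribed into words, even crude automated scoring can decide which clips a human reviewer should watch first. The sketch below is illustrative only – the keyword lists stand in for a trained sentiment model, and a real pipeline would start with a speech-to-text service:

```python
# Toy triage of transcribed video answers: most negative surface first
# for human review. Keyword lists are illustrative stand-ins for a model.

POSITIVE = {"smooth", "creamy", "delicious", "easy", "great"}
NEGATIVE = {"complicated", "confusing", "watery", "problem", "frustrating"}

def sentiment_score(transcript: str) -> int:
    """Crude score: count of positive words minus count of negative words."""
    words = transcript.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def review_queue(transcripts: list[str]) -> list[str]:
    """Order answers so the most negative reach a human reviewer first."""
    return sorted(transcripts, key=sentiment_score)

answers = [
    "This was delicious, smooth and creamy",
    "I found the address form complicated and confusing",
]
print(review_queue(answers)[0])  # the complaint comes out first
```

The point is not the scoring itself but the division of labour: the machine filters the volume, the human applies the judgement.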
This is a game-changer that allows marketing and commercial teams to conduct a small-scale piece of video research and then move seamlessly to a larger-scale sample, providing not just the ‘why’ but the statistical materiality. Or the other way around: start with a broad-scale study to identify material issues, then drop into more in-depth exploration. It is much more attuned to what product development teams and ‘user experience’ researchers need to do* (see footnote below). It can be run programmatically, and it offers a cost-effective way to do ethnographic – in the moment – studies. It can be set up to run continuously, so the voice of the customer is always there, available for anyone in the business to tune into at any time.
This is next-level market intelligence, where qual and quant work symbiotically to bring the customer right to the heart of decision-making, right to the heart of the mission. And, all other things being equal, the business that’s closer to the market wins. The SEAL teams would love it!
This kind of ‘in the moment’ ethnographic research is even more expensive and even more time-consuming if done conventionally. It is easier for digital products or experiences, because the product development team and their User Researchers can set up tests and intercept the person at the moments they choose. Someone spends too long on one part of the site – PING – up pops a message: “Having problems?”. Someone gives you a low score in a customer survey – PING – up pops an open-ended question: “What could we improve?”.
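Under the hood, that “PING” is just event-driven rules: a behavioural trigger matched against a follow-up question. A minimal sketch – the event names, thresholds and question wording are illustrative assumptions, not taken from any particular platform:

```python
# Minimal intercept rules: fire a follow-up question when a trigger matches.
# Event fields, thresholds and question wording are illustrative only.

INTERCEPTS = [
    # Dwelling on one page for over two minutes suggests the user is stuck.
    (lambda e: e.get("event") == "page_dwell" and e.get("seconds", 0) > 120,
     "Having problems?"),
    # A low survey score is the moment to ask an open-ended follow-up.
    (lambda e: e.get("event") == "survey_score" and e.get("score", 10) <= 3,
     "What could we improve?"),
]

def questions_for(event: dict) -> list[str]:
    """Return every follow-up question whose trigger matches this event."""
    return [q for trigger, q in INTERCEPTS if trigger(event)]

print(questions_for({"event": "page_dwell", "seconds": 180}))
# -> ['Having problems?']
```

The interesting design choice is not the rules but the timing: the question arrives at the exact moment the behaviour happens, which is what makes the intercept ethnographic rather than retrospective.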
But talk to the product development teams or marketing folk, and they’ll tell you what they really want is to see and hear the customer. They want video, but they want it in the kind of numbers that allow them to understand statistical materiality. It is much easier to analyse numerical or written customer responses; handling video is much more of a challenge. This is frustrating, because my research* has shown there is a huge difference between what someone will write in answer to a question and what they will tell you face to face.
This is a real example. The intercept question was probing why a potential customer was failing to sign up and download an on-demand shopping app. When they were asked to write down their answer, they said:-
“I found the address form complicated”
When the same person was asked to record their answer on video, with the web page live and visible alongside, this is the transcript of what they said:-
“You asked for my address here, but you didn’t make it clear whether I should just enter my post code or the full address. Then over here you had this map that asked me to move the cursor to my exact location. I did this and then without me spotting it you changed my address and I couldn’t see how to correct that. So I was trying to figure out whether I should just give up or put in a second address in the ‘work’ option but it’s not my work, it’s my home”.
If you were the dev team which would you find more interesting and useful? And would you like to know whether this was just one fat-fingered person or a problem lots of people were encountering?
Here is another real example, this time for a new kind of oat milk. The product was getting great customer reviews, and in the surveys customers were asked to say what they liked and what could be improved. One written response (typical of many) was this:-
“I really like the taste. Maybe the price could be cheaper”
The same person was asked to record on video their reaction to trying the product, and whether there was anything they’d improve. This is the transcription:-
“It’s kind of smooth and creamy, like milk. You get a hint of oat but it was not too strong like the others I’ve tried. They were watery and very oaty. This was delicious, something you could just drink on its own. If I had any suggestion maybe just a little too sweet.”
Quite a bit better, a lot more insightful.
Now imagine that you could see and hear them at every stage of the customer journey. To stick with the last example: recruit someone who is interested in, but has not yet bought, plant-based milk alternatives, and set them the task of going to a store, looking online, and then choosing one to try other than your brand. When they’ve tried it, they then have to try your brand and compare the experience. They capture a minute or two of video at every stage of this process, and you can prompt them with questions at any point.
*By the way, I was not telling the truth when I said this is based on my research. It was based on me: my personal experience with an on-demand grocery app and a new oat milk. The findings sounded very plausible though, didn’t they? But I did warn you, you can’t trust what people say they do or what they say they think.