Hello, I’m Dr Dr Dr John NA Brown, consulting UX researcher, and this is a UX mystery.
The Case of the NPS Survey Mystery
A few years ago, a “Major Social Network in Silicon Valley” was trying to cope with some really bad press.
They had previously tried a lot of ways to get user feedback, and had settled on an easy one that they hoped would give them good numbers.
They assigned a team to create a Net Promoter Score survey. That kind of survey starts with a simple positive statement about the company. Customers are then asked to rate it, on a scale of one to five or one to ten, based on how strongly they believe it. The idea is that, by getting the right people to tell you how they feel about your company, you can find out which of your users are out there promoting it all on their own.
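As an aside, for anyone who wants to see the arithmetic: the conventional Net Promoter Score is built from a zero-to-ten recommendation question, with nines and tens counted as promoters and anything from zero through six counted as detractors. The short Python sketch below shows that textbook calculation; it assumes the standard scale and thresholds, not the one-to-five version this particular company used.

    # Minimal sketch of the conventional Net Promoter Score calculation.
    # Assumes the standard zero-to-ten "how likely are you to recommend us?"
    # question, not this company's one-to-five survey.
    def net_promoter_score(ratings):
        """Return the NPS as a number between -100 and 100."""
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
        detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0 through 6
        return 100 * (promoters - detractors) / len(ratings)

    # Example: six promoters, two passives, two detractors gives an NPS of 40.
    print(net_promoter_score([10, 9, 9, 10, 9, 9, 7, 8, 3, 5]))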
This social network had been serving these pop-up surveys to random people using the platform for years. Wherever you were in the world, whatever you were doing on the platform, there was a chance that a single-question survey would appear on your screen asking you to click on the number from one to five that best describes how you feel about the social network. You might have seen one yourself. Now, after a few years of collecting data and faced with a strong need to measure and improve popular opinion, the social network did two things.
First, they set a team of data scientists to look for recurring patterns in the data they had collected.
Second, they sent out international teams of researchers to interview the folks who had rated them most highly two out of three times.
The notion was that the numbers would show them the underlying pattern behind how people change from being solid supporters of the product to being intense supporters: from a four out of five to a five out of five.
That’s what they wanted from the numbers. And they expected that the people being interviewed would tell them exactly what it was, which specific experience it was, that had made them take that step up and become a ‘promoter.’
Well, the number-crunchers couldn’t uncover any meaningful pattern in the data.
What the interviewers found was that only a handful of people even remembered ever taking the survey, and virtually none of them agreed with the scores the company had on record.
They hired me to find out why.
As is often the case, the main answer was a simple one.
In just a moment I’ll tell you what it was. If you’d like, you can pause this playback now, and try to figure out what went wrong for them. It was more than one thing, and I provide a better breakdown of them all in my book, but why don’t you take a crack at figuring out what the single biggest problem was? I promise, all the clues you need are in the story you’ve just heard.
When you’re ready, just start this up again.
--
Welcome back. Did you catch that the problem was in the way the initial survey was delivered?
It popped up on the screen of people who were busy doing other things.
What do you do when an unanticipated pop-up interrupts you? You close it, right? You do that almost reflexively – with as little thought as possible – because you’re thinking about other things. The survey didn’t have an obvious “close” or “not now” or “opt out” option.
So most folks closed it by clicking on a number. Any number. The survey they’d spent years developing and conducting and interpreting and investigating had been collecting random numbers instead of opinions all along.
Thanks for listening. I hope you enjoyed this UX mystery, and that you’ll come back for more.