The Improved New User Experience That Wasn’t

Here’s the humbling thing that you’ve got to keep in mind as a user researcher and user experience designer.

Qualitative data is helpful, your years of experience are helpful, and your understanding of design principles is helpful, but they will always be trumped by quantitative data on how people are actually behaving.

I have seen it many times: qualitative feedback from users, or our strong intuition as excellent product managers and designers, leads us down a path that turns out to be wrong.

Here’s a recent example from Yammer: our signup process. We tested a simplified 2-step version against the original 4-step process. I was eager to see this change launched, because I hated our signup process.

And by every usability heuristic, the 2-step version was superior. It was quicker, more approachable, and got people into the app faster. The design was more polished, and we’d had an actual copywriter craft the words.

It performed extremely well in user testing, and customers gave us highly positive feedback.

But in real-world usage, it was a bust.

When we A/B tested it, we saw only a small increase in successful signup completions, and that gain was overwhelmed by a larger decrease in retention.

People who went through the shorter signup were *less likely* to actually return and use our service.
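If it helps to see why a small gain in completions can lose to a larger drop in retention, here’s a minimal sketch with made-up numbers (purely illustrative, not our actual metrics): the number that matters is the fraction of visitors who both finish signup *and* come back.

```python
# Illustrative only: hypothetical rates, not Yammer's real results.
# The point: a bump in signup completion can be wiped out by a drop in retention,
# because the metric that matters is retained users per visitor.

def retained_per_visitor(completion_rate, retention_rate):
    """Fraction of visitors who both complete signup and return later."""
    return completion_rate * retention_rate

original  = retained_per_visitor(completion_rate=0.40, retention_rate=0.50)  # 0.20
shortened = retained_per_visitor(completion_rate=0.44, retention_rate=0.40)  # 0.176

print(f"original 4-step flow: {original:.1%} of visitors become retained users")
print(f"new 2-step flow:      {shortened:.1%} of visitors become retained users")
# A +10% relative lift in completions loses to a -20% relative drop in retention.
```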

And this is where you need not only data but organizational support for data- and behavior-driven development. Without it, some exec would’ve swooped in and demanded that we release the change anyway.

They probably would’ve suggested that “your data must be flawed” or “maybe these were just unusually dumb users” or “well, this contradicts my (unarticulated) vision, so forget the test results”. (I’ve heard all of these before.) Luckily, we had the discipline to listen to what our customers were telling us through their actions: the experiment was rolled back, to be iterated upon another day.