In his LinkedIn post on November 29, Thomas W. laid out a handful of arguments a designer or researcher could use to object to demands that UX “prove its value.” It feels good to read the list, but I don’t recommend following his advice. I’ve used arguments like this before and heard the objections. In most cases the arguments are too high-level to meet the business where it is trying to operate, i.e., the points are a bit askew for a company hoping to change its business results in the near term.
He lists these points; for each, I mention the typical objection:
- “72% of businesses claim that improving customer experience (CX) is their #1 priority today.” – irrelevant
- “80% of CEOs claim their customers’ experiences are superior, while only 8% of their customers think so.” – reflects the Dunning-Kruger effect among those other dunces
- “64% of people think that customer experience is more important than price in their choice of brand. (Gartner)” – we’ve been successful competing on price, too high-level to be actionable, is this for consumer, is it true in our industry
- “Companies that excel at their customer experience grow revenues 4-8% above their market (Bain)” – too high-level to be actionable, is this for consumer, is it true in our industry, which improvements matter
- “$370 MM is the average amount of revenue generated by a modest improvement in Customer Experience over 3 years for a $1Billion company. (TemkinGroup)” – how much is modest, which improvements mattered, we are not in this cohort of companies
- “Superior CX creates stronger loyalty turning customer into promoters with a LTV of 6-14X that of detractors (Bain)” – we spend a lot on CSMs as it is, are we already reaping this benefit, if so it’s not enough
- “89% of consumers cite customer experience as a critical loyalty Builder. (eConsultancy)” – correlative, sure but what’s the effect on revenue
- “92% of customers who rated their experience as Good were likely to repurchase from that company compared to 9% of customers who rated their experience as very poor. (TemkinGroup)” – we’re already in the good category, is this true for our industry, is this true for businesses like ours, and we’re B2B so it’s not relevant anyhow
- “Experience-led businesses have 1.7x higher customer retention, 1.9x return on spend and 1.6x higher customer satisfaction. (Forrester)” – than what, is this for consumer, is this true in our industry, what does it mean to be “experience-led” and is that even a sensible thing for us to consider given where we are and how we work
- “Brands with strong omni-channel engagement strategies retain an average of 89% of their customers (Aberdeen Group)” – we have good retention without “strong omni-channel engagement strategies” whatever that means
- “Consumers with an emotional connection to a brand have 306% higher lifetime value and stay with a brand for an average of 5.1 years. (Motista)” – consumer, not for our industry, we’re not in the emotion business, how does this apply to us specifically
- “Organizations classifying themselves as advanced at CX are 3x more likely to have exceeded their goals (Adobe Analytics)” – self-reported, correlative, and indirect
- “86% of customers have stopped doing business with a company after a single negative customer experience. (Harris Interactive)” – this is for consumer, we don’t have a lot of direct customer interaction, we have projects to reduce the need for costly call center interactions, etc.
The common thread among these objections is, in essence, “how does this high-level correlation apply to us, in our industry and situation, and guide our thinking now, in the near term?” And that’s sensible. A company dissatisfied with its results wants to change something pronto, and it wants to choose that thing with some assurance that it will work.
The worst part, though, is the last part, the part that will have a lot of UX and CX people cheering, the part that feels the best:
- “Now go ask your CTO or PM to show you metrics on the value of their code stack. Or their shitty MVP. Or their roadmap of fake metrics, costs and delivery dates. Ask to see where the actual value in ceremonies and sprints is. Ask them to show you how failing at 95% of the time is profitable to the business. Ask them to show you the value in terrible useless apps like Jira, Confluence and GitHub. Ask them to show you how democratized research and crowd sourced discovery and Qualitative is profitable.”
If I were to uncork this in a leadership meeting it would (rightly) be dismissed as snarky and combative. “Ha ha, you suck too” is not going to win anyone over.
There seems to be broad agreement within engineering leadership that MVP is (or should be) a philosophy of experimentation and hypothesis testing: an MVP should seek to validate a hypothesis. The literature describes a Minimum Viable Product as the cheapest, fastest possible experiment that can answer a business question.
Yet our cross-functional teams often seem to treat MVP as meaning “the first release” or, worse, “the first quick get-it-out-there release” of a feature, improvement, or change. Some passion projects make it to general availability without cross-functional attention. Still other items wind up in the product with a “beta” flag and are never revisited. And rarely is data collected from these to determine whether they are successful. We console ourselves with the idea that these releases are experimental, but we often don’t behave as if we are actually experimenting, so we aren’t fulfilling the idea that an MVP is intended to collect validated learning about customers with minimal effort. What’s worse, we infrequently return to these releases to improve them, withdraw them, or build upon them.
The ultimate effect of the above is that there are items we call experiments, half-baked, scattered around the platform, and we have little understanding of their fitness to task for our customers and users. As a result:
- Things that should be either deprecated or improved, but are neither, lead to an incoherent and unusable experience for our users, making demos (and therefore sales) more difficult and depressing user satisfaction (which can contribute to churn)
- The product has inconsistent interaction paradigms, styling, labeling, and messaging, which reinforce the perception of poor usability even where things are sufficiently usable
We say we are “shipping to learn,” but we are not doing the work needed to actually learn. To fix that, we should:
- Improve the effectiveness of our live software experiments
- Raise the level of quality visible to users
- Reduce the overall level of technical and interactive debt visible to users
- Improve teamwork in part by firming up our working definitions of important terms such as MVP, alpha, beta, etc.
- Consider not using the term MVP – it has become so distorted in its use that it lacks useful meaning in practice
Make experiments experiments again by:
- Carefully selecting projects for live experimentation according to:
  - Limited scope
  - A clearly articulated hypothesis
- Pre-determining the success metrics and decision date for each experiment
- Exposing a limited set of customers to an experimental release, producing a basis for comparison (limited-release customers vs. the rest of the population)
- At the appointed time, on the basis of the agreed-upon metrics, deciding to do one of:
  - Withdraw the experiment
  - Iterate on the experiment
  - Prepare for general availability and transition to the regular feature development process
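The discipline above can be sketched in a few lines of code. This is an illustrative sketch only – the class, field names, and thresholds are hypothetical, not an existing framework – but it makes the key point concrete: the hypothesis, the success metric, the target, and the decision date are all fixed before launch, and no verdict is rendered early.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Experiment:
    """A live experiment defined *before* launch (names are illustrative)."""
    hypothesis: str   # clearly articulated and limited in scope
    metric: str       # the single success metric agreed up front
    target: float     # pre-determined threshold for "success"
    decide_on: date   # the appointed decision date

    def decide(self, observed: float, today: date) -> str:
        """Return one of: keep_running, prepare_for_ga, iterate, withdraw."""
        if today < self.decide_on:
            return "keep_running"           # no early verdicts
        if observed >= self.target:
            return "prepare_for_ga"         # harden and graduate
        if observed >= 0.5 * self.target:   # hypothetical "close enough" band
            return "iterate"
        return "withdraw"


exp = Experiment(
    hypothesis="Inline search filters raise task completion",
    metric="task_completion_rate",
    target=0.60,
    decide_on=date(2023, 1, 15),
)
print(exp.decide(observed=0.62, today=date(2023, 1, 15)))  # prepare_for_ga
```

The useful property is that the decision rule is written down and agreed to before anyone is emotionally invested in the release, so “withdraw” is a legitimate outcome rather than a failure.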
Costs:
- Slight additional effort to plan experiments and evaluate the results
- Additional cost to instrument MVPs so they can be evaluated
- Cost of technically and interactively hardening experiments that succeed (this should happen, but doesn’t always at the moment)
- Slight additional effort to withdraw experiments that fail
Benefits:
- Reduced technical and interactive debt, because each experiment has an end date and is either withdrawn or hardened
- Reduced waste from fully baking or hardening projects that don’t meet customer needs
- Improved interactive quality of items that make it to general availability, which may indirectly reduce churn, raise CSAT, and improve the quality visible to users
- For new ideas:
  - Gain broad agreement on the definition of an experiment
  - Offer guidelines for when to run a software experiment live versus choosing other means of experimentation
  - Offer guidelines for running an experiment
  - Pilot by:
    - Selecting a hypothesis and a means of testing it
    - Setting a date and criteria for evaluation
    - Instrumenting and launching the experiment, and collecting data
    - Evaluating the results at the appointed time and making the withdraw/iterate/prepare decision, creating a new project if needed
  - Review feedback and results from the pilot
  - Share best practices and expectations with the department
- For old ideas (this fits with our objective to deprecate crufty and unused things):
  - Offer items to address: which features appear to be experiments that were never evaluated, or that are otherwise suspect?
  - For each, ask: how do we know if it is doing what it should?
    - We know what result it should produce – measure that
      - It’s doing well – are we happy with the quality?
        - Yes – yahtzee
        - No – remedial “prepare for general availability”
      - It’s doing OK – iterate on the experiment
      - It’s not doing well – is that strategically relevant?
        - Yes – iterate on the experiment
        - No – candidate for withdrawal
    - We’re not sure what result it should produce – is it being used?
      - Yes – learn how and why
      - No – candidate for withdrawal
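The triage above is just a small decision tree, and writing it down as one makes it easy to apply consistently across a backlog of orphaned “experiments.” A minimal sketch, with hypothetical argument and result names (the inputs are the answers to the questions in the list; the output is the resulting action):

```python
def triage(knows_expected_result, performance=None, quality_ok=None,
           strategically_relevant=None, in_use=None):
    """Triage an old, never-evaluated experiment.

    performance: "well", "ok", or "poor" against the expected result,
    when we know what result the item should produce.
    """
    if knows_expected_result:
        if performance == "well":
            # Happy with quality -> done; otherwise harden it properly.
            return "done" if quality_ok else "remedial prepare-for-GA"
        if performance == "ok":
            return "iterate"
        # Not doing well: keep only what matters strategically.
        return "iterate" if strategically_relevant else "withdraw"
    # We don't know what result it should produce.
    return "learn how and why it is used" if in_use else "withdraw"


print(triage(knows_expected_result=True, performance="poor",
             strategically_relevant=False))  # withdraw
```

Note that every leaf is an action, never “leave it alone” – which is exactly the property the half-baked items scattered around the platform are missing.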
Things we need to teach/encourage/expect/insist on:
- Working from hypotheses and measures
- Feed the innovation pipeline with clarity on customer problems we are interested in solving
- Consider examples at various sizes/complexities to break down into experiments
- Need a company-wide framework to help us consider ideas for experimentation, starting from the customer problem/JTBD/benefit
- Raise the level of direct product-use knowledge and experience among engineers and designers – you had better have operated the thing you are working on
In spite of the organization’s urges to snap back to old ways (ways that got us to where we are, and so are not sufficient on their own to change our results):
- My people are not overreacting to the politics…
- …assisted by their work in making us more customer-centric being shouted-out in public forums by the CEO…
- …who is also publicly mentioning themes that have been part of my mission at the company since I was hired.
This all is setting me up perfectly to talk about quality and how the product needs to change (i.e. what we need to organize ourselves to produce) at the offsite after Thanksgiving.
- I’m on the interview panel for the new product leader for a key product.
I forgot that this week ended with Veterans Day. It’s awesome to have an unexpected day off, which is really a day to get a handful of other things done. And I did – I closed the to-do list for the day and most of tomorrow, had a lovely lunch with family, and the weekend is just beginning.
Meanwhile the confusion and “should”ing at work continues, but we have a leadership offsite coming and this situation has made my agenda clear for that meeting: “what do we mean by quality” and “what am I here to do.” Smaller topics can be set aside for now.
Things are a bit of a mess at work – a couple of key people have resigned, the 4th-quarter roadmap is in turmoil, revenue is going up but there’s still plenty of ground to make up, and a recent launch and post-mortem have raised a lot of feelings and inspired a lot of “should”ing among the leadership. (Folks should know not to should on themselves or others.) Even so,
- That fraught project and launch, the one that has caused a lot of teeth to be gnashed, is getting good feedback and excitement from actual customers, and so far few bugs have been reported.
- The roadmap, departures, and “should” situation present an opportunity (one I am happy to seize) to push us toward more user-centricity and agreement on quality, if only we can dispel some of the persistent misconceptions about the project triggering some of this swirl. There’s a leadership offsite coming up that I’m all too happy to throw a couple of thought-bombs into.
- My team is being surprisingly even-keeled about the whole thing. I’m so grateful!
The first time I rode a motorcycle I was on the back, clinging to my college roommate. He happened to have a second helmet, it fit well enough, and I was eager to get to the other side of campus.
He gave me two instructions:
- “Keep your feet on the pegs.”
- “I am not a steering wheel.”
Can you guess which instruction he complained about at the end of the ride?
Here’s a hint – it’s easier for a not-already-knowledgeable person to follow a positively-worded instruction (do this) than a negatively-worded instruction (don’t do that). It’s harder still to follow an instruction that relies on a metaphor, because a metaphor is less clear, less obvious, less instructive. The combination of negatively worded and unclear is worst of all.
I should have asked clarifying questions, like “what would it feel like if I was treating you as a steering wheel?” but I didn’t think to at the time.
At work we just did a retro on a somewhat fraught and over-large project, and many of the raw conclusions are negatively worded. Some are metaphorical. The people involved are knowledgeable but come from different disciplines, so the level of shared understanding is probably lower than people guess. A lot of “don’t do X, don’t do Y” will probably not get the results we seek. I’ll be helping to bend these into positively-worded instructions today. I suspect our success will depend on it.
I’m trying to coach some designers along the path of feeling comfortable adjusting and evolving approaches that were learned in school (vs. believing that there is a single “right” way and that design quality is aligned to how closely they execute against that “textbook” approach). I’d like to be able to share something with them that demonstrates that the higher one’s design maturity, the more comfortable and confident one is with adjusting approaches and trying new things based on context and experience… and that this is a good thing.
I don’t have a framework or model to point to, but the thing that strikes me as interesting about this question is the phrase
“design quality is aligned to how closely they execute against that ‘textbook’ approach”
It might be worth pointing out that this is an inward-looking, appeal-to-authority view of quality, measured in the wrong place. Design quality is actually measured by the attainment of user ease and satisfaction coupled with business results, and these do not depend on method adherence. The methods exist to help you get the information you need to achieve these results but they do not deliver these results themselves.
A decidedly ☯️ week, with each ⬇️ paired with an ⬆️:
- During a tough retro on a key project the team
- ⬇️ expressed a lot of frustration – with new process tweaks, with an unfamiliar level of design involvement, with conflicting wishes within the team, and with the overall shape of the project (though this was known from the beginning) – but
- ⬆️ was careful not to throw blame to any function or person, and acknowledged the negative effects persistent stakeholder misconceptions had on how the project progressed. THIS we can work with!
- Regarding some of those stakeholders
- ⬇️ third- and fifth-hand feedback, amplified by loose talk and seniority, was brought to me as potentially damning, but
- ⬆️ in general these stakeholders were open to feedback and clarification themselves and learned from our interaction. THIS we can work with!