Beta is not a release type

Beta is not a release type. Thinking of beta as a release type, essentially a quality level, leads teams to take something not ready for prime time, ship it to everyone, and call it “beta” to excuse its deficiencies. If you are making excuses for the deficiencies of your product, you are probably doing it wrong, and it certainly doesn’t feel good to do so.

Beta is a testing period. The purpose of beta testing is to take a product that you suspect might be ready for release and expose it to real users who will break it in ways you didn’t think to test for. That implies a few things:

  • These users are testing the product, so you have things you would like them to do to make sure they are exercising the product fully. Therefore you have a testing plan.
  • You want these users to be active in testing the product, so you have selected them and are actively managing their activity and feedback. Therefore this is not a passive exercise.
  • Since you are actively testing the product, you are able to score and count the issues your testers find. The product can prove whether or not it is ready for general release. Therefore you have release criteria, and if you are doing it right, you have one or more dates on which you will examine these criteria to determine if the product is ready for release.

In short, since beta is testing, you have a plan, participants, success criteria, and a decision point. If not, you’re just trying to hide the fact that you don’t care about quality as much as your users will.

There is no “normal” week or day

At seventeen I joined the staff of a Boy Scout camp, and as an activity area leader I had a role to play in leadership, instruction, and the interactive programming that occurred each week. Each day had something special happening that made the schedule differ from one day to the next, making preparation, instruction, and even rest difficult. With scouts in camp for just a week at a time the strategy to engage them was clearly to make sure there was always a competition, series of skits, cookout, campfire show, ceremony, or recreational extremity (optional, happily: run off the end of the high pier! race to the far rock and back in an overloaded canoe! swim two miles in frigid water!) to draw attention and raise the available hype.

No one day was “normal.” There was no opportunity to settle into a routine, so there was lots of opportunity for scouts, generally between 11 and 16 years of age, to wind up in the wrong place, unsure of where they were supposed to be and how that matched up with what they wanted to be doing, swimming in options and unable to settle into a helpful routine.

It seems the same is true for knowledge workers – above a certain flavor of entry-level position, it’s difficult to settle into a daily or weekly routine that would foster comfort in one’s competence. It only gets worse as you enter into management or become a senior leader; you are constantly adapting, adjusting, re-focusing, trying and abandoning directions.

I’ve learned not to lament the lack of routine. This is the job. But I do try to create it for the people I support who need it.

ADPList asks: What are some tips and advice when going through a technical interview as a UX Designer?

For me the centerpiece of a design exercise is seeking knowledge and adapting your design ideas to that knowledge. You create and adapt and discard ideas readily and easily if they don’t fit the emerging situation. So it is essential to ask questions during the exercise to understand as much as you possibly can about the people, environment, stakeholders, goals, and constraints so that you can demonstrate this process.

You don’t want the entire exercise to be taken up with this “research” activity – you do need to design something. So consider offering a very rough concept and then asking questions that will lead you either deeper into or away from that concept.

Example – imagine Home Depot had a magical way of printing and delivering orange aprons to new employees, and employee apron customization/self-expression is culturally important. What ideas does this spark? What do you want to know to confirm or discard some of these ideas?

My thoughts immediately go to a part of the new employee offer process that leads people to a website where they can customize an apron at their leisure. But these are hourly employees – the store manager is not going to want ANY work activity to be done outside of the store or in off hours. So the kernel of that idea needs to come in-store. Come to find out that they’d also like to offer this capability to existing employees. What is the technical environment in-store? How much time is the store manager willing to have an employee spend on this while on the clock? What are the required elements of apron customization and what are the optional ones? Etc. Understanding these, you can sketch out a process that quickly gets the basics taken care of (perhaps in ONE or ZERO steps) and offers any options easily and quickly.

In a design exercise you’d do this all narratively – talk about the core idea, check it against their answers to your questions, change the core idea, sketch a little, ask more questions, adapt to the answers, etc.

I don’t offer a “UX process”

My current employer is much like others in that it has a product management process, an engineering process, a design process, a customer onboarding process, a customer support process, etc.

What do you notice? That’s right – each discipline group has a separate process. But what is it we are trying to ship?

If you turn on a machine, and hammers come out, it’s a machine designed to make hammers.

Geoffrey Canada, Harlem Children’s Zone (paraphrased from memory)

We’re trying to ship a problem-solving, efficient, coherent, usable, pleasant, and effective piece of software. So our process, the design of our organization, needs to be arranged such that this is what the machine produces. We’re not trying to ship a little bit of engineering, a little bit of design, a little bit of support, and a little bit of product management all shaken up in a bag.

So I don’t offer a UX process. I talk about what the product development team needs to do informationally to get from the customer need to the satisfaction of that need. You’ve heard this before from me:

  • Benefit: What benefit do our customers need? What problem are we solving?
  • Concept: What concepts can we come up with to deliver that benefit? Which concept should we deliver? How should it work?
  • Detail: How should it work specifically? As we work on this, are we maintaining or improving usability, intelligibility, functionality, appeal?
  • In Use: Are users successful in using it? How might we help them be more successful?

You’ll note that “In Use” folds right back into “Benefit” and the cycle continues.

The specifics of each informational phase might be organization-specific, but you’ll need to harness all of the faculties of the product development organization, including (but not limited to) product management, design, engineering, customers, and users to do a good job in each phase. So this should be a single, integrated process.

Thomas W. writes…

In his LinkedIn post on November 29, Thomas W. laid out a handful of arguments a designer or researcher could use to object to demands that UX “prove its value.” It feels good to read the list, but I don’t recommend following his advice. I’ve used arguments like this before and heard the objections. In most cases the arguments are too high-level to meet the business where it is trying to operate, i.e., the points are a bit askew for a company hoping to change its business results in the near term.

He lists these points. For each I mention the typical objection:

  • “72% of businesses claim that improving customer experience (CX) is their #1 priority today.” – irrelevant
  • “80% of CEOs claim their customers’ experiences are superior, while only 8% of their customers think so.” – reflects the Dunning-Kruger effect among those other dunces
  • “64% of people think that customer experience is more important than price in their choice of brand. (Gartner)” – we’ve been successful competing on price, too high-level to be actionable, is this for consumer, is it true in our industry
  • “Companies that excel at their customer experience grow revenues 4-8% above their market (Bain)” – too high-level to be actionable, is this for consumer, is it true in our industry, which improvements matter
  • “$370 MM is the average amount of revenue generated by a modest improvement in Customer Experience over 3 years for a $1Billion company. (TemkinGroup)” – how much is modest, which improvements mattered, we are not in this cohort of companies
  • “Superior CX creates stronger loyalty turning customer into promoters with a LTV of 6-14X that of detractors (Bain)” – we spend a lot on CSMs as it is, are we already reaping this benefit, if so it’s not enough
  • “89% of consumers cite customer experience as a critical loyalty Builder. (eConsultancy)” – correlative; sure, but what’s the effect on revenue
  • “92% of customers who rates their experience as Good were likely to repurchase from that company compared to 9% of customers who rated their experience as very poor. (TemkinGroup)” – we’re already in the good category, is this true for our industry, is this true for businesses like ours, and we’re B2B so it’s not relevant anyhow
  • “Experience Led business have 1.7 higher customer retention, 1.9x return on spend and 1.6x higher customer satisfaction. (Forrester)” – than what, is this for consumer, is this true in our industry, what does it mean to be “experience-led” and is that even a sensible thing for us to consider given where we are and how we work
  • “Brands with strong omni-channel engagement strategies retain an average of 89% of their customers (Aberdeen Group)” – we have good retention without “strong omni-channel engagement strategies” whatever that means
  • “Consumers with an emotional connection to a brand have 306% higher lifetime value and stay with a brand for an average of 5.1 years. (Motista)” – consumer, not for our industry, we’re not in the emotion business, how does this apply to us specifically
  • “Organizations classifying themselves as advanced at CX are 3x more likely to have exceeded their goals (Adobe Analytics)” – self-reported, correlative, and indirect
  • “86% of customers have stopped doing business with a company after a single negative customer experience. (Harris Interactive)” – this is for consumer, we don’t have a lot of direct customer interaction, we have projects to reduce the need for costly call center interactions, etc.

The common thread among these objections is, in essence, “how does this high-level correlation apply to us, in our industry and situation, and guide our thinking now, in the near term?” And that’s sensible. A company dissatisfied with its results wants to change something pronto and wants to choose that thing with some assurance that it will work.

The worst part, though, is the last part, the part that will have a lot of UX and CX people cheering, the part that feels the best:

  • “Now go ask your CTO or PM to show you metrics on the value of their code stack. Or their shitty MVP. Or their roadmap of fake metrics, costs and delivery dates. Ask to see where the actual value in ceremonies and sprints is. Ask them to show you how failing at 95% of the time is profitable to the business. Ask them to show you the value in terrible useless apps like Jira, Confluence and GitHub. Ask them to show you how democratized research and crowd sourced discovery and Qualitative is profitable.”

If I were to uncork this in a leadership meeting it would (rightly) be dismissed as snarky and combative. “Ha ha, you suck too” is not going to win anyone over.

Instead, how do we express our success in terms of user and customer behavior? How do we choose to learn about those behaviors? How do we choose to run experiments and make interventions that change those behaviors in measurable ways? That is where attention to experience should come from. It’s common for organizations to be immature in this area, and we can lead them in the right direction.

Weekly wins for the week of 2023 11 20

  • A short work week is a beautiful thing. Having your entire department out for a day so you can catch up on administrative tasks is also nice, if a tad lonely.
  • The dreaded “calibration meeting” where you explain your people’s reviews to your peer managers (our first at this company, so likely to be a bit fraught) went just fine. No drama.
  • Although my three vaccines set my night (and the following day) on fire, folks were understanding and I recovered quickly.

Make MVPs experiments again

Background

There seems to be broad agreement within engineering leadership that MVP is (or should be) a philosophy of experimentation and hypothesis testing. An MVP should seek to validate a hypothesis. The literature describes a Minimum Viable Product as the cheapest and fastest possible experiment that can answer a business question.

Yet our cross-functional teams often seem to treat MVP as meaning “the first release” or, worse, “the first quick, get-it-out-there release” of a feature, improvement, or change. Some passion projects make it to general availability without cross-functional attention. Still other items wind up in the product with a “beta” flag and are never revisited. And data is rarely collected from any of these to determine whether they are successful. We console ourselves with the idea that these are experimental, but we often don’t behave as if we are actually experimenting. So in these cases we aren’t fulfilling the idea that an MVP is intended to collect validated learning about customers with minimal effort. What’s worse, we infrequently return to these releases to improve them, withdraw them, or build upon them.

Problem

The ultimate effect of all this is that half-baked items we call experiments are scattered around the platform, and we have little understanding of their fitness to task for our customers and users. As a result:

  • Things that should be either deprecated or improved lead to an incoherent and unusable experience for our users, making demos (sales) more difficult and depressing user satisfaction (which can contribute to churn)
  • The product has inconsistent interaction paradigms, styling, labeling, and messaging, which reinforce the perception of poor usability even when things are sufficiently usable

We say we are “shipping to learn” but we are not doing the work needed to actually learn.

Goals

  • Improve the effectiveness of our live software experiments
  • Raise the level of quality visible to users
  • Manage downward the overall level of technical and interactive debt visible to users
  • Improve teamwork in part by firming up our working definitions of important terms such as MVP, alpha, beta, etc.
  • Consider not using the term MVP – it has become so distorted in its use that it lacks useful meaning in practice

Proposed Intervention

Make experiments experiments again (a code sketch follows this list) by:

  • Carefully selecting projects for live experimentation according to
    • Limited scope
    • Clearly articulated hypothesis
  • Pre-determining the success metrics and decision date for any experiment
  • Exposing a limited set of customers to an experimental release, producing a basis for comparison (limited-release customers vs. the rest of the population)
  • At the appointed time, on the basis of the agreed-upon metrics, deciding to do one of
    • Withdraw the experiment
    • Iterate on the experiment
    • Prepare for general availability/transition to regular feature development process
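
To ground this, here is a minimal sketch in Python of what such an experiment plan and its decision step could look like. Everything here is hypothetical – the ExperimentPlan fields, the Decision outcomes, and the saved-filters example are illustrations, not a prescribed implementation – but it captures the shape: hypothesis, scope, metric, threshold, cohort, and decision date are written down before anything ships, and the decision at the appointed time can only be one of the three outcomes listed above.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Decision(Enum):
    WITHDRAW = "withdraw the experiment"
    ITERATE = "iterate on the experiment"
    PREPARE_GA = "prepare for general availability"


@dataclass
class ExperimentPlan:
    """Everything agreed on before the experimental release ships."""
    name: str
    hypothesis: str               # clearly articulated and falsifiable
    scope: str                    # deliberately limited
    success_metric: str           # what we will measure
    success_threshold: float      # minimum lift vs. the rest of the population
    decision_date: date           # when we decide, no matter what
    cohort_customer_ids: list[str] = field(default_factory=list)  # limited-release group


def decide(plan: ExperimentPlan, cohort_value: float, baseline_value: float) -> Decision:
    """At the decision date, compare the limited-release cohort against the rest
    of the population on the agreed-upon metric and return one of three outcomes."""
    lift = (cohort_value - baseline_value) / baseline_value
    if lift >= plan.success_threshold:
        return Decision.PREPARE_GA   # hypothesis validated; harden and roll out
    if lift > 0:
        return Decision.ITERATE      # promising but below the bar; refine and re-run
    return Decision.WITHDRAW         # no evidence of benefit; remove it


# Hypothetical example: a limited release of saved report filters
plan = ExperimentPlan(
    name="saved-filters-v1",
    hypothesis="Users with saved filters return to the report page 20% more often",
    scope="Report page only; no admin or API surface",
    success_metric="weekly_report_page_return_rate",
    success_threshold=0.20,
    decision_date=date(2024, 1, 15),
    cohort_customer_ids=["cust-0017", "cust-0042"],
)

# Cohort returned at a 31% rate vs. 25% for everyone else: a 24% lift, above the bar
print(decide(plan, cohort_value=0.31, baseline_value=0.25))  # Decision.PREPARE_GA
```

The point is less the code than the contract: if you can’t fill in every field of the plan, you don’t have an experiment yet, and if the decision date passes without the decision being made, you aren’t experimenting – you’ve just shipped.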

Cost/Benefit

Costs

  • Slight additional effort to plan experiments and evaluate the results
  • Additional cost to instrument MVPs so they can be evaluated
  • Cost of technically and interactively hardening experiments that succeed (this should happen already, but doesn’t always)
  • Slight additional effort to withdraw experiments that fail

Benefits

  • Reduced technical and interactive debt due to each experiment having an end date and being either withdrawn or hardened
  • Reduced waste from fully baking, hardening, and releasing projects that don’t meet customer needs
  • Improved interactive quality of items that make it to general availability, which may lead indirectly to less churn, greater CSAT, and higher quality visible to users

Next Steps

  • For new ideas
    • Gain broad agreement on the definition of an experiment
    • Offer guidelines for when to run a software experiment live or to choose other means of experimentation
    • Offer guidelines for running an experiment
    • Pilot by
      • Selecting a hypothesis and means of testing it
      • Setting date and criteria for evaluation
      • Instrumenting, launching experiment, and collecting data
      • Evaluating the results at the appointed time and making the withdraw/iterate/prepare decision, creating a new project if needed 
    • Review feedback and results from pilot
    • Share best practices/expectations with department
    • Profit!
  • For old ideas (fits with our objective to deprecate crufty and unused things) – the decision flow is sketched in code after this list
    • Offer items to address – what features seem to be experiments that were not evaluated, that are suspect?
    • How do we know if this is doing what it should?
      • We know what result it should produce – measure that
        • It’s doing well – are we happy with the quality?
          • Yes – yahtzee
          • No – remedial “prepare for general availability”
        • It’s doing OK – iterate on the experiment
        • It’s not doing well
          • Is that strategically relevant?
            • Yes – iterate on the experiment
            • No – candidate for withdrawal
      • We’re not sure what result it should produce
        • Is it being used?
          • Yes – how and why
          • No – candidate for withdrawal
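
Here is the same audit flow sketched as code so the branches are explicit. The function name and its inputs are hypothetical; in practice, “doing well” vs. “doing OK” comes from measuring the result the feature should produce, as described above, rather than from a hand-set value.

```python
def audit_feature(expected_result_known: bool,
                  performance: str = "unknown",   # "well" | "ok" | "poor", from measurement
                  quality_acceptable: bool = False,
                  strategically_relevant: bool = False,
                  in_use: bool = False) -> str:
    """Walk the audit decision tree for a shipped 'experiment' that was never
    evaluated, and return the action to take."""
    if expected_result_known:
        if performance == "well":
            # Doing what it should: keep it if the quality is there, otherwise harden it
            return "keep as is" if quality_acceptable else "remedial prepare-for-GA"
        if performance == "ok":
            return "iterate on the experiment"
        # Not doing well: only keep investing if it matters strategically
        return ("iterate on the experiment" if strategically_relevant
                else "candidate for withdrawal")
    # We don't know what result it should produce
    return ("investigate how and why it is used" if in_use
            else "candidate for withdrawal")


# A feature whose intended result we know and which measures well,
# but whose quality we aren't happy with:
print(audit_feature(expected_result_known=True,
                    performance="well",
                    quality_acceptable=False))  # remedial prepare-for-GA
```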

Things we need to teach/encourage/expect/insist on

  • Working from hypotheses and measures
  • Feeding the innovation pipeline with clarity on the customer problems we are interested in solving
  • Considering examples at various sizes/complexities to break down into experiments
  • Creating a company-wide framework to help us consider ideas for experimentation, starting from the customer problem/JTBD/benefit
  • Raising the level of direct product-use knowledge/experience among engineers and designers – it’s better to have operated the thing you are working on

Weekly wins for the week of 2023 11 13

In spite of the organization’s urges to snap back to old ways (ways that got us to where we are, so are not sufficient on their own to change our results):

  • My people are not overreacting to the politics…
  • …assisted by their work in making us more customer-centric being shouted out in public forums by the CEO…
  • …who is also publicly mentioning themes that have been part of my mission at the company since I was hired.

This all is setting me up perfectly to talk about quality and how the product needs to change (i.e. what we need to organize ourselves to produce) at the offsite after Thanksgiving.

Also,

  • I’m in the interview panel for the new product leader for a key product.

Interesting times!

Weekly wins for the week of 2023 11 06

I forgot that this week ended with Veterans Day. It’s awesome to have an unexpected day off, which is really a day to get a handful of other things done. And I did – I closed the to-do list for the day and most of tomorrow, had a lovely lunch with family, and the weekend is just beginning.

Meanwhile the confusion and “should”ing at work continues, but we have a leadership offsite coming and this situation has made my agenda clear for that meeting: “what do we mean by quality” and “what am I here to do.” Smaller topics can be set aside for now.

Weekly wins for the week of 2023 10 30

Things are a bit of a mess at work – a couple of key people have resigned, the 4th quarter roadmap is in turmoil, revenue is going up but there’s still plenty of ground to make up, and a recent launch and post-mortem has raised a lot of feelings and inspired a lot of shoulding among the leadership. (Folks should know not to should on themselves or others.) Even so,

  • That fraught project and launch, the one that has caused a lot of teeth to be gnashed, is getting good feedback and excitement from actual customers, and so far few bugs have been reported.
  • The roadmap, departures, and should situation present an opportunity (that I am happy to seize) to push us into more user-centricity and agreement on quality, if only we can dispel some of the persistent misconceptions about the project triggering some of this swirl. There’s a leadership offsite coming up that I’m all too happy to throw a couple of thought-bombs into.
  • My team is being surprisingly even-keeled about the whole thing. I’m so grateful!