Jon Plummer

Today I Learned

Toward coordinated experience

Background

I realized the other day that we (UX and others) make workflow diagrams but don’t distinguish in them between “the system advances you to the next step” or “offers you a link to a relevant thing” versus “the user has to know to go somewhere else to do a necessary thing.” These are not at all equivalent; the arrows look the same, but those that denote “user has to know to go somewhere else to do a thing” should perhaps be painfully ugly, or broken, or absent entirely, to point out that a barrier exists.

We should be anticipating and lowering those barriers, and we often are not. As we make new things, we’re tempted to squeeze scope down tight enough that we can launch quickly, but end up erecting these barriers in the process.

UX folks are interested in producing what I'll call coordinated experiences, but they have learned (to my chagrin) that it's hard to convince other folks in product development to participate, so they give up after a while. It is natural for engineering to look at the platform from the data model outward and assume that the appropriate interfaces match the data model in structure. And there should otherwise be sufficient demand for coordinated experiences rather than the uncoordinated ones we usually make. So my hypothesis is that some incentive, pressure, or habit is leading us in product development to scope projects such that we don't spend the effort required to produce coordinated experiences.

Here's the rub: coordinated experience comes at an incremental cost. Or it seems to; that increment is economically smart to spend, since it trades a bit more team effort for the avoidance of a large amount of repeated customer, CSM, onboarding, and support effort.

It’s normal to want our initial attempts at a feature or improvement to be small and manageable, so that we can deliver quickly, learn from the result, and reduce risk. It’s natural to want to get a small bit of software out into the world to see if customers find it valuable, so that it will rightly draw further investment or not. But if the initial experience, however slim, includes barriers that hamper its use, those barriers confound the adoption or task-success metrics we take as evidence that the feature is valuable. If we descope to the point that barriers are present, we are at cross-purposes with ourselves.

So the challenge is to make small scopes that nonetheless deliver an end-to-end experience.

Example: Uber’s MVP was a web form that offered to book a car to arrive in the very near future. There was no map, no phone app, no scheduling, no ratings, no service options, none of the larger experience that we associate with ride-sharing services today. But the core experience worked without assistance, off-system work visible to the user, or delay: you could request a car to appear at a particular location and it would. There was no barrier between the desire for a ride and a ride other than filling out the form, which you could do on your own. It was a slender experience but a complete, coordinated one.

Coordinated experience defined

A coordinated experience is one in which

  • it's clear to the user where to go to achieve their goal
  • once there, the correct controls are intelligible and fall to hand as needed
  • users are helped by sensible defaults and/or canned possibilities they can try, and that suggest how the system is best used, rather than being faced with just a blank form
  • capabilities needed to accomplish the goal are available without having to go find them or know in advance where to get them
  • these capabilities behave in predictable ways learned from elsewhere in our platform and from other software
  • the system supports user confidence that they have achieved the correct result
    • the result of their work is clear
    • the right performance of the system is visible
    • it's clear how to make changes

In short, the arrangement of capabilities and interfaces is governed by the use cases being supported, not necessarily the modularity of the back end.

(Yes, this is a basic UX concept, but it is forgotten so often that it needs a name outside of UX jargon.)

Coordinated experience tactics

We won’t need all of these tactics all the time for everything; they’re a set of possibilities. Which are useful depends on the use cases we intend to support, and some will be overkill.

  • Sensible defaults
  • Galleries of canned options, useful at least as starting points
  • Cross-linking to dependent or involved system objects that are managed elsewhere
  • Reusing a capability (or presenting an otherwise stand-alone capability) as a module within a workflow where it is also needed
  • Selectors that offer a choice of the existing examples of the required object type and a convenient way to create a new one
  • Inspectors that explain a referenced system object without leaving the current context
  • Traceability (explain how a result was arrived at)
  • Simple versioning (accrue versions automatically as changes are made, allow an old one to be inspected and made current)
  • Hierarchies revealed in list views
  • (of course there are more; these are the ones that leap to mind at the moment)
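
The “simple versioning” tactic above is concrete enough to sketch. Here is a minimal, hypothetical illustration in Python (the class and method names are mine, not from any real system): versions accrue automatically as changes are made, any old version can be inspected, and restoring an old version appends it as a new version rather than rewriting history.

```python
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Versioned:
    """Wraps a value and records every change as a new version."""
    _versions: List[Any] = field(default_factory=list)

    @property
    def current(self) -> Any:
        """The most recently recorded version."""
        return self._versions[-1]

    def update(self, value: Any) -> int:
        """Record a change automatically; returns the new version number."""
        self._versions.append(value)
        return len(self._versions) - 1

    def inspect(self, version: int) -> Any:
        """Look at any historical version without changing state."""
        return self._versions[version]

    def make_current(self, version: int) -> int:
        """Make an old version current by recording it as a new version,
        so the history itself is never rewritten."""
        return self.update(self._versions[version])
```

The design choice worth noting is that `make_current` never deletes anything: making version 0 current again after later edits simply appends it, so the user can always see, and undo, what happened.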

As we build up a library of coordinated experience patterns (object inspectors, galleries, simple versioning, and so on), it should become easier over time to create coordinated experiences. But the core method is making sure that small scopes result in complete, coordinated workflows rather than fragmented ones.

Coordinated experience in the age of agents

It’s common to hear that agentic AI will make traditional interfaces obsolete, that if you can just ask the computer to do a thing you won’t also need a manual workflow to do that thing. This suggests an exciting future when software can focus on doing what users want (or at least ask for) rather than providing tools for users to do what users want.

We do aspire to a near future where systems handle more tasks, and chain tasks together to produce better results more quickly than a person would, simplifying interaction by speeding the user along toward the results they seek.

However, users will still need to

  • Verify performance of the system – monitor the actions of the system and understand its effect on their business
  • Verify agent recommendations – see that the agent’s recommendations or plans are sensible and well-founded in data, fostering confidence in the system and agent
  • Verify performance and results of agent tasks – confirm that the agent has done things correctly, and understand the effect of these actions
  • Understand the capabilities of the system – learn about what the system can do and how it is best used
  • Make adjustments – correct errors in their own work and that of agents, try tweaks, follow hunches

This likely means that there’s plenty of interface! The emphasis shifts from the user directly manipulating the system toward the user being offered analyses and outcomes, but given the needs above, users will continue to require systems that

  • are self-explanatory
  • are transparent in their operations
  • allow direct inquiry into objects, and
  • enable direct manipulation.

The advent of agentic workflows, by reducing direct user operation of the system, will intensify the need for interfaces and workflows that are simple, coordinated, and re-learnable rather than interfaces that depend on training, consultation, or practice for user success.

What went right since October 2024?

So many things!

  • Work
    • I promoted someone
    • I failed to promote someone, but learned a lot and it was the right decision
    • We've had a couple of leadership offsites and they have been both pleasant and valuable
    • I've guided my team from AI-skeptic or AI-agnostic to AI-curious, and written a quick position paper to explain our approach to both using AI tools and designing for AI-powered experiences
    • UX people are strong participants in product trios, at long last
    • We're hiring!
  • Home
    • We've caught up on a handful of long-overdue home projects, just in time for the summer heat. Curtains, blinds, gym flooring, patch and paint, more curtains… there's more to do, always, but good progress after a bit of a stall
    • The ADU is now occupied
    • The girl is enjoying her six-week ballet intensive in a far-off state
    • I got my GMRS license. Say hello to WSIX524
    • Mr. Fixit has branched out into a little light metal work including repairing a watering can and making a house key easier for a blind person to use

Our position on AI tools

(This is a work in progress, but a pretty good start)

Designing AI-powered product experiences

User needs and customer problem first

Solving a valuable customer problem is paramount. Before selecting any technological solution, including AI, we prioritize understanding user needs and clearly defining the problem we aim to solve. Any AI application must serve a genuine, identified user need, rather than being a solution in search of a problem.

Transparency, explainability, and trust

We recognize that users may be curious, or even apprehensive, about how AI-powered features operate. While full algorithmic explainability may not always be feasible or necessary, we commit to being transparent about the inputs and context that drive AI outputs. We hope to empower users with a sense of control, offering opportunities to validate choices, preview actions, and interact with AI as an assistant before letting it run as an autonomous agent. Maintaining an audit trail of AI actions also supports accountability and trust.

Handling errors and edge cases

We acknowledge that AI-powered features will sometimes produce wrong or unexpected outputs. Our design approach for these scenarios focuses on graceful error handling and keeping the human in the loop. This means

  • Anticipating and mitigating potential issues through careful AI setup and training
  • Designing interfaces that offer previews, recommendations, and clear actions rather than proceeding blindly
  • Ensuring mechanisms for users to easily correct, override, or provide feedback on AI outputs
  • Maintaining a design philosophy where the AI recommends and assists, allowing users to retain ultimate control until they explicitly release the system to act

Ethical design and bias mitigation

We strive to reduce bias in AI-powered features by

  • Grounding our understanding in real customer knowledge rather than internal assumptions
  • Working with and analyzing customer data responsibly, without alteration, and ensuring its privacy and security
  • Establishing processes for monitoring the output of our features for unintended biases that may emerge

Iteration and learning through metrics

  • Clear project goals define success.
  • Success metrics (e.g., accuracy, recall, task completion rates) and experiential metrics (e.g., user satisfaction, perceived control, trust) are established upfront.
  • Continuous monitoring and analysis of these metrics drive iterative improvement, allowing us to refine the AI's performance and the user experience over time.

Using AI tools in day-to-day UX work

Our UX team embraces the strategic and responsible integration of AI tools into our daily workflows to enhance our capabilities and deliver more valuable experiences.

Strategic tool adoption and augmentation

We are actively experimenting with AI tools like Figma Make, ChatGPT, and Gemini to understand their potential. Our focus is not merely on speed, but on how these tools can enhance our ability to deliver valuable and usable experiences. We view AI primarily as an augmentation to our existing skills, particularly for

  • Inspiration and ideation: Generating diverse concepts, content variations, or design alternatives.
  • Early-stage prototyping: Quickly sketching out ideas.
  • Analyzing research data: Identifying patterns or themes in qualitative data (with careful oversight).

Maintaining UX quality through human oversight

The ultimate responsibility for UX quality remains with the human designer and the members of the team with which they work. When using AI tools, each designer is accountable for the quality and accuracy of the output on their projects, regardless of AI assistance. We commit to human oversight and critical evaluation of any AI-generated content or insights. AI is a tool to assist, not replace, the designer's judgment, expertise, and empathy. All AI-assisted work undergoes the same review and validation processes as any UX work.

Continuous learning and cross-pollination

We encourage designers to

  • Actively experiment with new AI tools and techniques
  • Share their learnings and best practices with the wider UX team and their project teams
  • Replicate and build upon the successful experiments of others
  • Embrace a fluidity in job boundaries, recognizing that AI tools may enable designers to contribute to areas traditionally outside core UX, fostering greater cross-functional collaboration

Ethical use of AI tools and intellectual property

Our ethical considerations for designing AI-powered products extend to our use of AI tools. We commit to

  • Transparency: Clearly acknowledging when AI tools have been used in our work, internally and externally where relevant. We will never misrepresent AI-assisted work as purely human-created
  • Data privacy and IP: Exercising caution regarding proprietary or sensitive customer data when interacting with external AI models. We will ensure we adhere to company policies and legal guidelines regarding data input into AI tools and the intellectual property of generated outputs
  • Maintaining control: Never ceding our understanding or control of customer knowledge, the design process, or design work to AI tools. The human designer remains the expert and ultimate decision-maker, responsible for the integrity of their work and the insights and design artifacts they share

What went right in October?

So many things, in retrospect:

  • Home progress!
    • Landscaping is done, trees are in
    • Storm drains are cleaned
    • The network is regularly providing 1200Mbps, after modernizing a bit (paid for by selling the slightly older equipment)
  • Work progress!
    • The concept sprint I led was a resounding success – the execs wish they could sell our plan now, but they recognize they need to wait until they fund it and we build it – and there's talk of more
    • The pendulum is swinging back toward being more customer-centric
  • Life progress!
    • I pulled 410 last week by doing the plate math wrong, and it was no problem at all
    • At my last appointment my PT said "good job"; I bet PTs don't say that often

Apropos of…nothing (bitcoin)

A strategic reserve of a commodity implies that it’s in the U.S. strategic interest to invest so as to be protected from price shocks or supply restrictions, given the commodity’s importance to the economy or military readiness.

The best way to protect from bitcoin price shock is to not buy any. The best way to protect from a bitcoin supply restriction is to not use any.

Both of these are free.