Background
I realized the other day that we (UX and others) make workflow diagrams but don’t distinguish in them between “system advances you to the next step” (or “offers you a link to a relevant thing”) and “user has to know to go somewhere else to do a necessary thing.” These are not at all equivalent; the arrows look the same, but those that denote “user has to know to go somewhere else to do a thing” should perhaps be painfully ugly, or broken, or absent altogether, to point out that a barrier exists.
We should be anticipating and lowering those barriers, and we often are not. As we make new things, we’re tempted to squeeze scope down tight enough that we can launch quickly, but end up erecting these barriers in the process.
UX folks are interested in producing what I’ll call coordinated experiences, but have learned (to my chagrin) that it’s hard to convince other folks in product development to participate, so they give up after a while. It is natural for engineering to look at the platform from the data model outward and assume that the appropriate interfaces match the data model in structure, and otherwise there should be sufficient demand for coordinated experiences rather than the uncoordinated ones we usually make. My hypothesis, then, is that some incentive, pressure, or habit is leading us in product development to scope projects such that we’re not spending the effort required to produce coordinated experiences.
Here’s the rub: a coordinated experience comes at an incremental cost. Or seems to; that increment is economically smart to spend, since it trades a bit more team effort for the avoidance of a large amount of repeated customer, CSM, onboarding, and support effort.
It’s normal to want our initial attempts at a feature or improvement to be small and manageable, so that we can deliver quickly, learn from the result, and reduce risk. It’s natural to want to get a small bit of software out into the world to see whether customers find it valuable, so that it rightly draws further investment or doesn’t. But if the initial experience, however slim, includes barriers that hamper its use, those barriers confound the adoption or task-success metrics we take as evidence that the feature is valuable. If we descope to the point that barriers are present, we are at cross-purposes with ourselves.
So the challenge is to make small scopes that nonetheless deliver an end-to-end experience.
Example: Uber’s MVP was a web form that offered to book a car to arrive in the very near future. There was no map, no phone app, no scheduling, no ratings, no service options, none of the larger experience that we associate with ride-sharing services today. But the core experience worked without assistance, without off-system work visible to the user, and without delay: you could request a car to appear at a particular location and it would. There was no barrier between the desire for a ride and a ride other than filling out the form, which you could do on your own. It was a slender experience but a complete, coordinated one.
Coordinated experience defined
A coordinated experience is one in which
- it’s clear to the user where to go to achieve their goal
- once there, the correct controls are intelligible and fall to hand as needed
- users are helped by sensible defaults and/or canned possibilities they can try, which suggest how the system is best used, rather than being faced with just a blank form
- capabilities needed to accomplish the goal are available without having to go find them or know in advance where to get them
- these capabilities behave in predictable ways learned from elsewhere in our platform and from other software
- the system supports user confidence that they have achieved the correct result:
  - the result of their work is clear
  - the right performance of the system is visible
  - it’s clear how to make changes
In short, the arrangement of capabilities and interfaces is governed by the use cases being supported, not necessarily the modularity of the back end.
(Yes, this is a basic UX concept, but it is forgotten so often that it needs a name outside of UX jargon.)
Coordinated experience tactics
We won’t need all of these tactics all the time for everything; they’re a set of possibilities. Which are useful depends on the use cases we intend to support; some will be overkill.
- Sensible defaults
- Galleries of canned options, useful at least as starting points
- Cross-linking to dependent or involved system objects that are managed elsewhere
- Reusing a capability (or presenting an otherwise stand-alone capability) as a module within a workflow where it is also needed
- Selectors that offer a choice of the existing examples of the required object type and a convenient way to create a new one
- Inspectors that explain a referenced system object without leaving the current context
- Traceability (explain how a result was arrived at)
- Simple versioning (accrue versions automatically as changes are made, allow an old one to be inspected and made current; see the sketch after this list)
- Hierarchies revealed in list views
- (of course there are more; these are the ones that leap to mind at the moment)
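To make one of these concrete, here is a minimal sketch of the simple-versioning tactic in TypeScript. Everything in it (VersionedObject, saveVersion, makeCurrent) is a hypothetical illustration of the pattern, not an existing API in our platform:

```typescript
// A minimal sketch of simple versioning: versions accrue automatically
// as changes are saved, any old version can be inspected, and an old
// version can be made current again. All names are hypothetical.

interface Version<T> {
  readonly id: number;    // monotonically increasing version number
  readonly savedAt: Date; // when this version was accrued
  readonly data: T;       // snapshot of the object's state
}

class VersionedObject<T> {
  private versions: Version<T>[] = [];
  private currentId = 0;

  constructor(initial: T) {
    this.saveVersion(initial);
  }

  // Accrue a new version automatically whenever a change is saved.
  saveVersion(data: T): Version<T> {
    const version: Version<T> = {
      id: this.versions.length + 1,
      savedAt: new Date(),
      data,
    };
    this.versions.push(version);
    this.currentId = version.id;
    return version;
  }

  // Allow any old version to be inspected without changing state.
  inspect(id: number): Version<T> | undefined {
    return this.versions.find((v) => v.id === id);
  }

  // Make an old version current by re-saving its data as a new
  // version, preserving the full history rather than rewriting it.
  makeCurrent(id: number): Version<T> | undefined {
    const old = this.inspect(id);
    return old ? this.saveVersion(old.data) : undefined;
  }

  get current(): Version<T> {
    return this.versions[this.currentId - 1];
  }
}

// Example: a hypothetical alert rule accruing versions as it's edited.
const rule = new VersionedObject({ threshold: 10 });
rule.saveVersion({ threshold: 20 });      // editing accrues version 2
rule.makeCurrent(1);                      // restoring v1 accrues version 3
console.log(rule.current.data.threshold); // 10
```

The design choice worth noting: making an old version current appends a new version rather than rewriting history, so the trail that supports user confidence stays intact.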
As we build up a library of coordinated experience patterns (object inspectors, galleries, simple versioning methods, and so on), it should become progressively easier to create coordinated experiences. But the core method is making sure that small scopes result in complete, coordinated workflows rather than fragmented ones.
Coordinated experience in the age of agents
It’s common to hear that agentic AI will make traditional interfaces obsolete: if you can just ask the computer to do a thing, you won’t also need a manual workflow to do that thing. This suggests an exciting future in which software can focus on doing what users want (or at least what they ask for) rather than providing tools for users to do it themselves.
We do aspire to a near future where systems handle more tasks, and chain tasks together to produce better results more quickly than a person would, simplifying interaction by speeding the user along toward the results they seek.
However, users will still need to
- Verify performance of the system – monitor the actions of the system and understand its effect on their business
- Verify agent recommendations – see that the agent’s recommendations or plans are sensible and well-founded in data, fostering confidence in the system and agent
- Verify performance and results of agent tasks – confirm that the agent has done things correctly, and understand the effect of these actions
- Understand the capabilities of the system – learn about what the system can do and how it is best used
- Make adjustments – correct errors in their own work and that of agents, try tweaks, follow hunches
This likely means that there’s plenty of interface! The emphasis shifts from the user directly manipulating the system toward the user being offered analyses and outcomes, but given the needs above, users will continue to require systems that
- are self-explanatory,
- are transparent in their operations,
- allow direct inquiry into objects, and
- enable direct manipulation.
The advent of agentic workflows, by reducing direct user operation of the system, will intensify the need for interfaces and workflows that are simple, coordinated, and re-learnable, rather than interfaces that depend on training, consultation, or practice for user success.