FAQ: OKR Not Working in My Company – Itamar Gilad - "Objectives and Key Results (OKR) is one of the most common processes in the industry, and yet, despite 50 years of history, it’s the one companies struggle with the most. This problem is most evident when talking with product people and product leaders — the same challenges and frustrations come up again and again."
The new AI-inclusive UX process – Greg Nudelman - "AI is like nothing we’ve ever done before. It demands a new level of rapid, flexible, user-centered thinking and rapid adjustment, neatly expressed in this new process diagram."
Designing software for things that rot – Vadim Drobinin - "The white mold was good. The green-grey mold was probably fine. The fuzzy black spot was... well, that's when I ended up in a midnight rabbit hole of forum posts debating whether that particular shade of black meant penicillium or something that would send me to hospital."
AI interfaces and the role of good writing – Nick DeLallo - "If you're lost in an AI user flow, blame the writing."
Trust isn't a feature, it's the interface – Tara Bird - "How do we build trust with AI? We can't. We can only build interfaces that help people trust AI."
Where's the AI design renaissance? – Erik D. Kennedy - "In 2022, I sent out some then-new Midjourney generations with the caption: If this doesn't revolutionize design, I don't know what will. Three years later, I am surprised to note… there hasn't been a revolution! This is particularly notable, since the last 3 years have been a non-stop firehose of AI hype."
Why User Self-Efficacy Matters for AI Product Success – Dr. Maria Panagiotidi - "As generative AI becomes increasingly prevalent in e-commerce and digital services, understanding the factors that drive user adoption has become a research priority. A new study published in Scientific Reports by Li, Zhou, Hu, and Liu (2025) provides compelling evidence that the answer lies in understanding how human-like features of AI systems influence user psychology through distinct cognitive pathways."
How to sprint
(Copied from a lengthy Slack message and edited for this medium)
A thought came up in a team retro at our offsite meeting, and I think it is relevant to several recent topics – PM and team expectations in sprint planning, agility, and so on:
Some teams have the habit of using sprint planning mainly to estimate tickets and arrange them into multiple sprints. This makes the plan rigid and squanders the power of a sprint to deliver a slice of functionality.
I've witnessed this sort of problem before, and it comes in part from a misunderstanding of what a sprint is for (it’s not just a planning increment but a shipping and goal increment) and what it means to be agile.
The usual way agile sprinting is taught is that each sprint needs a goal – a local smaller goal that serves the customer and serves the larger goal of the project/enhancement/initiative. Stories should be selected to serve that goal. Other stories should remain in the backlog as we are likely to learn things during the sprint that will inform the goal of the next sprint. With the overarching goal in mind we should be able to select smaller sprint goals on a sprint-by-sprint basis and avoid rigid pre-planning.
It gets interesting when we have a dependency diagram. While that’s a sensible and useful document that describes what is blocked by whom, it can lead us to deliver in the order the diagram establishes (a technically driven order) and have very little to show for our work until the final sprint. Proceeding in this way is rigid and developer-focused, not customer-focused. So the challenge falls to the team, and especially the trio, to select a sprint goal that they can deliver, even badly, working around dependency problems where they can. It's not easy, but it gets us away from the rigidity that over-planning creates.
AI's UI Problem Is Actually A New Era of Software – Derek Xiao - "While traditional software broadcasts differentiation through interfaces (Salesforce's fields, Slack's channels), AI's value lies elsewhere: not in what you see, but what you don't."
New on the blog
I've taken some belated inspiration from Jason Kottke and introduced a feature that's new to me: short link posts between "normal" posts. (His look much like normal posts, but I didn't want to work that hard.) I now have a way to add short links of interest to the blog; they'll appear in aggregate between the normal posts. Normal posts now end with a visual end-of-post token borrowed from magazines: a square. Since that square is inserted by the CSS ::after pseudo-element and has an empty string for its content rule, it shouldn't be obtrusive to folks using screen readers.
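For anyone curious how such a marker might be wired up, here's a minimal sketch; the selector, class name, and sizing are guesses for illustration, not my actual stylesheet.

```css
/* Illustrative only: .post-body and the dimensions are assumptions. */
.post-body > p:last-child::after {
  content: "";                    /* empty string, so screen readers have nothing to announce */
  display: inline-block;          /* sits at the end of the last line of text */
  width: 0.6em;
  height: 0.6em;
  margin-left: 0.35em;
  background-color: currentColor; /* the square takes on the surrounding text color */
}
```

Because the generated content is empty and the square is drawn as a styled box rather than a character, assistive technology skips right past it.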
There are a lot of links recently; I hope to keep it to a dull roar but the design world is changing so quickly that there's a lot to learn about.
Full posts appear in an RSS feed, found as before at https://jonplummer.com/feed.xml, and links now appear in their own feed at https://jonplummer.com/links-feed.xml. Enjoy!
NASA Ames Research Center archives – Beautiful Public Data – Jon Keegan - These archives prove that functional design can be beautiful. There's lots more to read (and learn about) at https://beautifulpublicdata.com/
Quick links on SaaS UI trends
Some collected examples of SaaS UI trends:
Decentralizing quality – Why moving judgment to the edges wins in the long run – Matt Ström-Awn - Everyone agrees quality matters, but we can't agree on what it is — or who gets to decide
Building trust in opaque systems – why the better AI gets at conversation, the worse we get at questioning it – Fabrizia Ausiello - "How do we know when to trust what someone tells us? In-person conversations give us many subtle cues we might pick up on, but when they happen with an AI system designed to sound perfectly human, we lose any sort of frame of reference we may have."
Escape from the Figma Titanic: Part 2 – UXer's guide to magic RAG – Greg Nudelman - Part of a series on pivoting UX into designing context and evals, interesting perspective: "RAG (Retrieval-Augmented Generation) is a method for providing just-in-time content, enabling LLMs and AI Agents to answer users' questions and perform autonomous tasks. Recent advances in RAG make it a perfect magic carpet for ambitious user-centered UXers seeking to escape the sinking ship of pixel-pushing irrelevance."
Video: 3D-printed soap filament – Andrew Sink - Apropos of a discussion of 3D printing with odd materials such as cheese or peanut butter
Video: Best Cursor workflow that no one talks about… – AI Jason - A long video, but the first 18 minutes describe a workflow that goes from zero to application via what is essentially a solid product management process
Why I hate the MVP car – Dave Rupert - One can easily learn the wrong lessons from this popular software development meme that was not supposed to be a meme
Reimagining UX research in the AI age – Maven - Potentially interesting course on how AI is changing UX research. Maven has several interesting courses
AI-assisted design workflows: How UX teams move faster without sacrificing quality: What AI actually does in UX workflows – Cindy Brummer - Quick rundown of some AI-assisted UX practice use cases, in the midst of a helpfully elementary article about AI-assisted UX practice
Snowball vibe coding guide – Greg Nudelman - It's not that hard to get started vibe coding; this guide steps you into it without much pain and with a useful reminder to focus on the customer and data rather than the code or initial design
3 UX tips to make "aha moments" click: Too Good To Go onboarding – Growth.Design - I dig these little UX interventions. They don't always directly apply to our domain but there's almost always something to learn or be reminded of
The best way to learn AI for UX and product professionals – Model Context Experience - Potentially interesting training by the inimitable Peter Van Dijck
Storyboarding for AI-driven products – Greg Nudelman - I've long been a proponent of storyboarding, wireflows, and other methods that look at a process (or better, a network of processes) rather than focusing on single-screen mockups, especially when getting started putting an experience together. And this is becoming more important as we shift focus toward providing AI-powered capabilities to our customers. Here's a little very light reading on storyboarding that might be useful. (It's one article's worth of content in a series of four posts, alas.)
GPT-5 hands-on: welcome to the stone age – Latent.Space - "OpenAI's long awaited GPT-5 is here and swyx + ben have been testing with OpenAI for a while now. tldr; we think it's a significant leap towards AGI"
Emerging patterns in designing for AI – My Name Is Jehad - "There are a few emerging patterns in designing for AI-first experiences. Here are a few"
Friends don't let friends make bad graphs – Chenxin Li, Ph.D. - An opinionated essay about good and bad practices in dataviz, with examples
Toward coordinated experience
Background
I realized the other day that we (UX and others) make workflow diagrams but don't distinguish in them between "system advances you to the next step or offers you a link to a relevant thing" and "user has to know to go somewhere else to do a necessary thing." These are not at all equivalent. The arrows look the same, but those that denote "user has to know to go somewhere else to do a thing" should perhaps be painfully ugly, or broken, or absent entirely, to point out that a barrier exists.
We should be anticipating and lowering those barriers, and we often are not. As we make new things, we’re tempted to squeeze scope down tight enough that we can launch quickly, but end up erecting these barriers in the process.
UX folks are interested in producing what I'll call coordinated experiences, but they have learned (to my chagrin) that it's hard to convince other folks in product development to participate, so they give up after a while. It's natural for engineering to look at the platform from the data model outward and to assume that the appropriate interfaces match the data model in structure, and otherwise there should be sufficient demand to produce coordinated experiences rather than the uncoordinated ones we usually make. So my hypothesis is that some incentive, pressure, or habit is leading us in product development to scope projects such that we don't spend the effort required to produce coordinated experiences.
Here's the rub: coordinated experience comes at an incremental cost – or seems to. That increment is economically smart to spend: it trades a bit more team effort for the avoidance of a large amount of repeated customer, CSM, onboarding, and support effort.
It’s normal to want to make sure our initial attempts at a feature or improvement are small and manageable to deliver quickly, learn from the result, and reduce risk. It’s natural to want to get a small bit of software out into the world to see if customers find it valuable so that it will rightly draw further investment or not. But if the initial experience, however slim, includes barriers that hamper its use, those barriers confound the adoption or task success metrics we take as evidence that the feature is valuable. If we descope to the point that barriers are present we are at cross-purposes with ourselves.
So the challenge is to make small scopes that nonetheless deliver an end-to-end experience.
Example: Uber’s MVP was a web form that offered to book a car to arrive in the very near future. There was no map, no phone app, no scheduling, no ratings, no service options, none of the larger experience that we associate with ride sharing services today. But the core experience worked without assistance, off-system work visible to the user, or delay: you could request a car to appear at a particular location and it would. There was no barrier between the desire for a ride and a ride other than filling out the form, which you could do on your own. It was a slender experience but a complete, coordinated one.
Coordinated experience defined
A coordinated experience is one in which
- it's clear to the user where to go to achieve their goal
- once there, the correct controls are intelligible and fall to hand as needed
- users are helped by sensible defaults and/or canned possibilities they can try, and that suggest how the system is best used, rather than being faced with just a blank form
- capabilities needed to accomplish the goal are available without having to go find them or know in advance where to get them
- these capabilities behave in predictable ways learned from elsewhere in our platform and from other software
- the system supports user confidence that they have achieved the correct result
- the result of their work is clear
- the right performance of the system is visible
- it's clear how to make changes
In short, the arrangement of capabilities and interfaces is governed by the use cases being supported, not necessarily the modularity of the back end.
(Yes, this is a basic UX concept, but it is forgotten so often that it needs a name outside of UX jargon.)
Coordinated experience tactics
We won’t need all of these tactics all the time; this is a set of possibilities. Some might be useful depending on the use cases we intend to support; some will be overkill.
- Sensible defaults
- Galleries of canned options, useful at least as starting points
- Cross-linking to dependent or involved system objects that are managed elsewhere
- Reusing a capability (or presenting an otherwise stand-alone capability) as a module within a workflow where it is also needed
- Selectors that offer a choice of the existing examples of the required object type and a convenient way to create a new one
- Inspectors that explain a referenced system object without leaving the current context
- Traceability (explain how a result was arrived at)
- Simple versioning (accrue versions automatically as changes are made, allow an old one to be inspected and made current)
- Hierarchies revealed in list views
- (of course there are more; these are the ones that leap to mind at the moment)
As we build up a library of coordinated experience patterns – object inspectors, galleries, simple versioning methods, and so on – it should become easier over time to create coordinated experiences. But the core method is making sure that small scopes result in complete, coordinated workflows rather than fragmented ones.
Coordinated experience in the age of agents
It’s common to hear that agentic AI will make traditional interfaces obsolete – that if you can just ask the computer to do a thing, you won’t also need a manual workflow to do that thing. This suggests an exciting future in which software can focus on doing what users want (or at least ask for) rather than providing tools for users to do it themselves.
We do aspire to a near future where systems handle more tasks, and chain tasks together to produce better results more quickly than a person would, simplifying interaction by speeding the user along toward the results they seek.
However, users will still need to
- Verify performance of the system – monitor the actions of the system and understand its effect on their business
- Verify agent recommendations – see that the agent’s recommendations or plans are sensible and well-founded in data, fostering confidence in the system and agent
- Verify performance and results of agent tasks – confirm that the agent has done things correctly, and understand the effect of these actions
- Understand the capabilities of the system – learn about what the system can do and how it is best used
- Make adjustments – correct errors in their own work and that of agents, try tweaks, follow hunches
This likely means that there's plenty of interface! The emphasis shifts from the user directly manipulating the system toward the user being offered analyses and outcomes, but given the needs above, users will continue to require systems that
- are self-explanatory
- are transparent in their operations
- allow for direct inquiry into objects, and
- enable direct manipulation.
The advent of agentic workflows, by reducing direct user operation of the system, will intensify the need for interfaces and workflows that are simple, coordinated, and re-learnable rather than interfaces that depend on training, consultation, or practice for user success.
Shape of AI: UX patterns for artificial intelligence design - "Exploring how user experience will evolve with the growth of artificial intelligence; the rise of AI has caused a paradigm shift in how people interact with technology. Our interfaces may evolve, but the foundations of great design are more relevant than ever"
AI visual language: How to help users spot AI-powered features – Tia Sydorenko - "While sparkles have effectively monopolized the identification of AI, concerns about accessibility and intuitive communication have risen"
Emerging UI/UX patterns in generative AI: a visual guide – Whitespectre - "From iconography, color, animation, labeling and more this guide delves into the key trends to consider when integrating Generative AI into your user experience"
How Apple fooled users with fake infinite scroll – Elvis Hsiao - A chat between designers and engineers can result in sleight-of-hand that seems nicer/fancier than the constraints would suggest
Beyond the hype: what AI can really do for product design – Nikita Samutin - "AI tools are improving fast, but it's still not clear how they fit into a real product design workflow. Nikita Samutin walks through four core stages — from analytics and ideation to prototyping and visual design — to show where AI fits and where it doesn't, illustrated with real-world examples"
Video: How to master Cursor Chat to build features with control – Ian Nuttall - "Chat with an LLM right in the sidebar, with one click apply of code changes, or running terminal commands"
Tools for vibe coding – MadeWithVibe - "A curated list of the best tools to boost your vibe coding workflow"
Fogg behavior model – BJ Fogg, Ph.D. - "The Fogg Behavior Model shows that three elements must converge at the same moment for a behavior to occur: motivation, ability, and a prompt"
Video: Research in the face of complexity: new sensibility for new situations – Dave Hora - "How we design and deliver products is changing; how research plays is following suit. Our environment is complex, and evolving. AI's role in design, product, research, and engineering is accelerating. New practices are shifting the boundaries between our functions"
WCAG vs EAA: understanding where WCAG stops and the EAA starts – Team Stark - "Many teams believe that meeting WCAG standards means their digital products are compliant with the European Accessibility Act (EAA). Using a clear comparison grounded in EN 301 549 and EN 17161, we extensively detail what's in and out of scope—and why organizations need to operationalize accessibility like they do privacy and security"
Video: DS live: context-based design systems: preparing for an AI-driven future – TJ Pitre - "Design systems aren't just about consistency anymore, they're becoming the connective tissue between people, tools, and intelligent agents"
BrowserStack accessibility design toolkit - "Adhere to WCAG by building an accessible component library, auto-detecting issues for web & mobile interfaces, and streamlining dev handoff"
What keyboard input makes sense for tabbing out of a rich text field? – UX Stack Exchange - Focus management for accessibility matters, and is complicated by modal dialogs and rich text editors
Personal names around the world – W3C Internationalization Committee - "How do people's names differ around the world, and what are the implications of those differences on the design of forms, databases, ontologies, etc. for the Web?"
Falsehoods programmers believe about names – Patrick McKenzie - "…so, as a public service, I'm going to list assumptions your systems probably make about names. All of these assumptions are wrong. Try to make less of them next time you write a system which touches names"
Video: Prompting 101 | Code w/ Claude – Anthropic - This session demonstrates best practices using a real-world car accident claim scenario
Video: The Pygmalion effect: in which a vibe-coding experiment becomes a million lines – Christian Crumlish - "What started as intuitive "vibe coding" with AI assistants quickly revealed why UX professionals need systematic Information Architecture approaches more than ever"
BSky: You CAN but you WON'T – @spavel.bsky.social - "…and this is a great lesson in User Experience Design: you CAN but you WON'T, because of how the software is designed"
What went right since October 2024?
So many things!
- Work
  - I promoted someone
  - I failed to promote someone, but learned a lot and it was the right decision
  - We've had a couple of leadership offsites and they have been both pleasant and valuable
  - I've guided my team from AI-skeptic or AI-agnostic to AI-curious, and written a quick position paper to explain our approach to both using AI tools and designing for AI-powered experiences
  - UX people are strong participants in product trios, at long last
  - We're hiring!
- Home
  - We've caught up on a handful of long-overdue home projects, just in time for the summer heat. Curtains, blinds, gym flooring, patch and paint, more curtains… there's more to do, always, but good progress after a bit of a stall
  - The ADU is now occupied
  - The girl is enjoying her six-week ballet intensive in a far-off state
  - I got my GMRS license. Say hello to WSIX524
  - Mr. Fixit has branched out into a little light metal work including repairing a watering can and making a house key easier for a blind person to use