Jon Plummer

Today I Learned

Our position on AI tools

(This is a work in progress, but a pretty good start)

Designing AI-powered product experiences

User needs and customer problem first

Solving a valuable customer problem is paramount. Before selecting any technological solution, including AI, we prioritize understanding user needs and clearly defining the problem we aim to solve. Any AI application must serve a genuine, identified user need, rather than being a solution in search of a problem.

Transparency, explainability, and trust

We recognize that users may be curious, or even apprehensive, about how AI-powered features operate. While full algorithmic explainability may not always be feasible or necessary, we commit to being transparent about the inputs and context that drive AI outputs. We hope to empower users with a sense of control, offering opportunities to validate choices, preview actions, and interact with AI as an assistant before letting it run as an autonomous agent. Maintaining an audit trail of AI actions also supports accountability and trust.

Handling errors and edge cases

We acknowledge that AI-powered features will sometimes produce wrong or unexpected outputs. Our design approach for these scenarios focuses on graceful error handling and keeping the human in the loop. This means

  • Anticipating and mitigating potential issues through careful AI setup and training
  • Designing interfaces that offer previews, recommendations, and clear actions rather than proceeding blindly
  • Ensuring mechanisms for users to easily correct, override, or provide feedback on AI outputs
  • Maintaining a design philosophy where the AI recommends and assists, allowing users to retain ultimate control until they explicitly release the system to act

Ethical design and bias mitigation

We strive to reduce bias in AI-powered features by

  • Grounding our understanding in real customer knowledge rather than internal assumptions
  • Working with and analyzing customer data responsibly, without alteration, and ensuring its privacy and security
  • Establishing processes for monitoring the output of our features for unintended biases that may emerge

Iteration and learning through metrics

  • Clear project goals define success.
  • Success metrics (e.g., accuracy, recall, task completion rates) and experiential metrics (e.g., user satisfaction, perceived control, trust) are established upfront.
  • Continuous monitoring and analysis of these metrics drive iterative improvement, allowing us to refine the AI's performance and the user experience over time.

Using AI tools in day-to-day UX work

Our UX team embraces the strategic and responsible integration of AI tools into our daily workflows to enhance our capabilities and deliver more valuable experiences.

Strategic tool adoption and augmentation

We are actively experimenting with AI tools like Figma Make, ChatGPT, and Gemini to understand their potential. Our focus is not merely on speed, but on how these tools can enhance our ability to deliver valuable and usable experiences. We view AI primarily as an augmentation to our existing skills, particularly for

  • Inspiration and ideation: Generating diverse concepts, content variations, or design alternatives.
  • Early-stage prototyping: Quickly sketching out ideas.
  • Analyzing research data: Identifying patterns or themes in qualitative data (with careful oversight).

Maintaining UX quality through human oversight

The ultimate responsibility for UX quality remains with the human designer and the team members they work with. When using AI tools, each designer is accountable for the quality and accuracy of the output on their projects, regardless of AI assistance. We commit to human oversight and critical evaluation of any AI-generated content or insights. AI is a tool to assist, not replace, the designer's judgment, expertise, and empathy. All AI-assisted work undergoes the same review and validation processes as any other UX work.

Continuous learning and cross-pollination

We encourage designers to

  • Actively experiment with new AI tools and techniques
  • Share their learnings and best practices with the wider UX team and their project teams
  • Replicate and build upon the successful experiments of others
  • Embrace a fluidity in job boundaries, recognizing that AI tools may enable designers to contribute to areas traditionally outside core UX, fostering greater cross-functional collaboration

Ethical use of AI tools and intellectual property

Our ethical considerations for designing AI-powered products extend to our use of AI tools. We commit to

  • Transparency: Clearly acknowledging when AI tools have been used in our work, internally and externally where relevant. We will never misrepresent AI-assisted work as purely human-created
  • Data privacy and IP: Exercising caution regarding proprietary or sensitive customer data when interacting with external AI models. We will ensure we adhere to company policies and legal guidelines regarding data input into AI tools and the intellectual property of generated outputs
  • Maintaining control: Never ceding our understanding or control of customer knowledge, the design process, or design work to AI tools. The human designer remains the expert and ultimate decision-maker, responsible for the integrity of their work and the insights and design artifacts they share

What went right in October?

So many things, in retrospect:

  • Home progress!
    • Landscaping is done, trees are in
    • Storm drains are cleaned
    • The network is regularly providing 1200 Mbps, after modernizing a bit (paid for by selling the slightly older equipment)
  • Work progress!
    • The concept sprint I led was a resounding success – the execs wish they could sell our plan now, but they recognize they need to wait until they fund it and we build it – and there's talk of more
    • The pendulum is swinging back toward being more customer-centric
  • Life progress!
    • I pulled 410 last week by doing the plate math wrong, and it was no problem at all
    • At my last appointment my PT said "good job"; I bet PTs don't say that often

Apropos of…nothing (bitcoin)

A strategic reserve of a commodity implies that it's in the U.S. strategic interest to stockpile that commodity to protect against price shocks or supply restrictions, given its importance to the economy or to military readiness.

The best way to protect against a bitcoin price shock is to not buy any. The best way to protect against a bitcoin supply restriction is to not use any.

Both of these are free.

A quick word about taking feedback

When receiving feedback on a design or other work, the important thing is to see the intent behind the feedback and address that, rather than to take the specific advice or try to appease the feedback-giver.

Only by addressing the intent behind the feedback can the work be improved. Yes, this might mean taking the specific advice, but it might not. The specific advice may not be the right answer.

Appeasement is waste.

Weekly wins for the week of 2024 07 15

  • Vacations are winding down and people are starting to come back to work. The team is filling out again.
  • I convinced the SVP of Product to be quizzed about our product strategy – we'll put a script and some visuals together for our annual Company Connect and hopefully address longstanding complaints that people aren't sure how their work contributes to our strategy. This has raised other interesting questions that might also become topics, like how we really make money and where it goes, how our pricing and packaging works, etc. A lot of the things I was hoping to accomplish this year have been deprioritized (rightly) due to some technical pickles we find ourselves in, so this represents an opportunity to be influential beyond the usual process and product stuff we do on the reg; fingers crossed.
  • The ladies are back and the house is no longer empty. This also means my diet will improve – bachelor-mode Jon is (far) less disciplined than husband-mode Jon.