Weekly wins for the week of 2023 04 03

  • Good Friday Spring Holiday – this day off caught many at work by pleasant surprise.
  • A chat with the SVP of Engineering reminded me that I’m overdue on publicizing and gathering support for the high-level version of my plan for the department. My goal for the week is set!
  • While Swift syntax is distinctly weird to my “C-like” early-JavaScript-trained eyes, plowing through the Apple-provided tutorials is helpful. Somewhat. Even so, the use/placement of dots in something like the following is distinctly odd to me:
VStack() {
	Image(systemName: "pin")
	Text("Placeholder text for now")
}

Yeah, I’m team tabs.

Weekly wins for the week of 2023 03 27

  • It wasn’t COVID-19, just a sinus infection that dripped into crevices and alarmed passers-by. I’m nearly 100% now.
  • Nothing caught on fire while I was out.
  • The hackathon a few weeks ago was inspiring enough that I’ve started to learn a little bit about iOS app development to fulfill a personal project. Since my last real coding experience was a bunch of kooky JavaScript stuff around the turn of the century, I’m well out of my depth, but it’s both fun and frustrating; there’s a point at which I’m familiar, I’m familiar, I’m familiar with the concepts in a tutorial and then whap I’m met with something totally baffling. As the ultramarathoners say, RFM.

Weekly wins for the week of 2023 03 20

It’s spring break, that awkward nearly month-long period when employees with kids start disappearing for a week at a time. Every year the challenge is to make sure that people have prepared their teams for their absence; that enough is done and questions answered (and backup help secured) that whoever remains can proceed without much trouble, and especially that the person’s absence doesn’t come as a surprise.

When I asked each person this week, “what do we need to do to prepare your team for your absence?” each person’s answer started with what they’ve already done. That’s a big win.

Weekly wins for the week of 2023 03 13

Last week was all about an off-site meeting involving the product and engineering groups.

(What does off-site mean in today’s remote-first environment? Never mind.)

  • It was a good set of sessions! In particular, the hackathon presentations were funny and inspiring. Enough so that I’m tempted to get back into coding a little. Maybe I could hack(athon) a bit one day.
  • My team really came together working on our “elevator-pitch-style” team charter. I have some homework to do to set us up for the last step, but it’s a pleasure to collect the good thoughts of good people trying hard to improve.
  • A coworker made their displeasure with the past UX regime abundantly clear in a group setting, and I decided I would not let it bother me. Even so, it did, for a bit. But later when we talked about it and I told them I had decided not to let it bother me, it actually no longer bothered me. It worked!

ChatGPT is going to tempt me to be more skeptical of your work

Lately I’ve been seeing a lot of posts on LinkedIn and elsewhere crowing about how ChatGPT could be used to perform UX tasks.

The enthusiasm is great but this level of shortcutting worries me. It’s okay to ask ChatGPT to find things to read about a topic if you’re fine with some of the results not being appropriate or even not existing. But I don’t think it is fine to ask it how to do something or to perform research on your behalf. ChatGPT’s emphasis is on delivering something that looks sensible, nothing more.

ChatGPT is not a knowledge model, it’s a language model. If you’d like to dive into just how ChatGPT works, Stephen Wolfram has a great explanation in his article What Is ChatGPT Doing… and Why Does It Work?

The core idea is that ChatGPT is very good at figuring out what a very likely next word might be based on the prior words it has chosen, the prompt that it was given, and word frequency and proximity data derived from a huge amount of copy scraped from the internet. Since it doesn’t actually know anything it does a great job of making plausible-sounding English of the sort you might find anywhere on the net. Since the internet is the training data, the quality of the output is only about as good as the average quality of internet writing, which is not fabulous.
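To make that concrete, here is a toy sketch of the “likely next word” idea: a tiny bigram model in Python. To be clear, this is nothing like ChatGPT’s actual machinery (a neural network over tokens, trained on an enormous corpus), and the little corpus and function names below are made up for illustration, but it shows the basic move of picking a next word from frequency data alone:

```python
from collections import Counter, defaultdict

# A toy "likely next word" picker. It only illustrates the core move of
# choosing a next word from word-frequency data in a training corpus.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" most often
```

Chain calls like this together and you get fluent-looking output in which every word follows plausibly from the last, with no knowledge anywhere behind it.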

It’s important to remember that there’s no attempt to make sure that what ChatGPT returns is factually accurate. Bloggers and reporters experimenting with ChatGPT have accused it of making things up or “hallucinating,” but this complaint assumes that accuracy should be expected. It should not. ChatGPT is just trying to be plausible.

I’m not saying not to use ChatGPT. It’s great as a memory jogger, or to avoid the tyranny of the blank page. It makes a perfectly shitty first draft that you can then do real work on. But if you just accept what it has to say you are choosing a below-average and likely nonsensical result. And if you use it as a substitute for doing the work that ChatGPT is simulating the output of, you are lying to yourself and others.

Since ChatGPT produces superficially plausible output, hiring managers are going to need to scrutinize a candidate’s work more closely, and quiz a candidate more carefully. (Yes, we should already be doing this.)

On a Slack team I’m on there was a recent debate as to whether or not an engineering manager should accept ChatGPT output as the answer to a coding test, if during their regular duties a new hire would be allowed to use resources like StackOverflow, which often provides code snippets, Google Search, or even ChatGPT. What do you think, given the above?

  1. ChatGPT can produce reasonable-looking Python and other languages; a co-worker successfully asked it to return JSON in response to a bit of copy in which someone asked for an appointment on a specific date and time.

Weekly wins for the week of 2023 03 06

I’m light on wins this week. The most challenging part of posting wins each week is being conscious of them!

  • I wrote two blog posts this week, and without waiting for the weekend. I find that I’ll write a little explainer for someone in the normal course of business, mentoring, etc. and realize that it would make a pretty good short post. (This might be my main source of content.)
  • I’ve an idea for another quick post re the favorite topic these days: ChatGPT. People’s un-ironic embrace of ChatGPT as a substitute for actually doing the work yourself is understandable but alarming. It’s going to cause us to look at people’s work a bit differently for a while. (Actually, an idea is only a win if you execute on it, because ideas are cheap.)

Don’t interrupt the natural behavior

Don’t interrupt the user’s natural behavior. Enhance or extend the natural behavior, but remain compatible with it.

While I worked for Belkin we made a remote-controllable plug-in switch module. You would plug this thing into the wall and then plug a lamp or something into it, and it allowed you to control the lamp with your phone, turn it on or off, set a timer, etc. It was great, it sold pretty well, it was totally DIY-able, and pretty understandable. But folks who used it were ultimately lukewarm about it – they didn’t love it. It interrupted the natural behavior of turning on and off the lamp. Instead of going to the stem of the lamp you had to either use your phone or push a button on the unit, which was at outlet-level on the wall. If someone turned the lamp off the old-fashioned way you could not use the unit to turn it back on without ALSO turning the lamp on the old way, blunting its usefulness.

A person has thousands of hours of practice turning their lamp on and off in the way it affords. And suddenly they and everyone else in their household needs to stop doing that and do some new, unfamiliar, and potentially awkward thing to just turn the lamp on or off.

Later we sold the same guts in a device that replaced a wall switch. People were much happier with this because there was nothing to get “wrong” – if someone pressed the wall switch to turn the light off, you could still operate it with the phone or as a normal wall switch. It fit naturally into people’s existing behavior and enhanced it.

Concept selection in Horizon 2: Concept

Michael asks:

Once you have a variety of potential solutions/designs how do you know which one(s) to choose to iterate on and which ones to discard?

The ideal concept is one you can build without too much difficulty, that delivers the intended benefit, is intelligible, leaves you open to future improvement, and has a ready way of witnessing success.

Typically you’ll be working on a team with (or at least have a high level of contact with) someone in charge of the product (a product manager, usually) and someone in charge of engineering (a software architect or software engineer, usually) – together with design this group forms the “product trio.” Each person in the trio has expertise in or evidence for some of the criteria by which you might evaluate concepts. With these in mind, the negotiation over which concept (or which parts of which concepts) to pursue can begin. For example:

UX – Is this concept intelligible to users, i.e. do they understand it and believe that it will deliver the desired benefit? Will this concept create a pleasant experience for the people we hope to serve? Does this concept use familiar interaction paradigms? Is this a concept we can build on later or will it need to be scrapped to add functionality? Can we partialize this concept if we need to reduce scope? Will we be able to detect whether or not people are successful in using it (e.g. by counting orders or conversions of some kind, or some other measure of user outcome)? Etc.

Product – Will this concept deliver the intended benefit? Is this concept strategically relevant? Does the cost/complexity fit our appetite to do the work? Does it seem intelligible to customers (who might be distinct from users)? Can we partialize this concept if we need to reduce scope? Can we add capabilities to this concept to improve it in the future? Will we be able to detect whether or not use of the concept is helping the business (e.g. by counting orders or conversions of some kind, or some other measure of business outcome)? Etc.

Engineering – Is this concept feasible? Does it use data we have available or can get readily? Does it use technology and services we are familiar with or can learn readily? Does the cost/complexity fit our appetite to do the work? Does it lead us into an area we want to develop technically or to strengthen existing capabilities? Etc.

You can see some overlap: the appetite question is PM + Engineering, for example, and customer and user intelligibility is PM + UX. There are others.

In an individual case study lacking these team members you will need to guess at some of these, or at least reveal your thinking about your concept selection.

Weekly wins for the week of 2023 02 27

  • For the first time in several weeks I did not type “weekly winds.”
  • Quarterly coaching/reviews are done. Annual merits are done. Things are almost in place for a productive offsite meeting in about a week’s time. Everything’s coming up Milhouse (except that the product has plenty I’d like to change or fix, not unexpected).
  • I received good feedback this week from one of the people I support, indirectly through my supervisor. That’s nice.
  • I used a power bar rather than a deadlift bar to deadlift this week, and it was fine. The knurling was more aggressive than my hands prefer, but the weight went up just the same.

Still more on expectations of quality

The general idea is that scope should scale but quality should not. All of these are achievable in small scopes and, if we care about quality, are not “extra” costs.

  • If it is not usable we will not learn what we hope to learn from an alpha or beta – our learning will be confounded by usability issues.
  • If it is unpleasant to use, its uptake will be blunted.
  • If it is not visually credible, confidence in its function will be blunted.
  • If it contains needless toil its uptake will be blunted.
  • If it is incomplete in the intended use cases it will seem broken.
  • If it is incomplete in its states, messages, and errors for the covered use cases it will seem broken.
  • If it is not obvious it is not usable. This is just a facet of usability that we should strive for in every delivery to live users.
  • If it is not self-explanatory it has poor usability and increases the cost of training, which is backward from what we plan to do.
  • If it is poorly-labeled it is not usable. This is just a facet of usability that we should strive for in every delivery to live users.