Here’s a recent talk I’ve given regarding why we pay attention to accessibility, and how the accessibility-related practices at Cayuse have changed in the past year.
Just as our customers have a responsibility to provide accessible tools to their employees, Cayuse has a responsibility to deliver accessible software to our customers. In this session, we’ll discuss how accessibility practice is changing at Cayuse, the accessibility status of our products, and what we’re doing to continuously improve.
I used to have a hiring process that carefully ensured every team member had a role in questioning, evaluating, and discussing each candidate. My employees loved it, but candidates suffered through it and it made us very slow to make decisions. I worked on tightening up the process (and especially its planning and logistics) so we could get through all of the various meetings in a half-day or so. That helped a little bit. Then I tackled improving how we made the yea/nay decision on a candidate, aiming to make a decision the evening of the interview day. We picked up the pace a bit more, but still chewed through a lot of candidates looking for one who would get a thumbs-up from enough people. We were still slow, rejecting too many fine folks who could have helped us. And we wasted a lot of employee time doing so, both within each individual candidate’s process and across candidates.
There were some nice points to this process, namely that we
became a lot clearer on the few criteria we used to score candidate portfolios
carefully divided interview responsibilities to reduce duplication of questions and topics
developed a standard scoring scale for each criterion: no/near/yes/plus, where we were hoping for “yes”es, but a candidate we liked who had a few “near”s might be offered an opportunity to move them to “yes” (a rough sketch of this scale appears below)
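For the concretely minded, here’s roughly how that scale and decision read could be written down. This is just an illustrative sketch – the criteria names and the “mostly yes” threshold are made up for the example, not our actual checklist.

```python
# Hypothetical sketch of the no/near/yes/plus scale. Criteria and threshold
# are illustrative only, not the real checklist.
SCALE = {"no": 0, "near": 1, "yes": 2, "plus": 3}

def worth_pursuing(scores: dict) -> bool:
    """A candidate with no hard "no"s and mostly "yes"es (plus a few "near"s
    we can probe further in the interview) is worth moving forward."""
    values = [SCALE[s] for s in scores.values()]
    no_hard_nos = all(v >= SCALE["near"] for v in values)
    mostly_yes = sum(v >= SCALE["yes"] for v in values) >= len(values) / 2
    return no_hard_nos and mostly_yes

# Example: a candidate we liked, with a couple of "near"s to explore further.
print(worth_pursuing({
    "explains their own contribution": "plus",
    "clear design rationale": "yes",
    "research literacy": "near",
    "collaboration with engineering": "near",
}))
```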
Then I changed jobs to a firm that was much smaller and could not afford a lengthy process, even if that process was contained within a day or two. Even if that process in its rigor strove to protect the firm from mis-hires. We could not involve the whole team. We could not take hours out of everyone’s schedule to accomplish an interview, much less a hire. And we had to be even more crisp with how we evaluated people to set aside the inappropriate candidates quickly but not deny a good person an opportunity to shine.
I also realized, when putting this new process together, that part of the stress for candidates is in the gaps between activities – not knowing what to expect, not knowing if you are succeeding or failing, not knowing how to prepare. I strive to reduce these stressors in my hiring process.
I’m also conscious of the work that a candidate is tempted to do to prepare for an interview – polishing their portfolio, doing design exercises, practicing presenting their work. These are all barriers to folks who might be amazing employees but who, due to other constraints (such as caring for a child or an aging parent, economic pressures, attending school), don’t have the time outside of work to polish a portfolio. I don’t care about the ability to spend time outside of work preparing; I care about how a person thinks about the work and how that thought appears in the activities they choose and the work they produce. I don’t need a person to do a bunch of unpaid homework to prove these things to me. I do need them to be able to talk coherently and in some detail about the work they did, how they integrated with the team, what they chose to do and why, and how their contributions helped.
The result:
I evaluate a candidate’s resume and portfolio. I have a little checklist of characteristics/experience/capabilities to watch for. If a person checks most of the boxes I invite them to a screening call. Boxes that aren’t checked are noted for steps two and three. (So far I’ve had success evaluating the “portfolios” of designers, writers, researchers – really anyone who can provide work samples of some kind.)
I have a well-planned half-hour screening call, kept strictly to the half-hour, including a brief explanation of the entire process and what to expect timing- and communication-wise. This explanation demonstrates empathy and helps put the candidate at ease. I explain the job, the company, the team, and some of our current challenges, then ask the candidate about what of their experience they find relevant. This gives them an opportunity to learn about the role, decide if it seems exciting or uninteresting, and shape their story. At the end of the call I let the person know if I’d like to talk to them further. In most cases we can schedule our next meeting right then. If not, I explain when they can expect to hear from me.
We have a 90-minute interview mostly organized around portfolio review. I bring one team member to this meeting. Rather than try to go over everything in a candidate’s portfolio, we ask them to bring just two projects that they are especially proud of and that seem relevant in some way to what we discussed in the screening call. I counsel them not to make anything new or fancy for this meeting. They don’t need a polished portfolio; they just need real examples of work they’ve done, in whatever stage of completeness they have. As mentioned above, I’m looking for evidence that they actually did the work and for their thought processes around the work. This meeting ends with us telling the candidate exactly when they can expect to hear from us. That moment is never more than a day or two away, allowing for weekends.
The team member and I have a quick huddle where we go over our scoring and decide whether or not we’d like to make an offer, and at what level. I immediately work with whatever people at the firm are necessary to get an offer prepared quickly, if needed, and we rapidly communicate a verbal offer or non-acceptance to the candidate within the promised timeframe.
This process lowers barriers for the candidate, saves us and the candidate needless anxiety and extra work, is quick yet rigorous, and of the many people I’ve screened/interviewed/hired, only one turned out to be a mis-hire so far.
I got laid off today. That’s not a win, surely. But it set off a few:
The moment I announced I was #openToWork on LinkedIn, there was an inrush of good wishes, referrals, and recommendations.
The surviving and departing members of my team spontaneously gathered to provide mutual support. Their comments to each other and to me are really heartening.
I remembered to create an alumni Slack, and some people who left prior to this layoff are showing up. As with the flameout of my previous company, there are good people who formed good relationships and are happy to band together.
You might call these “understand” and “design,” or “investigate” and “generate,” or whatever. It’s the same. There’s nothing fancy or proprietary about any of this. But you have to do both. If you learn and don’t make, you are more knowledgeable but don’t have a solution to the problem. If you make and don’t learn, you have a thing, but an indefensible one, with no way of knowing if it is any good.
You might learn to make making easier, or you might have to make some things in order to facilitate learning. You might alternate between learning and making. You might learn, make, make, learn, make, etc. But you’ll do both. Sometimes making just captures what you’ve learned in a form you can use later.
Yeah, this is basic stuff. Baby steps. Even so, some teams fail at this level.
Three “horizons”
I like to think of product delivery, and thus the work of research and design, in three segments or horizons.
Horizon one is the horizon closest to whatever we are hoping to ship soon. I call this the “detail” horizon. It involves creating workflows, selecting and laying out controls, defining behavior, formative usability testing, implementation support, and instrumentation. Most teams seem to do most of their work in horizon one. That’s an anti-pattern: without adequate attention to the other horizons, work in horizon one is poorly supported, organizations aren’t learning, and the designs are accidentally good at best and indefensible at worst.
Horizon three is the farthest out. In the third horizon we’re looking for problems we might solve for our existing or prospective customers, and proposing benefits that will address those problems. Call it the “benefit” horizon. Most teams leave this horizon to product managers and executives, though they shouldn’t. Executives especially can fall into the trap of leaping from horizon three to horizon one, or even shortchanging horizon three in an effort to get to horizon one quickly. Horizon three is learn-heavy, but there’s making in proposing benefits that the team may choose to deliver.
Between horizons one and three is horizon two. This is where we bridge the gap between a benefit and a specific design. I call this the “concept” horizon because there are many ways to deliver a given benefit, and we need to figure out some concepts and choose among them before getting into the details. I’ve witnessed very few teams that explicitly work in this horizon, and the delivery quality of the teams that skip it suffers.
I’m still chewing on this
One quibble I have with this “horizons” idea is that the numbering runs opposite to the workflow; for a given product you work in horizon three, then horizon two, then horizon one. But naming them works and provides a handy mnemonic for the overall workflow: “benefit, concept, detail.” That might be more powerful than speaking in horizons.
Surely there’s more
Yep. These are just the bones. Flesh yet to be written about:
Specific learn and make activities in horizons one, two, and three
How product, UX, and engineering should interact in each horizon
Double diamond, single diamond, how many diamonds and how large
Scaling the process up or down to be responsive to the task at hand
Other philosophical points important to this process
I’m using a process initiative at work to demonstrate to the company, especially to folks not on the product team, a basic UX research and design process. The main idea is to short-circuit the all-too-common impulse to leap from an identified problem or need to one seemingly obvious solution. Executives are famous for this, but it’s common in other parts of the company as well. Executives are also famous for mixing generation and evaluation, which should be held apart for a while.
Going hard at the gym has started (at long last) to result in less or even no knee pain at night, leading to better sleep. Hallelujah.
I’m getting closer to a unified field theory of UX research and design. Watch this space for stabs at explaining parts of it at various levels. The first bullet above holds one fragment.
I’m not that worried about it. The number of jobs an employee has is not my concern.
If the person is getting their work done and meeting my performance/communication/availability standards, I don’t have a problem.
If they are not getting their work done and/or not meeting my performance/communication/availability standards, I do have a problem.
In neither case do I need to cast about for proof that the person is working another job or two, or distracted by caring for an elder parent, or going through a messy breakup, or what have you. I gain nothing by investigating each employee to see if they might be working another job. If I determine that there’s a performance problem, I need to talk to the employee and manage the issue.
No one seems to care if an employee in the C-suite serves on multiple boards (unless they are competitors) or advises multiple startups (unless they are competitors). No one seems to care if an employee also plays in a band. No one seems to care if a person sells their ceramics or tunes pianos or works on software projects on the side for free; perhaps it’s conflict of interest we’re worried about?
If there’s no performance problem and no conflict of interest, is there a problem?
My two conference talks went fine. It’s not easy to gauge audience response when using some of these online conference platforms, so I’ll reserve further judgement until the survey results come in.
One of my online participatory design exercises went fine (the other was marred by low participation). These are hard to do online, but better than nothing, and we learned some things even so.
I tried doing a pull-up on a pair of cannonball grips, and surprised myself by succeeding without much trouble. I haven’t trained pull-ups for years. Maybe I should. (Word is you should “Spock” your grip on these, with two fingers on either side of the bracket at the top.)
Here’s a quick one that has saved me time and helped me mask my frustration:
When dealing with a support agent via online chat, keep a copy of everything you type in a separate text file. That way when you think of a better way to explain yourself, or inevitably have to re-explain everything to another agent, you have text at the ready. It’s much less work to copy and paste the relevant bits than to write them all again, is less emotionally taxing, and you can look like the nice person you really are, even if this all has gone on far too long.
In early October I was asked to give an eight-to-ten-minute presentation summing up the year for UX. A tall order, but I embraced blazing through the content to alight briefly on the things I thought the general company audience should know about UX and how we were trying to help.
With most of our work going toward rewrites that have not yet launched, there’s little to say about outcomes, so I followed a rapid tour of our outputs with a couple of quick demos and an invitation to view my upcoming Cayuse Connect Conference talk about UX philosophy and practice.
My eight-minute talk, wherein I tried to explain and demo the accomplishments of my department over the prior year, seemed to go fine. Folks in my department liked it, anyhow.
The gym was closed on Tuesday. No matter – I went for a run. And it was somewhat enjoyable.
A former coworker sent me a very nice message. It made my week!
I hope you’re doing well! It’s been quite a while since I last spoke to you, and I wanted to say how much I appreciate the impact you have had on my life. I reap the benefits of it almost daily, and I hope to be like you when I grow up.