Fast Path to a Great UX: Increased Exposure Hours
As we've been researching what design teams need to do to create great user experiences, we've stumbled across an interesting finding. It's the closest thing we've found to a silver bullet when it comes to reliably improving the designs teams produce. This solution is so simple that we didn't believe it at first. After all, if it were this easy, why wasn't everyone already doing it?
To make sure, we've spent the last few years working directly with teams, showing them what we found and helping them do it themselves. By golly, it actually worked. We were stunned.
The solution? Exposure hours. The number of hours each team member is exposed directly to real users interacting with the team's designs or its competitors' designs. There is a direct correlation between this exposure and the improvements we see in the designs the team produces.
It Makes Perfect Sense: Watch Your Users
For more than 20 years, we've known that teams that spend time watching users see improvements. Yet we still see many teams with regular user research programs that produce complicated, unusable products. We couldn't understand why, until now.
Each team member has to be exposed directly to the users themselves. Teams with dedicated user research professionals who watch the users and then report the results through documents or videos don't see the same benefits. It's the direct exposure to the users that drives the improvements we see in the design.
Over the years, there has been plenty of debate over how many participants are enough for a study. It turns out we were looking in the wrong direction. When you focus on the hours of exposure, the number of participants disappears as an important discussion. We found that two hours of direct exposure with one participant could be as valuable as (if not more valuable than) eight participants at 15 minutes each. Those two hours with one participant, seeing the detailed subtleties and nuances of their interactions with the design, can deliver a tremendous amount of actionable value to the team, when done well.
First Forays: Field Visits
As we watched different teams go through this process, we started to notice some repeatable patterns. For example, many teams spent little time watching their users. Often these teams had successful, profitable products that had evolved over many years into very complicated designs, chock full of features that users found hard to find and often frustrating to use.
Before they began watching users, the teams would frequently find themselves at odds in meetings. They knew that the product was getting more complex, but nobody had any real information about how the product was being used. Stakeholders would ask for features without giving any useful details to the team to implement. An attitude of "Let's build it, and if we get it wrong, we'll fix it" would prevail.
For teams like these, we often choose a field visit as their first foray into watching their users. Field visits are great because we get to see what the users do in their natural environment. They don't require prior knowledge of what the proper tasks in the design are. We interview the user, uncover their goals and objectives, and then ask them to use the product or service to accomplish those goals.
A typical field visit lasts two hours. With ten to twelve visits, each team member can get at least eight hours of exposure to a minimum of four different users, each trying to use the design in interesting ways.
The results are typically a list of easy fixes. One recent 12-visit venture with a 10-member team produced 350 items on its list of quick fixes. The product improvements started showing up in just a matter of weeks.
A Minimum of Every Six Weeks
We saw many teams that conducted a study once a year or even less often. These teams struggled almost as much as teams that didn't do any research at all. Their designs became more complex, and their users reported more frustration as the teams kept adding new features and capabilities.
The teams with the best results were those that kept up the research on an ongoing basis. It seems six weeks is the bare minimum interval for a two-hour exposure dose. Teams whose members spent the minimum of two hours every six weeks saw far greater improvements to their design's user experience than teams that didn't meet the minimum. And teams with more frequent exposure, say two hours every three weeks, saw even better results.
We think there are two reasons the frequency turns out to be important. First is the way memory works. It's harder to remember someone you met more than six weeks ago than someone you met last week. If we want our users and their needs to be present in our minds as we create our designs, we need to see them regularly.
The second reason has to do with the pain of an ongoing frustration. It's painful to watch someone struggle with your design. It's even more painful to come back a few weeks later and see someone else struggle with the same problem again. The more times we're exposed to those struggles, the more frustrated we get, the more we want to fix those problems. (And the happier we'll be when we finally see someone who breezes right through with our new design.)
Some problems are particularly gnarly. Seeing these problems repeat, in the field and in the lab, gives us insights into the nuances behind their potential causes. Testing out new design ideas can help us get to a solution faster. A regular exposure program makes that happen even better.
By making every six weeks the minimum interval for exposure, we leverage these two factors, making our users and their needs the driver of the design work we're doing on any given day.
Types of Exposure to Users
Field visits aren't the only form of exposure we found that works. Usability tests, both in-person and remote, can be very effective. (We found a mixture of both works better than 100% remote sessions.) Once you know the tasks users naturally perform with the design (because you discovered them during your field visits), it's easy to construct realistic scenarios for usability testing.
For folks heavily involved with a style of self-design, using the product themselves for real work can also contribute. (For more about self-design, see my recent article, Actually, You Might Be Your User.) Again, validating these results with other methods, such as field visits and usability testing, helps you understand what your users experience that you don't when using the design.
Watching users work with competitive designs is also important. Seeing them work through those same tasks with someone else's design can help identify gaps in your own design. It also makes it easy to point out where your advantages lie.
The Team of Influencers
Our research had a finding that took us by surprise: Teams that excluded non-design personnel didn't see the same advantages as teams that included those people.
For example, we worked with teams where only the designers and developers had regular exposure to their users. Stakeholders, such as product managers and executives, along with other non-design folks, like technical support liaisons and quality assurance management, didn't participate in the field studies or usability tests. While the core design team became very familiar with what users needed and wanted, they were constantly battling with these other individuals who didn't have the same experiences.
The tipping point came when we found teams where all these other folks were participating in the user research studies. No longer did they assert their own opinions of the design direction above what the research findings were telling the teams. Having the execs, stakeholders, and other non-design folks part of the exposure program produced a more user-focused process overall.
Exposure is easy to measure. You can just count the hours everyone has had participating in the studies. We're seeing teams make it part of their quarterly performance reviews, sending a clear message of the importance of user experience, especially when all the influencers are measured the same way.
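Since exposure is just a count of hours, it can be tracked with very little machinery. Below is a minimal sketch, assuming a hypothetical session log of (member, date, hours) tuples; the names, dates, and helper functions are illustrative, not from the article, but the two-hours-every-six-weeks threshold is the one described above.

```python
from datetime import date, timedelta

# Hypothetical session log: (team member, session date, exposure hours).
sessions = [
    ("alice", date(2024, 1, 10), 2.0),
    ("alice", date(2024, 2, 14), 2.0),
    ("bob",   date(2024, 1, 10), 1.0),
]

def exposure_hours(sessions, member, since):
    """Total exposure hours logged by one member on or after a given date."""
    return sum(h for who, d, h in sessions if who == member and d >= since)

def meets_minimum(sessions, member, today, hours_needed=2.0, window_weeks=6):
    """True if the member logged the minimum dose within the last six weeks."""
    window_start = today - timedelta(weeks=window_weeks)
    return exposure_hours(sessions, member, window_start) >= hours_needed

today = date(2024, 2, 20)
print(meets_minimum(sessions, "alice", today))  # logged 2h on Feb 14 -> True
print(meets_minimum(sessions, "bob", today))    # only 1h, back on Jan 10 -> False
```

A report like this, run once per review period, gives the kind of per-person number that can feed directly into the quarterly performance reviews mentioned above.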
The Challenge: Two Hours Every Six Weeks For Everyone
Granted, we admit our data could be flawed. There could be other factors here. However, we've tested every possible theory, spent time reviewing every factor we could imagine, and we keep coming back to this one item: Get every member on the team to spend two hours every six weeks and you'll likely have a great user experience appear before your very eyes.
Originally published on UIE.com.
Karen McGrane and I will be interviewing 14 senior UX executives on how they've given their companies a competitive advantage with user experience. Join us in Baltimore, August 18 & 19 at the UX Advantage Conference.
I've said for 20+ years that there's nothing quite like watching someone use your product, while thinking aloud of course. It's so much more effective for people to watch for themselves than to get a summary report. It's frustrating when developers or other non-designers say they don't have time to observe real users, when I know their level of empathy will be totally different once they see that 'the struggle is real'. Too many developers feel that if they aren't coding, they aren't doing something useful. On the plus side, I've had developers modify the code on the fly during the user session they were observing, so we were able to retest right away.

User exposure of two hours every six weeks is a good guideline. The continued exposure to real people motivates the fixing of problems, or even figuring out whether a more complete revamp is needed. Holding debriefs is a good way to share what different people observed; we often ask each person for their top three observations as a quick way to share. By observing and discussing, the team has a better chance of collaboratively identifying the best ideas for improvement.

I'll continue to repeat: "There's nothing like watching someone use your product." I've seen too many teams do it too infrequently, and teams where only the designers did the observing instead of involving others. Even after years of experience running these sessions, with good guesses as to what I think the user will do or say, there are always a few surprises. If you haven't watched someone use your product in the last six weeks, do it!
I wonder if these numbers change with the phase of the project and/or the size of the company.
Great post, Jared! You are totally right: observing users can help the team discover tons of things they are doing right or not doing. I agree with you that this exercise should involve everyone on the team!
Thanks Jared, I've been recommending this since I spent much of four years doing contextual research. While test-room exposure is of enormous value, I have had the even greater luxury of working on large corporate intranets, and in my earlier years in HCI I spent hundreds of hours observing end users with enterprise applications. In each case, the relationship with the end users supported access during working hours. This provides an opportunity to observe the target group using the design to complete their own real tasks, and I was able to interview them directly when I observed sources of confusion and frustration. Wherever this can be arranged, it is even stronger than observing synthesized lab testing. Crucially, working with users' own real tasks affords insights where the source of frustration arises outside the application; these seldom surface in the lab.
Great post, Jared! This is very useful for us, but the most difficult part is convincing stakeholders.