“Reactions are hard to fake. Feedback is hard to give.” – Jake Knapp
User—or usability—tests can help validate a prototype, find issues in complex workflows, improve the user experience, and capture user feedback.
Combined with interviews, user tests can be a very effective way to learn about your users, prospects, or customers.
There are three main ways to conduct user tests:
- Moderated user testing (in-person): Participants come in one at a time to an office, a lab, or a coffee shop (in the case of guerrilla testing). A moderator helps facilitate the tests.
- Moderated user testing (remote): The participant and moderator are in different locations. Screen sharing software like Zoom or Skype is used to help facilitate the tests.
- Unmoderated remote: Participants are either recruited through a platform like Loop11 or UserTesting, or they’re directed there to take part in the tests. Users are given tasks by the platform, which records their use of the product. This type of test can be done quickly, and often for a fraction of the cost of in-person testing.
It’s generally a good idea to pick the approach that best matches your budget, the location of your ideal test participants, and how often you want to do user tests. As Erika Hall says: “Don’t use expensive testing—costly in money or time—to discover what you can find out with cheap tests.”
User tests are especially effective when they’re included in an iterative design or development process. Iterative processes allow you to test more often with fewer users, making changes between each series of tests.
Organizing Moderated User Testing
In a well-known study, usability thought leader Jakob Nielsen demonstrated that five users can reveal roughly 85% of a site's usability issues. However, until you are comfortable with your ability to recruit—and weed out wrong-fit participants—you’ll probably want to recruit at least one extra participant per series. This will ensure that you get all the data points you need for your tests.
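Nielsen's number comes from a simple discovery curve: if each participant uncovers a share L of the problems (he reported an average of about 31%), then n participants uncover roughly 1 − (1 − L)^n of them. A minimal sketch of that arithmetic, assuming Nielsen's 31% average:

```python
# Share of usability problems found after n participants, using the
# discovery model behind Nielsen's claim: found = 1 - (1 - L)^n.
# L = 0.31 is the average per-participant discovery rate Nielsen reported;
# your own rate will vary with the product and the tasks.
L = 0.31

for n in range(1, 7):
    found = 1 - (1 - L) ** n
    print(f"{n} participant(s): ~{found:.0%} of issues found")
```

With L at 0.31, five participants surface about 84% of the issues, which is why short, repeated rounds of testing are usually a better investment than one large study.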
Your tests will last 30-60 minutes, depending on whether you interview participants before the tests or not. It’s usually best to schedule all five (or six) tests on the same day, keeping a buffer of 30-45 minutes between tests.
Pick a neutral location like a meeting room or a quiet space. Make sure that participants know exactly how to get to the facilities, and will have enough time to get there. Stressed participants make mistakes that they wouldn’t usually make.
Depending on the complexity of the tasks that you are hoping to test, you may be able to go through three or four tasks in 30 minutes. To make sure the timing works, consider doing a dry run with contacts or colleagues who haven’t been exposed to your product yet.
Tools like Lookback, Loop11, or ScreenFlow can be used to record both your participants’ screens—web or mobile—and their comments.
It’s often a good idea to split moderating responsibilities between a facilitator—sitting next to the participant—and a person who is taking notes in another room (or in the same room, but out of the participant’s field of view).
Code of Conduct for User Tests
It’s your responsibility to make sure that participants are comfortable. Let them adjust their seat and device, explore the desktop or homepage before the test begins, and have a leisurely read through the first task.
Explain that your goal is to learn how to improve the site or product, and that their feedback will only be used for that purpose.
To ease the participants into the tests, begin with the simplest task. A quick success will help participants gain confidence.
Often, participants who encounter usability issues are quick to blame themselves rather than the product. Make sure participants understand that you’re testing the product, not them. There are no right or wrong answers.
As you go through the tests, make sure that you don’t react to what participants do or say. Don’t help the participants unless there’s nothing more that can be learned from an issue.
Keep an even tone. Watch for verbal cues and body language. Ask questions to get participants to open up:
- I noticed you did ___. Can you tell me why? You can follow up on interesting behaviors observed during the tests to get a better understanding of the participant’s reasoning.
- Did you notice whether there was any other way to ___? You can ask this to understand why participants chose one alternative over another.
- What do you think of ___? You can ask about specific aspects of the interface or product (icons, menus, text, etc). You’ll learn about elements that may be confusing to users.
Focus on what participants are doing, more than what they’re saying. Take note of contradictions. Keep track of task successes, failures, and partial successes.
Asking Follow-Up Questions
Consider asking participants the Single Ease Question (SEQ), a single seven-point rating of how difficult or easy the task felt, after they attempt each of your tasks.
After a test, consider asking clarifying questions:
- What was your overall impression of [ Product ]?
- What was the [ Best / Worst ] thing about [ Product ]? Why?
- What, if anything, surprised you about the experience?
- Why didn’t you use [ Feature ]?
- I saw you did [ Action ]. Can you tell me why?
- Did you notice that there was any other way to accomplish [ Task ]?
- How would you compare [ Product ] to [ Competitor’s Product ]?
- Can you think of any other product that resembles this one?
Compare your notes with those of the other note-taker. Create a list of issues. Rank them by importance and frequency. Any issues that caused task failures should be addressed as quickly as possible.
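If you want a lightweight way to produce that ranking, one option is to score each issue by frequency times severity and sort. The sketch below is only an illustration: the issue names and the 1-to-3 severity scale are hypothetical assumptions, not part of any standard method.

```python
# Hypothetical issue log: rank observed issues by frequency x severity.
# The issue names and the 1-3 severity scale (3 = caused a task failure)
# are illustrative assumptions, not from the original post.
issues = [
    {"issue": "Couldn't find the pricing page", "frequency": 4, "severity": 3},
    {"issue": "Signup error message was ignored", "frequency": 2, "severity": 2},
    {"issue": "Unclear icon in the top menu", "frequency": 3, "severity": 1},
]

ranked = sorted(issues, key=lambda i: i["frequency"] * i["severity"], reverse=True)
for issue in ranked:
    score = issue["frequency"] * issue["severity"]
    print(f"{score:>2}  {issue['issue']}")
```

However you score them, issues that caused outright task failures should sit at the top of the list regardless of how often they appeared.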
Alternatively, if you were testing a competitor’s product rather than your own, the problems you identified could point to opportunities to improve on the competition.
– –
This post is an excerpt from Solving Product. If you enjoyed the content, you'll love the new book. You can download the first 3 chapters here →.