April 16, 2018 JK Burke

User Testing – The Ultimate Audition

As our company’s philosophy is dedicated to #lovetheexperience, we often find ourselves investing in user testing and listening closely to rounds and rounds of feedback. User testing is crucial for a number of reasons, but many times the most important insights are overlooked and left out of final recommendations. These insights may not directly relate to a specific type of test question (e.g., call-to-action or like/dislike question), but rather the reactions and sentiments shared by users as they journey through the site or application.

At Consensus, we are firm believers that user testing isn’t solely about improving user flows or finding usability flaws; rather, it’s an opportunity to put your product, business goals and even your brand in front of a live audience to see how it performs under different scenarios. In fact, let’s do away with calling it “User Testing” or “Usability Testing” – that undervalues what a company can learn. We need to be bolder and call it an Experience Audition!

Now that we have rebranded user testing, let’s look at a couple of ways to improve user tests and debunk a few myths along the way, starting with the “expert” review, or what some people call a “heuristic” review. We don’t want to steer anyone away from heuristic reviews; they are cost effective, timely and certainly provide benefits. But what don’t they do?

User Testing vs. Expert Reviews
Expert reviews are useful, and they’re something we frequently do for clients. Often, clients assume we will derive the same findings as user testing, or that a heuristic review will provide a true “aha” moment. None of this is wrong, but it’s important to understand the distinct benefits of true user testing as part of the “audition”.

Typically, a heuristic review is a great benchmark for finding flaws across a number of areas – navigation, presentation, trust value, violations of general UX principles, hierarchy redundancies, and so on. In a lot of ways it’s like having a teacher grade a project: the site or application design is put against a rubric and graded. You see where you were deficient and gain insights on how to improve in specific areas.

As many may remember from their lecture hall days, academics tend to view problems through a specific lens, one that doesn’t always account for real-world scenarios. And when pursuing the optimum user experience, it’s the ability to put the human in the digital that produces results. Expert reviews are more cost effective and less time consuming, but they leave you vulnerable to missing valuable data that will help you create an authentic experience.

User tests open up a whole new avenue of both quantitative and qualitative data. The “audition” reference above is not far off from how we view our own user tests. The data becomes even more valuable when you start asking questions the right way – an issue we frequently see is clients requesting CTAs that push a user in a certain direction or encourage feedback that fits an agenda. This, for many reasons, is something to avoid. Let’s look at some general guidelines for question structure, as this is where you’ll gain the biggest advantage over a heuristic review.

  • Avoid industry jargon when possible – the goal is for the user to feel they are in a comfortable setting. A question that confuses the user or makes them over-analyze their process will produce data that reflects that confusion, not your design.
  • If a complicated question may confuse a user, break it into a multistep process – keeping it simple can be the most difficult part.
  • Avoid hypotheticals – encourage first-hand experiences. Forcing a user to contemplate the steps they would “probably” take to accomplish a goal can produce clouded answers.
  • When responding to a comment, DO NOT give the user the feeling they are being judged in any way – any answer is a useful answer. If a user cannot find the proper steps to add a friend, ask for more insight; it is important to find what is causing their confusion.
  • Offer neutral like/dislike questions – this is a great way to discover subjective information about your design. And as you’ll see below, success lies in the questions you ask, not in how many people you ask.

User testing – How many users do I need?

The LARGEST misconception is how many users are needed to compile a reliable data set. People assume that the larger the user group, the better the results. In fact, you won’t discover much that’s new after user five, and five users could even be more than sufficient.

The best results come from testing no more than 5-15 users and running as many small tests as your budget allows. Once you get into the 20+ user range, the exercise becomes more of a statistical analysis of user tendencies. That might be useful in its own right, but as you’ll see below, you stop finding new usability issues after just a few users.

Tom Landauer and Jakob Nielsen showed that the number of usability problems found in a usability test with n users is:

N (1 − (1 − L)^n)

where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user. The typical value of L is 31%, averaged across a large number of projects. Plotting the curve for L = 31% shows sharply diminishing returns:
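As a quick illustration, a minimal Python sketch of the curve (the 31% figure is Nielsen’s published average; real values vary from project to project):

```python
# Nielsen/Landauer model: proportion of usability problems found with n users
# found(n) = 1 - (1 - L)**n
L = 0.31  # average share of problems a single test user uncovers (Nielsen's estimate)

def proportion_found(n, discovery_rate=L):
    """Fraction of all usability problems discovered after testing n users."""
    return 1 - (1 - discovery_rate) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")
```

Running this shows roughly 31% of problems surface with one user, about 84% with five, and the curve essentially flattens near 100% around fifteen – which is why small, repeated tests pay off.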

The most striking truth of the curve is that as soon as you collect data from a single test user, your insights increase and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.

When you conduct testing on the second user, you will inevitably notice some overlap with the first tester. You should, however, receive some new feedback from the second test case, creating some differentiation for you.

The third user will very likely repeat many actions you’ve already observed with the first two testers, but will also provide some unique data for your study.

The curve clearly demonstrates that you need to test with roughly 15 users to discover all the usability problems in the design. However, you get far more value by spreading that budget across many small tests than by exhausting it on a single, elaborate study.

Only noting information related to your CTAs – or, even worse, only noting the negatives

This was incredibly important in the last user testing session we administered, and it was the source of some of our most valuable discoveries! We had left three questions open specifically for qualitative like/dislike comments, and still, when asking users to accomplish a goal, we learned more from their opinions than from their actual navigational paths.

When administering a user test, it is easy to fall into the routine of noting a few taps, maybe a swipe, and then moving on, paying more attention to the questions you’re asking than to the answers you’re receiving. What is often overlooked with user testing is that you are gaining a first-hand view of what your everyday user will experience. Remember the formula above? It scales all the way up.

This covers everything from usability to the user’s emotional response, and even how your new project aligns with your overall business goals and brand.

As your user progresses through the various tasks, encourage them to share their thoughts on the different aspects they notice. You’re already engaged in a 30-40 minute session, and this is a great time to learn what you did well.
Are users visibly excited when the app opens? Or is a rebranded feature not intuitive for a new user? View the results through a multitude of lenses; this will give you the best understanding of how your users truly feel.
