UX Recruiting Practices
This doc gathers best practices for recruiting and testing participants. Different methods require different approaches, and we should tailor our execution accordingly.
🕵️ What we want from User Testing
The three main rules for simplified user testing are:
1. Get representative users
2. Ask them to perform representative tasks with the design
3. Shut up and let the users do the talking
The main obstacle to quick and frequent user testing is the difficulty of finding warm bodies that satisfy rule #1. Most companies have no procedures for getting five users to show up at specified times next Wednesday, and yet that’s what is required for a successful usability study. - NNG
💸 Incentivization ← this is a real word
In most internal studies (90%), participants are not paid. In contrast, participants in most external studies are paid.
Participants recruited from the outside most often received cash as their incentive for coming to the test. 63% of external users received monetary compensation, 41% received non-monetary incentives, and 9% didn’t get anything. (The numbers total more than 100% because a lucky 13% of external users were given both monetary and non-monetary incentives.) - NNG
The average incentive paid was $64 per hour of test time. That figure varies widely by participant type: high-level professionals averaged $118, non-professionals $32, and students a lower $18.
ℹ️ Setting Some Criteria For Us
- Job title and job description
- Do they use an online ordering system?
- Frequency of use (e.g., infrequent, often, daily)
- Prior experience (with specific products, competitive, related, and predecessor)
- What is the scenario in which they order? (e.g., dinner, pickup, delivery)
🤖 Getting Systematic
- create a cadence for reaching out to your users
- plan a testing day at the same time every month, or at the beginning or end of each quarter
🤼 Number of Participants
Do you need statistical significance?
- To achieve real statistical significance, plan to evaluate with 10 to 12 participants per condition.
- An underpowered study may be statistically inconclusive
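As a rough illustration of why 10 to 12 participants per condition is the usual floor for statistical significance, here is a minimal Monte Carlo power sketch. The success rates (50% vs. 90% task completion) and the simulation itself are hypothetical assumptions for illustration, not figures from NNG:

```python
import random
from math import sqrt

def simulate_power(n_per_condition, p_a, p_b, trials=5000, seed=1):
    """Monte Carlo estimate of the power of a two-proportion z-test
    (two-sided, alpha = 0.05) comparing task success rates between
    two design conditions with n participants each."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # Simulate pass/fail task outcomes for each condition.
        a = sum(random.random() < p_a for _ in range(n_per_condition))
        b = sum(random.random() < p_b for _ in range(n_per_condition))
        pooled = (a + b) / (2 * n_per_condition)
        se = sqrt(2 * pooled * (1 - pooled) / n_per_condition)
        # Count the runs where the difference reaches significance.
        if se > 0 and abs(a - b) / n_per_condition / se > 1.96:
            hits += 1
    return hits / trials

# Hypothetical effect: condition A completes tasks 50% of the time, B 90%.
print(simulate_power(5, 0.5, 0.9))   # small sample: low power
print(simulate_power(12, 0.5, 0.9))  # 10-12 per condition: noticeably higher
```

Even with a large hypothetical effect, five participants per condition leave the comparison underpowered; moving to twelve substantially raises the chance of a conclusive result.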
Are you just testing usability?
- Plan to evaluate with at least 4 to 5 participants
- There are diminishing returns from usability testing any given design: the first few users find almost all the major usability problems, and you learn less from each subsequent participant.
- Around 80% of the usability problems represented by the selected tasks are found after testing four users.
- Because you learn very little by repeatedly testing the same user interface, it is much better to stop after a short test, revise the design, and evaluate again.
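The diminishing-returns numbers above can be reproduced with the problem-discovery model NNG popularized (Nielsen & Landauer): found = 1 − (1 − L)^n, where L is the fraction of problems a single user uncovers. The commonly cited average is L ≈ 0.31, but treat that value as an assumption that varies by product and task set:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Proportion of usability problems found by n independent users,
    each of whom uncovers `discovery_rate` of the problems."""
    return 1 - (1 - discovery_rate) ** n_users

# With L = 0.31, four users find ~77% of problems, in line with
# the ~80% figure cited above; the curve flattens quickly after that.
for n in (1, 2, 3, 4, 5, 10):
    print(n, round(problems_found(n), 2))
```

Because the curve flattens so quickly, short test-revise-test cycles with 4 to 5 users tend to surface more total problems than one large study of the same design.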
Do you have multiple designs?
- Evaluations using multiple usability methods require a larger pool of participants, so you can avoid reusing the same people for more than one study within a short period of time.