There are hundreds of different ways of performing user testing. In this section I'll focus on the most fundamental testing methods, explain how they work, and show why they're a great place to start learning about testing. The two fundamental dimensions of testing are moderated vs. unmoderated and guided vs. unguided. You'll find a print-friendly cheat sheet of these below that you can keep in a handy place for when you enter the testing phase of a project.
Moderated / Live - You're present when the test is performed
Unmoderated / Recorded - You're not present during the test
Guided - You instruct or assist during the test
Unguided - The tester explores freely without direction
As I already mentioned above, there are hundreds of testing methods you can use. But they all boil down to the same starting point: will you tell the tester what to do, or let them explore freely? Will you observe them live, or record the session for later? Every other test and tool is simply a variation built on top of this foundation - adding structure, technology, or scale, but never changing these two fundamental choices.
Testing methods
Perfect to print out and keep next to your computer as a handy reminder!
Moderated / Live (you're present) + Guided (you instruct or assist): Classic usability testing where you watch, ask questions, and guide.
Moderated / Live + Unguided (they explore on their own): Shadowing users in the real world or doing field studies without intervening.
Unmoderated / Recorded (you're not present) + Guided: Scripted - users follow a pre-written set of instructions.
Unmoderated / Recorded + Unguided: Natural usage without any specific tasks to perform.
Setting a goal
Never perform a test without a goal, and never test just to "see how it works". A goal is something that can be defined, and "see if it works" is not something you can define. So the first thing you need to do when you start planning a test is to understand what you're actually testing. What measurements are you looking to capture and understand? For example:
Task completion rate: At least 90% of users should be able to complete the checkout without errors.
Error frequency: Users should not encounter any critical error during onboarding.
Time on task: A first-time user should be able to create an account in under 2 minutes.
Navigation success: At least 80% of users should find the settings page within three clicks.
Retention: At least 30% of users should return within 24 hours without additional reminders.
Error recovery: If users make a mistake, they should be able to recover without external help in less than 30 seconds.
Conversion intent: After testing a flow, at least 50% of users should say they would use this product again.
Accessibility: Users navigating with only a keyboard should be able to complete the task in the same number of steps as mouse users.
These two are a bit harder to measure but I thought I would still mention them. Consider them as "good to know about".
Comprehension: Users should be able to explain the value of the product in their own words after reading the landing page. (Hard to measure but a great test to do).
Satisfaction: Measured with a short post-task survey (like "How easy was this?" on a 1-5 scale), at least 4 out of 5 participants should report "easy" or "very easy." Also follow up with a second question, "How could we improve this even more?", where the participant can elaborate on the grade they just gave.
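If your sessions are logged digitally, goals like these are cheap to check with a few lines of code. Here's a minimal sketch in Python - the TestSession fields and numbers are hypothetical, so adapt them to whatever your testing tool actually exports:

```python
# Minimal sketch: checking a few of the goals above against recorded sessions.
# All field names and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class TestSession:
    completed_checkout: bool   # finished checkout without errors?
    seconds_to_sign_up: float  # time on task for account creation
    ease_rating: int           # post-task survey answer, 1-5 scale

def evaluate(sessions: list[TestSession]) -> None:
    n = len(sessions)
    completion = sum(s.completed_checkout for s in sessions) / n
    fast_signup = sum(s.seconds_to_sign_up < 120 for s in sessions) / n
    satisfied = sum(s.ease_rating >= 4 for s in sessions) / n
    print(f"Task completion rate: {completion:.0%} (goal: at least 90%)")
    print(f"Account created in under 2 min: {fast_signup:.0%}")
    print(f"Rated 'easy' or 'very easy': {satisfied:.0%} (goal: 4 out of 5)")

evaluate([
    TestSession(True, 95, 5),
    TestSession(True, 140, 4),
    TestSession(False, 130, 3),
    TestSession(True, 80, 5),
    TestSession(True, 110, 4),
])
```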
Preparing for a test
A good user test doesn't start with the session itself - it starts with preparation. The more structured you are in setting things up, the easier it will be to run the test and collect meaningful insights. Think of this step as laying the foundation: define what you want to learn, choose how you'll learn it, and make sure your team and participants are ready.
Define the goal
Before anything else, clarify what success looks like. A test without a goal risks producing vague feedback that no one knows how to act on. Ask yourself: What is the goal of this test? What's the definition of success? Use the list above as inspiration when you formulate your goals for the test. Write down your goals as a headline for the test so you keep them top of mind. This is your focus for this test.
Choose the right method
Different goals call for different types of tests. It's important to align your method with your goal, rather than using one test for everything. Also note that there's a difference between the testing method and the actual test itself. I will cover different types of tests later in this article.
Do a moderated / live test when:
Exploration: Early in the design process, when you need to uncover unknown problems.
Quick validation: When you need fast, lightweight feedback on a design.
Complex flows: When tasks require explanation or setup.
Rich insights: When you want to hear participants think aloud, ask follow-up questions, and see nonverbal reactions.
Team learning: When you want your team to observe in real time and discuss findings immediately.
Downside: Moderated tests take more time to schedule and run, and they're harder to scale. They can also introduce bias if the facilitator leads too much.
Do an unmoderated / recorded test when:
Broad validation: When you want to spot patterns across diverse users.
Simple tasks: When instructions are clear and don't require explanation.
Scale: When you want many participants and a larger dataset.
Natural behavior: When you want to see how users perform without an observer present.
Downside: You can't ask clarifying questions in the moment, and you risk losing context if a user struggles silently. Badly written tasks can lead to unusable results. You also run the risk of technical malfunctions. More than a few unmoderated tests have been lost to history because the recording software didn't start, so double-check the tech.
Do a guided test when:
Usability improvements: When you already know what flow or feature needs testing.
Benchmarking: When comparing before/after designs or competitors.
Task-focused validation: When you want to measure success rates, time on task, or error frequency.
Downside: Guided tests can miss unexpected behavior - participants might succeed at the given task but fail at something they would normally do outside the task script.
Do an unguided test when:
Discovery research: When you want to learn what users choose to do first.
Open exploration: When you want to spot surprising usage patterns.
Real-world behavior: When you want to observe the product in its natural context (field studies, analytics, session recordings).
Downside: Harder to compare participants' results, since everyone may take a different path. Produces a lot of raw data that takes longer to analyze.
One last thing to keep in mind before we move on to the next part is that the combination of moderated and unguided testing is hard to perform well. Imagine you're sitting down with a test participant in a conference room: you hand them a phone or a computer, show them the application, and ask them to test it out - do whatever they want, whatever feels natural. And then you just watch them. Does that sound like a relaxed environment to you? To get a good result from a moderated unguided test, you need to truly excel at creating a relaxed environment where testing comes naturally to the participant.
Recruit participants
Recruiting the right participants is one of the most important parts of user testing - and one of the most overlooked. Good recruitment ensures that the feedback you collect actually represents your real users. Bad recruitment can lead to misleading results, wasted time, and design decisions based on the wrong audience.
Define your target user
Start by deciding who you need to hear from. Are you testing with first-time users, power users, or a mix of both? Do you need people who have purchased from you before, or those who have never seen your product? The closer you get to your real audience, the more valuable the results will be.
How many participants do you need?
For most usability tests, five participants are enough to reveal the majority of issues. Jakob Nielsen's research shows that five users uncover around 85% of usability problems. More than that is rarely necessary for a single round - it's usually better to run multiple small rounds iteratively than one big, expensive study.
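The claim comes from a simple model. Nielsen and Landauer model the share of problems found by n testers as 1 - (1 - L)^n, where L is the probability that a single tester encounters a given problem (about 0.31 on average in their studies). A quick sketch shows the diminishing returns:

```python
# Nielsen & Landauer's model: share of usability problems found by n testers.
# L is the average probability that one tester hits a given problem (~0.31).
L = 0.31
for n in range(1, 11):
    print(f"{n} testers: {1 - (1 - L) ** n:.0%} of problems found")
# 5 testers already find ~84% - each extra tester adds less and less.
```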
Where to find participants
Customer lists: Invite real customers through your CRM or newsletter.
On-site intercepts: Ask users on your website or app if they want to participate.
Recruitment panels: Several third-party companies and organizations can help you recruit test participants with a specific profile. This service can be very handy when you need a very specific group of testers you may not have direct access to on your own - small-business owners, parents of young children, or frequent travelers, for example.
Internal networks: For quick, early tests, colleagues or friends can be used - but only if they match the target user profile.
Screening & incentives
Some companies offer a small gift as a token of appreciation to the participants of the test. If you want to do that, I recommend that you don't tell the participants about it beforehand. If you announce that all participants will receive a gift card, for example, you risk attracting participants who don't care about the test, and that will hurt your results.
Prepare participants
Once recruited, set expectations clearly. This helps the participants relax and focus on the test in front of them.
Tell them how long the session will take.
Explain whether it's moderated or unmoderated.
Make sure they know you are testing the product, not them.
Provide any technical requirements (browser, device, internet connection).
Types of tests
There are a lot of different types of tests you can perform, and different tests make more or less sense depending on your goal. If you're validating usability, a moderated session may be best. If you're optimizing visual design choices, A/B testing might be more efficient. Align your method with your goal instead of trying to use one test for everything. All these tests can be performed either moderated or unmoderated, guided or unguided. I've tried to explain these tests on a high level; you can google any of them if you want to understand a method on a deeper level.
Interviews
The simplest form of testing is to conduct interviews with your users. This is often used in early stages when you want to understand a specific problem or need to collect a general understanding of how the product or a flow is perceived. Show images of your product, or sit down with the product in front of you and the participant and talk about it.
Benefits: This method builds trust and can be very useful in early stages - before you dive into design sprints. Provides a general understanding of how the tester feels about the product.
A/B testing (split testing)
Two (or more) versions of a design are shown to different users, and their behavior is measured to see which performs better.
Benefits: Provides statistically measurable results. Ideal for optimizing design variations at scale (for example, button color or call-to-action wording).
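"Statistically measurable" is worth making concrete. A common way to judge whether variant B truly beats variant A is a significance test on the conversion counts. Here's a minimal sketch using SciPy, with invented numbers:

```python
# Minimal sketch: is the difference in conversion between A and B real or noise?
# Counts are invented for illustration.
from scipy.stats import chi2_contingency

observed = [
    [100, 900],  # variant A: 100 of 1000 users converted
    [150, 850],  # variant B: 150 of 1000 users converted
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value: {p_value:.4f}")
# A small p-value (commonly below 0.05) suggests the difference
# is unlikely to be random chance.
```

Decide your sample size and significance threshold before the test starts - if you stop as soon as the numbers look good, it's easy to fool yourself.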
First-click testing
Users are shown an interface and asked where they would click first to complete a task.
Benefits: Quickly reveals whether navigation and information hierarchy make sense. Strong predictor of overall task success.
Card sorting
Participants group content or features into categories that make sense to them. Can be open (they create labels) or closed (they sort into predefined categories).
Benefits: Helps design or validate information architecture. Shows how users naturally organize information.
Five-second test
Users view a design (often a landing page) for five seconds, then recall what they saw or understood.
Benefits: Measures first impressions and clarity of messaging. Useful for testing visual hierarchy and communication.
Guerrilla testing
Quick, informal testing with people in public spaces (cafes, libraries, etc.). Usually involves asking them to try a simple task.
Benefits: Fast and low-cost way to get fresh eyes on your design. Great for early-stage concepts.
Diary studies
Participants record their interactions, thoughts, and experiences with a product over a period of days or weeks.
Benefits: Provides insights into long-term usage, habits, and pain points that one-off sessions can't capture.
Accessibility test
Is it possible to use the keyboard to navigate through your product? Can you access every modal, tooltip, or button? Would it work with voice-assisted technology? Is it possible to understand the website through a screen reader? These tests are very valuable, and accessibility testing can be split into several separate tests.
Benefits: About 16% of the world's population live with some form of disability - from reduced eyesight, like color blindness, to hearing loss, motor impairments, or cognitive differences. Accessibility testing ensures these users can successfully interact with your product. The benefit isn't just ethical - it's also good business: accessible products reach a wider audience, reduce legal risk, and often improve usability for everyone, not just those with disabilities.
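Parts of this can be automated. As one example - a sketch, not a full audit - here's how you might walk a page's Tab order with Playwright in Python to see what actually receives keyboard focus. The URL is a placeholder:

```python
# Minimal sketch: walk a page's Tab order and print what receives focus.
# Assumes Playwright is installed (pip install playwright; playwright install).
# The URL is a placeholder - point it at your own product.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    for step in range(1, 21):  # press Tab 20 times
        page.keyboard.press("Tab")
        focused = page.evaluate(
            "() => { const el = document.activeElement;"
            " return el ? el.tagName + (el.id ? '#' + el.id : '') : 'none'; }"
        )
        print(f"Tab {step}: {focused}")
    browser.close()
# Interactive elements that never show up in this list are unreachable by keyboard.
```

Automated checks like this only catch a slice of accessibility issues - testing with real assistive-technology users is still essential.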
During the test
This section applies whenever you're present for the test - whether in person or remotely. Your role is to create space for the participant, not to lead them. If you've done your preparation well, the participant has everything they need to complete the tasks.
Keep a low profile and focus on observation. Think of it this way: you have two eyes, two ears, but only one mouth - use them in that proportion. Your job is to watch, listen, and document what happens in real time. Don't rely on memory - take notes as the session unfolds while observations are still fresh.
If the participant asks questions, answer them, but be short and sharp. Give only the information they need to continue and avoid turning it into a larger conversation. The goal is to keep the session about their experience, not yours.
After the test
When the session ends, take a few minutes to thank the participant and, if possible, debrief briefly with them. A simple "How did that feel?" or "Was anything confusing?" can reveal extra insights once the pressure of the test is over.
Right after the session is also the best time to tidy up your notes. Fill in any gaps while everything is fresh in your mind. If you are working with a team, compare observations and highlight any patterns you all noticed.
Finally, store your notes, recordings, and screenshots in a shared place where others can access them. Good documentation makes it much easier to spot trends across multiple sessions and turn findings into clear next steps.
Synthesizing results
Collecting notes is only half the job. The real value comes from making sense of them. Start by gathering all observations in one place, whether that is sticky notes on a wall or a shared digital board. Group similar findings together to identify patterns: where users got stuck, where they succeeded, and what surprised you.
Once the themes are clear, prioritize them. Focus first on problems that are severe, frequent, and critical to completing the task. It is often helpful to categorize issues as high, medium, or low impact.
Finally, translate these insights into action items. Turn each problem into a statement your team can act on, such as "Users could not find the settings page. We need to improve its visibility in the main navigation." Sharing these synthesized findings with stakeholders helps create alignment and makes sure the test results lead to concrete improvements.
Bonus: test on your own
Sometimes there's simply a need for something simpler. Something small. A quick-and-dirty, if you will. Here are a few of those tests you can run on your own. There are loads of these; I'm sure you can add many more to this list - and please do. Having many tools in your toolbox is a great asset! Remember to measure everything and present your results to leadership and your team.