How to spot low-quality UX research participants (and what to do about them)
By Abby Quillen · 6 min read · Mar 19, 2026

UX research is only as good as the participants behind it. Unfortunately, recruitment remains a top challenge. In the 2025 State of User Research Survey, 54% of researchers reported facing challenges with participant quality and reliability.
In this article, you’ll learn how to identify low-quality participants before, during, and after your study so you can screen them out before they hurt your data and credibility.
What does “low-quality participant” actually mean?
Most researchers have experienced the sinking feeling of realizing a participant is a bad match midway through a study. To identify low-quality participants, it’s important to know what you’re looking for, especially because low quality isn’t a universal label. A perfect participant in one study may be a poor fit for another.
Look out for these three types of low-quality participants.
Inattentive participants
Inattentive participants are among the most common types of low-quality participants. These people are technically eligible for your study, but they aren’t genuinely engaged in it. They’re not maliciously trying to game the system, but they’re also not trying very hard to answer questions authentically. They may speed through your survey or provide vague one-word answers.
Professional survey takers
Professional survey takers participate in studies for rewards rather than genuine interest. They’ve learned to respond to screening questions and might misrepresent their titles or product usage to qualify, leading to inauthentic answers.
Fraudulent participants
Fraudulent participants undertake even more sophisticated measures to deceive. They may misrepresent their identities, submit duplicate entries under new email addresses or devices, or use virtual private networks (VPNs) to appear eligible for studies.
How low-quality participants differ in B2C vs. B2B research
Depending on the type of research you conduct, you’ll likely encounter different types of low-quality participants. Studies that rely on large consumer panels, which are often used in B2C research, typically attract more professional survey takers because these panels are easier to access and often allow participants to join multiple panels at once.
B2B studies often have stricter eligibility requirements. For example, you may be looking for participants based on their role in using a product, such as admins who configure a tool versus end users who rely on it day-to-day. For that reason, you may encounter more fraudulent participants who misrepresent their roles or overstate their product knowledge to qualify. Because B2B studies typically offer higher incentives, they’re particularly attractive to bad actors attempting to bypass screeners.
Why participant quality matters
Your data is only as good as your participants’ responses. Unfortunately, low-quality responses have become more common as online research has come to dominate. One study found that the share of usable responses from online surveys dropped dramatically in recent years, from 75% to 10%. The decline makes sense: it’s harder to verify identity online, and participants can now use generative AI to create plausible-sounding responses.
The risks extend beyond data quality. No company wants to waste research budget on unhelpful responses. Moreover, low-quality research drives poor product decisions: without accurate insight into what your customers need, you may end up with misaligned roadmaps or misguided feature decisions. In one study, about a third (34%) of buyer-side insights professionals had observed poor business decisions related to the quality of research samples.
Poor research quality also puts the research team's internal credibility at risk, making it harder to influence product decisions and maintain stakeholder trust.
Common red flags to watch for
Ready to recruit the right participants for your study and weed out low-quality ones? Here are warning signs to look for at every stage of your research.
Note that some red flags can appear at more than one stage. For instance, a participant who rushes through the screener survey may also rush through the main survey.
Before the study
Your screener survey is your first line of defense. It’s your chance to make sure participants are eligible based on your criteria, and it’s important to carefully structure your questions. This stage is the best time to catch low-quality participants before you invest more resources into them, so look closely for these potentially deceptive responses and behaviors.
Inconsistent, unrealistic, or contradictory responses: A young participant who says they have multiple decades of experience with a product may be checked out or dishonest. The same goes for a user who claims to use different solutions when asked the same question multiple times.
Overly polished answers: Vague, scripted, too perfect, or rehearsed answers may not be genuine. Also, pay attention if a participant struggles to share concrete details or timelines for open-ended questions.
Suspicious participation history: Be wary of participants who have taken part in many similar studies, or who have duplicate profiles or generic email addresses. They may be professional survey takers.
During the study
Some low-quality participants will inevitably get past the screener survey. The study itself is your next opportunity to catch them.
Contradictions: A participant may claim to use a product daily during the screener, but then can’t explain how it works during the study.
Rushing: Watch for participants who don't read instructions, attempt to end the survey early, or clock a very short completion time.
Straightlining: An inattentive participant may repeatedly select the same answer for a series of questions rather than considering each one individually. This is most common in rating-scale questions (a detection sketch follows this list).
Camera avoidance: Keeping the camera off is not always a red flag, but it can signal deception or disengagement in some cases. Pay even closer attention to a participant’s answers if they have their camera turned off.
Surface-level answers: Bare-minimum answers may indicate a person hasn’t actually used the product. These answers may include:
“It was fine.”
“I liked it.”
“It worked well.”
Generic or rehearsed answers can also indicate deception.
Be careful, though, because surface-level responses can also reflect poor research design. Make sure your questions prompt users to talk through processes and encourage concrete examples.
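If your responses land in a spreadsheet or data frame, straightlining is also easy to flag programmatically. Here’s a minimal Python/pandas sketch; the rating column names are hypothetical stand-ins for whatever your survey tool exports, and the 0.9 threshold is just a starting point to tune.

```python
import pandas as pd

# Hypothetical rating-scale columns from your survey export.
RATING_COLS = ["q1_rating", "q2_rating", "q3_rating", "q4_rating", "q5_rating"]

def flag_straightliners(responses: pd.DataFrame, threshold: float = 0.9) -> pd.Series:
    """Return True for respondents whose ratings are (almost) all identical."""
    ratings = responses[RATING_COLS]
    # For each respondent, the share of answers matching their most common answer.
    top_share = ratings.apply(lambda row: row.value_counts(normalize=True).max(), axis=1)
    return top_share >= threshold

# Usage:
# responses = pd.read_csv("survey_export.csv")
# suspects = responses[flag_straightliners(responses)]
```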
After the study
While it’s better to catch low-quality participants earlier in the study, identifying them afterward can still save your data and credibility. Analyze your data for these patterns.
Identical answers: You may see a large number of very similar answers or little meaningful variation across the dataset.
Outlying response times: A few participants may have completed the study much faster or slower than everyone else (a simple way to flag them is sketched below).
Questionable results: Your results may look too clean, or they may contradict previous research.
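Outlying response times are straightforward to surface once you have completion times in a table. Below is a minimal Python/pandas sketch using a robust z-score based on the median absolute deviation, which a handful of extreme speeders can’t distort the way a mean and standard deviation can. The completion_seconds column name is an assumption; swap in whatever your tool exports.

```python
import pandas as pd

def flag_time_outliers(responses: pd.DataFrame,
                       time_col: str = "completion_seconds",  # assumed column name
                       cutoff: float = 3.5) -> pd.Series:
    """Return True for respondents whose completion time is a robust outlier."""
    times = responses[time_col]
    median = times.median()
    mad = (times - median).abs().median()  # median absolute deviation
    if mad == 0:
        # Everyone took (nearly) the same time; nothing stands out.
        return pd.Series(False, index=responses.index)
    robust_z = 0.6745 * (times - median) / mad  # ~standard-normal scale
    return robust_z.abs() > cutoff

# Exact duplicates in open-ended answers are another cheap post-study check:
# duplicates = responses[responses.duplicated(subset=["open_answer"], keep=False)]
```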
How to screen out low-quality participants
The bottom line? Low-quality participants are common, and you’ll likely need to screen them out in every study. In one analysis of four online surveys, researchers who employed fraud detection strategies removed between 16% and 45% of respondents.
Here’s how to catch bad actors during every stage of the research process.
Before the survey
Your screener survey is your earliest chance to weed out low-quality participants and set yourself up for higher-quality data. Use these tips during this stage:
Design your screener survey to encourage genuine reflection: Word your screener survey to find users who are actually familiar with the product. It should ask about product usage frequency and include at least one open-ended question about a specific situation where the participant used the product.
Let survey takers know what to expect: Even the best-intentioned participants can zone out if a study is unexpectedly long. To increase engagement, clearly state the anticipated time required for the study and include a progress bar on every screen.
Use attention checks and red herring questions: These questions are easy to answer for anyone paying attention, but they can trip up disengaged participants. Attention checks should have clear, correct answers that require focus rather than memory or expertise. A simple attention check is: “Answer ‘Strongly Agree’ for this question to show you’re paying attention.” Also include a few obviously wrong answers, or red herrings, among multiple-choice options to flag inattentive respondents (a scoring sketch follows this list).
Include consistency checks: Ask a few questions in two different ways. For example, in one part of the survey, you could ask participants to share their top three product features. In another, you could ask them to rank a list of features.
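If your screener tool exports responses as a table, you can score these checks automatically before inviting anyone. A minimal sketch, assuming hypothetical check columns and their expected answers:

```python
import pandas as pd

# Hypothetical check columns mapped to the only acceptable answer.
ATTENTION_CHECKS = {
    "attn_q1": "Strongly Agree",  # "Answer 'Strongly Agree' to show you're paying attention."
    "attn_q2": "Never",           # red herring about a product that doesn't exist
}

def count_failed_checks(screener: pd.DataFrame) -> pd.Series:
    """Count how many attention checks each respondent failed."""
    fails = pd.Series(0, index=screener.index)
    for col, expected in ATTENTION_CHECKS.items():
        fails += (screener[col] != expected).astype(int)
    return fails

# Usage: invite only respondents who passed every check.
# eligible = screener[count_failed_checks(screener) == 0]
```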
During the survey
Once responses roll in, conduct real-time checks to help you spot low-quality participants.
Monitor completion times: Completion times naturally vary from participant to participant, but when you notice an unusually fast or slow one, take a closer look at that respondent’s answers.
Watch for contradictions: Keep track of inconsistencies in a participant’s data. For example, if they say they’re very satisfied with a tool overall but rate it negatively across every individual feature, they may not be paying attention (see the sketch after this list).
Flag unnatural fluency or over-rehearsed responses: If a response doesn’t sound like how a person would normally talk or write, it may not be authentic. Up to a third of online survey takers have used generative AI tools to answer questions. To spot AI-generated answers, look for unusually long, agreeable answers written in oddly formal language.
Probe questionable answers in follow-up interviews: Use one-on-one interviews to clarify surface-level answers, explore contradictions, or test overly polished or generic responses. Ask concrete, open-ended follow-up questions that encourage participants to share real experiences, such as:
“Can you give me a specific example?”
“In what way was it confusing?”
“Can you walk me step by step through where you got stuck?”
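The satisfaction-versus-feature-ratings contradiction described above can also be flagged in bulk. A minimal pandas sketch, assuming an overall satisfaction column and hypothetical feature-rating columns on the same 1–5 scale:

```python
import pandas as pd

FEATURE_COLS = ["feat_search", "feat_export", "feat_sharing"]  # hypothetical 1-5 ratings

def flag_contradictions(responses: pd.DataFrame,
                        overall_col: str = "overall_satisfaction",
                        gap: float = 2.0) -> pd.Series:
    """Flag respondents whose overall rating disagrees with their average
    feature rating by more than `gap` points on the same 1-5 scale."""
    feature_mean = responses[FEATURE_COLS].mean(axis=1)
    return (responses[overall_col] - feature_mean).abs() > gap

# A respondent who is "very satisfied" (5) but rates every feature 1 has a
# gap of 4 points and gets flagged for manual review.
```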
After the survey
Screening doesn’t end when the last response comes in. Now it’s time for a deliberate review process to protect the integrity of your insights.
Follow these tips to eliminate any low-quality participants who may have slipped through previous stages.
Define what qualifies for exclusion: Before removing any data, define your exclusion rules, and be specific. For example, you might exclude any participant who fails an attention check, completes the study in under five minutes, or submits at least one questionable answer (a sketch of codified rules follows this list).
Separate unexpected from invalid results: Not every surprising response is bad data. Outliers can reveal edge cases or overlooked segments. For example, a user might share challenges with a generally well-liked feature that they're using for a novel use case.
Prioritize integrity over volume: Clean data is more valuable than a larger sample, so err on the side of removing questionable data even if it means you discard a larger percentage of your responses than you would like.
Use an incentive platform with fraud prevention capabilities: Incentivizing participants the right way can help you recruit higher-quality participants while encouraging them to stay engaged. Look for an incentive platform with robust fraud controls that check for reused IP addresses, participant country mismatches, and other fraud signals. These incentive-level fraud checks provide a final layer of security to help you catch bad actors before you send out incentives.
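Codifying your exclusion rules keeps the cleanup reproducible and gives you an audit trail. Here’s a minimal sketch that combines the example rules above with two of the fraud signals just mentioned; every column name is an assumption about your export, not a standard.

```python
import pandas as pd

def apply_exclusion_rules(responses: pd.DataFrame) -> pd.DataFrame:
    """Apply pre-defined exclusion rules and return the cleaned dataset.

    Column names are hypothetical; adapt them to your survey export.
    """
    flags = pd.DataFrame(index=responses.index)
    flags["failed_attention"] = responses["attention_check"] != "Strongly Agree"
    flags["too_fast"] = responses["completion_seconds"] < 300  # under five minutes
    flags["reused_ip"] = responses.duplicated(subset=["ip_address"], keep=False)
    flags["country_mismatch"] = responses["ip_country"] != responses["claimed_country"]

    excluded = flags.any(axis=1)
    print(f"Excluding {excluded.sum()} of {len(responses)} respondents")
    print(flags.sum())  # how often each rule fired, for your audit trail
    return responses.loc[~excluded]
```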
Final thoughts
Low-quality participants are an unavoidable reality of UX research, but they don't have to compromise your results. By knowing what to look for and building quality checks into every stage of your research process, you can protect the integrity of your data and focus time and budget on your most valuable users.


