
Participant experience in UX research: How to measure and improve it

By Kathryn Casna | 7 min. read | Feb 28, 2026


Participant experience can make or break a UX research study. You may have even dealt with the symptoms of a bad one, such as no-shows and disengaged participants. But when people feel respected and valued throughout the process, your study has a better chance of running smoothly. 

Our recent UX research survey confirms what many already sense intuitively: participant experience is the top factor when choosing an incentive platform, ranking above price, global coverage, and ease of administration. The experience you create for participants shapes the quality of everything downstream.

But how do you actually measure whether you're creating a good experience, and where do you start improving it? You're about to find out.

Why participant experience is a top priority for UX teams right now

UX researchers are dealing with compounding pressure. Our survey shows the three biggest challenges are:

  • Recruiting enough qualified participants

  • Maintaining data quality and reliability

  • Delivering insights quickly enough to influence decisions

These aren't separate problems. They're connected, and participant experience sits at the intersection.

When participants have a smooth, respectful experience, they show up when they say they will. They engage fully with tasks and questions. They refer colleagues. 

On the other hand, long, tedious research sessions decrease engagement and thoughtful responses. When the experience is poor — confusing screeners, vague instructions, slow or complicated payouts — you get no-shows, disengaged responses, and a shrinking participant pool.

Meanwhile, 30% of researchers report that AI and bots are actively degrading data quality. That makes genuine, engaged human participants more valuable than ever. Protecting that pool starts with treating participants well from first contact to final payout.

What "participant experience" means in UX research

Participant experience is the journey a person takes from the moment they're invited to a study to the moment they receive their incentive. It covers every touchpoint — and every friction point.

A quick look at each stage:

  • Invite and screening: Does the invitation clearly explain the study, time commitment, and reward? Is the screener survey respectful of their time and straightforward to complete?

  • Onboarding and logistics: Do participants receive clear, timely instructions? Do they know what to expect on the day of the session?

  • The session itself: Whether it's a usability test, diary study, or interview, is the experience well-organized and considerate of their effort?

  • Payout: Is the incentive delivered quickly, in a format they can actually use? Is the process simple?

Every stage is a potential drop-off point. Every friction point is a reason a participant doesn't come back — or tells others not to bother.

The participant experience scorecard

Tracking participant experience doesn't require a complex analytics setup. A small set of metrics, measured consistently, gives you a clear signal on whether things are improving or sliding.

| Metric | What it measures | How to track |
| --- | --- | --- |
| Show rate | Percent of recruited participants who attend scheduled sessions | Attended sessions divided by scheduled sessions |
| Completion rate | Percent of participants who finish the study once started | Track at each stage to see where people are dropping out |
| Screener conversion rate | Percent of screener starts that result in a qualified, confirmed participant | Screener completions divided by screener starts |
| Payout redemption rate | Percent of incentives redeemed within 7 days | Lean on your incentive platform's reporting |
| Payout support tickets | Volume of participant questions or issues related to payment | Support or email volume |
| Repeat participation rate | Percent of participants who join more than one study | CRM or panel tracking |
| Participant satisfaction score | Direct rating from a post-session survey | 1-question post-session survey (1-5 scale or NPS) |

Track these metrics over time, not just per study. A single low show rate might be a fluke. A downward trend across three studies points to a real problem.
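As a minimal sketch, the scorecard can be computed from per-study funnel counts exported from your scheduling and incentive tools. All field names here are hypothetical, not tied to any specific platform:

```python
# A minimal sketch of the scorecard, assuming you can export per-study
# funnel counts from your scheduling and incentive tools.
# All field names are hypothetical.

def rate(part: int, whole: int) -> float:
    """Return a percentage, guarding against an empty denominator."""
    return round(100 * part / whole, 1) if whole else 0.0

def scorecard(counts: dict) -> dict:
    """Compute the scorecard metrics from raw funnel counts."""
    return {
        "show_rate": rate(counts["attended"], counts["scheduled"]),
        "completion_rate": rate(counts["completed"], counts["started"]),
        "screener_conversion": rate(counts["qualified_confirmed"], counts["screener_starts"]),
        "redemption_rate_7d": rate(counts["redeemed_within_7d"], counts["incentives_sent"]),
        "repeat_rate": rate(counts["repeat_participants"], counts["unique_participants"]),
    }

# Example study with made-up counts.
study = {
    "scheduled": 25, "attended": 20,
    "started": 20, "completed": 18,
    "screener_starts": 120, "qualified_confirmed": 30,
    "incentives_sent": 18, "redeemed_within_7d": 15,
    "unique_participants": 18, "repeat_participants": 4,
}
print(scorecard(study))
```

Computed per study and stored, these numbers make the trend question (fluke or pattern?) easy to answer at a glance.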

Tip: Build verification into your process.

Nearly all UX researchers (88%) acknowledge participant fraud as a problem, and 78% believe fraud risk varies significantly by region. Your experience metrics and trust controls need to work together; strong completion rates mean little if fraudulent participants are inflating them.

The highest-impact ways to improve participant experience

Most participant experience problems trace back to a small number of fixable issues. Here’s how to tackle some common ones.

Improve recruitment targeting 

Fixes low qualify rates, disqualified-participant drop-off, and distorted funnel metrics

According to User Interviews’ State of User Research 2023, 70% of researchers report struggling to find enough participants who match their criteria. So before optimizing anything else, audit your screener-to-qualify rate. If it’s consistently low, the problem isn’t experience — it’s targeting. 

Tighten your invite audience ASAP. Targeting problems compound every downstream metric, and you can’t experience-optimize your way out of a bad audience match.

Write clearer invitations

Fixes low screener conversion and no-shows

Participants don't show up when they're confused or uncertain. A clear invitation removes that doubt: it states the study topic in plain language, the time commitment up front, and the exact incentive amount.

A well-written invitation also lays the groundwork for informed consent. GOV.UK guidance on designing participant information sheets is a useful resource for what participants need to know.

Optimize your screener

Fixes unqualified participants and drop-offs

A screener that's too long or too leading frustrates qualified participants and trains them to game it. 

Keep screeners focused: ask only what's necessary to confirm eligibility. If a question doesn't directly determine fit, cut it. An analysis of more than 42,000 screeners found that each additional open-ended question meaningfully increases drop-off. Aim for fewer than 10 questions with no more than one or two open responses. 

AI can help you draft an initial question set, but review carefully for leading phrasing: LLMs have a tendency to write leading questions that undermine screener neutrality.

Send timely, specific reminders

Fixes no-shows and late arrivals

Remember that participants have full lives outside your study. That means a confirmation email alone isn't enough. 

Participants who feel the team is prepared and communicative are more likely to show up prepared themselves. In appointment-based settings, automated reminders reduce no-show rates by up to 40%. So in addition to a confirmation message, send reminders a week out, 24 hours before the session, and again 1 hour before. Include the session link, the start time in the participant's local time zone, and a direct contact in case something comes up.
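The reminder cadence above can be scripted. This sketch assumes sessions are stored in UTC and each participant has an IANA time zone on file; all names are illustrative:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Reminder cadence: a week out, 24 hours before, and 1 hour before.
REMINDER_OFFSETS = [timedelta(weeks=1), timedelta(hours=24), timedelta(hours=1)]

def reminder_times(session_utc: datetime, participant_tz: str) -> list[datetime]:
    """Return reminder send times expressed in the participant's local time."""
    local_session = session_utc.astimezone(ZoneInfo(participant_tz))
    return [local_session - offset for offset in REMINDER_OFFSETS]

# Example: a session at 17:00 UTC for a participant in Chicago.
session = datetime(2026, 3, 10, 17, 0, tzinfo=ZoneInfo("UTC"))
for t in reminder_times(session, "America/Chicago"):
    print(t.isoformat())
```

Rendering the times in the participant's zone, rather than yours, is the detail that prevents the most confusion.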

Systematize the session experience

Fixes disengaged responses and data quality

Participants should know what to expect, feel that their input is valued, and not be left waiting or confused mid-session. Consistency matters, too, especially across moderated usability tests or longitudinal studies. 

Brief every moderator before sessions begin and have a basic script to follow. Additionally, AI tools that handle real-time note-taking and synthesis let moderators focus on participants rather than the transcript.

Send incentives fast

Fixes payout friction and low repeat rates

Slower, less flexible incentives are worth less to participants. (One perceived-value comparison: a $9.50 prepaid Visa card versus $13.37 for a mailed check.)

Participants notice when payouts are slow or complicated. Delayed rewards signal disorganization — or worse, that the team doesn't follow through. Aim to deliver incentives within 24 hours of session completion. For more on structuring your incentive program, see how UX researchers approach incentives.

Incentives are where participant experience is won or lost

Incentives are the most visible signal participants receive that their time was valued. They're also the part of participant experience that can create a lot of friction, especially for global studies.

When UX researchers evaluate incentive platforms, our data shows clear priorities:

| Priority | Percent of UXRs who ranked it highly |
| --- | --- |
| Availability in multiple countries | 68% |
| Rewards in local currency | 62% |
| Speed and reliability | 56% |
| Security and compliance | 52% |
| Easy administration | 48% |

The most commonly used reward types are retail gift cards (72%), prepaid Visa/Mastercard (64%), PayPal (28%), and bank transfers (22%).

Given the data, try these practical principles:

Match reward type to participant preference. Participants in some regions won't redeem a US-based retail gift card. Offer options or use a platform that automatically serves locally relevant rewards.

State the incentive amount clearly in the invitation. Ambiguity creates hesitation while specificity builds trust.

Automate payout where possible. Manual reward delivery creates delays and introduces errors. The fewer steps between session completion and reward delivery, the better.
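To make the first principle concrete, here's a minimal sketch of serving region-appropriate reward options from a team-maintained mapping, with a cash-equivalent fallback. Every entry is illustrative, not verified market data:

```python
# Illustrative mapping of country codes to reward types a team might
# maintain; entries are examples, not verified market data.
LOCAL_REWARDS = {
    "US": ["retail_gift_card", "prepaid_visa", "paypal"],
    "DE": ["prepaid_mastercard", "paypal", "bank_transfer"],
    "IN": ["local_gift_card", "bank_transfer"],
}

# Cash-equivalent fallback for markets without a curated list.
FALLBACK = ["prepaid_visa", "bank_transfer"]

def reward_options(country_code: str) -> list[str]:
    """Return locally relevant reward types, falling back to cash equivalents."""
    return LOCAL_REWARDS.get(country_code.upper(), FALLBACK)

print(reward_options("de"))  # ['prepaid_mastercard', 'paypal', 'bank_transfer']
print(reward_options("BR"))  # ['prepaid_visa', 'bank_transfer']
```

A lookup like this is also a natural place to enforce the fallback rule from the global pitfalls below: when in doubt, default to cash equivalents.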

How much research participants want to be paid

Get the report

Global participant experience pitfalls to avoid

Most UX researchers (82%) send at least 6% of their incentives to participants outside the US, and 24% send more than half of all incentives internationally. Yet global incentive delivery remains one of the most error-prone parts of the research process.

Here are the most common friction points and how to address them:

Currency conversion (55% of global teams report this as a challenge)

Participants who receive rewards in a foreign currency often can't redeem them easily, or they lose value in conversion fees. To fix that, use a platform that pays out in local currency by default, rather than converting from USD.

Local reward availability (48%)

A $50 Amazon gift card means nothing in a market where Amazon doesn't operate. Before launching an international study, confirm that the reward options you're offering are actually available and desirable in each participant's country. 

If you're unsure, default to cash-equivalent options like prepaid cards or local e-wallets.

Tax and regulatory friction (42%)

In some jurisdictions, research incentives above a certain threshold trigger tax reporting requirements for participants. This creates a surprise administrative burden that sours the experience and could deter future participation. 

Work with your legal or finance team to understand reporting thresholds in key markets. If your platform handles tax form collection automatically, that's a significant advantage.

Limited delivery channels (34%)

Email-delivered rewards don't always work for participants who primarily access the internet via mobile. Meanwhile, bank transfers aren't available everywhere.

Offer multiple delivery options and ask participants how they prefer to receive their reward; together, these steps remove the most common last-mile failure points.

Participant experience checklist

Put best practices into, well, practice. Ensure participant experience is always on your team’s radar. Run through this checklist before every study to confirm you've set up a participant-friendly process from start to finish.

Before recruitment:

  • [  ] Invitation copy states the study topic, time commitment, and incentive amount clearly

  • [  ] Screener is scoped to eligibility-critical questions only

  • [  ] Reward type and amount are appropriate for the participant's region

  • [  ] Incentive delivery method is confirmed and tested

After scheduling:

  • [  ] Confirmation sent immediately after scheduling

  • [  ] Reminder scheduled for 24 hours before the session

  • [  ] Reminder scheduled for 1 hour before the session

  • [  ] Session logistics (link, time zone, and contact info) included in every communication

During the study:

  • [  ] Participants receive a clear brief before the session begins

  • [  ] Moderators are aligned on structure and timing

  • [  ] Technical setup is tested before participants join

Post-session:

  • [  ] Incentive delivered within 24 hours

  • [  ] Post-session satisfaction survey sent (optional but recommended)

  • [  ] Participant data logged for repeat participation tracking

Key takeaways

Participant experience shapes the quality of your research and your ability to run studies at scale over time. Treat it as a discipline, not an afterthought, for more reliable data, better show rates, and participant pools that grow rather than erode.

Here's what to take away:

  • The core challenges UXRs face (recruitment quality, data reliability, and speed) all connect to how participants are treated throughout the study.

  • A short scorecard gives you a reliable signal on whether experience is improving. Track metrics over time to see patterns and alter course.

  • The highest-impact fixes are often simple: clearer invitations, tighter screeners, faster payouts, and timely reminders.

  • Incentives are the most visible signal of how much you value participants' time, and faster, flexible rewards deliver the most value.

  • Global teams face specific friction around currency, local reward availability, and tax compliance. Address these proactively for each market you operate in.

How to recruit participants for UX research studies in 10 steps

Read the article