
Panel Size Is a Vanity Metric. Quality Is a Different Story.

DWG Admin on March 11, 2026


A research team needed 30 consumers for a two-day product evaluation. The specs were detailed but not unusual: primary grocery shoppers, aged 30 to 55, with children in the household, no prior participation in a food study within the past six months. The recruiting partner confirmed all 30 within 48 hours.

By the second day, the moderator flagged a problem. Several participants gave responses that felt rehearsed. Two couldn’t recall the product category they claimed to purchase regularly. One admitted, during a break, that she’d also participated in a food study the previous month through a different panel.

The project wasn’t salvageable. Not because the screener was flawed or the methodology was weak, but because the panel behind it couldn’t deliver what it promised. The team spent weeks designing a study that would produce actionable product development insights. They lost that investment not to a methodological error, but to a recruiting infrastructure that prioritized filling seats over filling them well.

Scenarios like this one are more common than most teams realize. They rarely make it into industry reports because the symptoms are ambiguous: inconclusive findings get attributed to weak discussion guides, low-energy groups get blamed on moderator style, and inconsistent data gets written off as natural variation. The root cause, panel quality, goes unexamined because it’s invisible to everyone except the recruiting partner.

The Metric That Gets All the Attention

Panel size is the first number most research buyers see when evaluating a recruiting partner. It’s in the pitch deck, the capability statement, the website headline. And it’s not irrelevant. A larger panel does improve the probability of finding niche audiences, reaching specific geographies, and filling studies on tight timelines.

But size alone reveals very little about whether those participants will show up prepared, engaged, and honest. It says nothing about how they were recruited, how recently their profiles were verified, whether they’ve been over-researched, or how the panel provider manages the inevitable churn that every community experiences over time. Size is the easiest thing to measure about a panel. It is also the least predictive of research quality.

The industry has recognized this at a conceptual level. Data quality has been a headline topic at major conferences for several years running, and most insights professionals can articulate why it matters. The gap is in how that awareness translates into partner evaluation. Too often, the conversation about panel quality ends at the RFP stage, with a checkbox for panel size and a vague question about fraud prevention.

What Actually Determines Panel Quality

A research panel is not a list. It’s a managed system, and the quality of that system depends on what happens at every stage of the participant lifecycle: how people enter, how they’re maintained, and how problems are identified before they reach a study.

Recruitment with intention. The distinction between a high-quality panel and a convenience sample often starts at the point of recruitment. Panels built through broad digital advertising or incentive-driven sign-up flows tend to attract participants who are motivated by compensation rather than genuine interest in sharing their perspectives. That’s not inherently disqualifying, but it creates a profile skew that compounds over time. Panels built through community engagement, referral networks, and diversified outreach tend to produce participants who are more representative, more engaged, and more likely to provide thoughtful responses.

Identity verification that goes beyond self-report. Asking someone to confirm their own demographics during sign-up is a starting point, not a safeguard. Effective identity verification layers multiple checks: cross-referencing profile data against third-party databases, using digital fingerprinting to flag duplicate accounts, and implementing re-verification at regular intervals rather than relying on a single intake screen. In an era where synthetic identities and professional survey-takers are increasingly sophisticated, verification needs to be continuous, not one-time.
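As a rough sketch of what "layered" can mean in practice (the check function, fields, and logic below are hypothetical stand-ins, not a real verification API):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    email: str
    device_fingerprint: str
    zip_code: str

def matches_external_record(applicant: Applicant) -> bool:
    # Hypothetical stand-in for a third-party database cross-reference.
    return True

def verify(applicant: Applicant, seen_fingerprints: set[str]) -> bool:
    # Layered verification: every check must pass, and the same checks
    # rerun at re-verification intervals rather than only at intake.
    checks = (
        matches_external_record(applicant),                     # profile vs. external data
        applicant.device_fingerprint not in seen_fingerprints,  # duplicate-account flag
    )
    return all(checks)

print(verify(Applicant("a@example.com", "fp-new", "27601"), {"fp-existing"}))  # True
```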

Active panel management. A panel that isn’t actively maintained degrades. Profiles go stale as life circumstances change. Participants who were engaged two years ago may now be disengaged, over-researched, or simply unreachable. Active management means regularly updating participant profiles, monitoring engagement health, enforcing participation frequency limits, and retiring members who no longer meet quality standards. It’s the operational work that doesn’t show up in a capability statement but determines whether the panel delivers when it matters.
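A minimal sketch of what that maintenance loop might look like; the thresholds are illustrative assumptions, not L&E policy:

```python
from datetime import date, timedelta

MAX_PROFILE_AGE = timedelta(days=365)   # illustrative staleness threshold
MAX_STUDIES_PER_QUARTER = 2             # illustrative frequency limit

def panel_action(last_profile_update: date, studies_this_quarter: int,
                 today: date) -> str:
    if today - last_profile_update > MAX_PROFILE_AGE:
        return "re-verify"   # stale profile: refresh before fielding again
    if studies_this_quarter >= MAX_STUDIES_PER_QUARTER:
        return "rest"        # enforce participation frequency limits
    return "eligible"

print(panel_action(date(2024, 1, 5), 0, today=date(2026, 3, 1)))  # re-verify
```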

Quality systems, not just quality claims. Most recruiting partners will tell you they prioritize data quality. The question is whether that commitment is structural or aspirational. ISO certifications (27001 for information security, 20252 for market research) provide an external verification layer. They don’t guarantee perfection, but they do confirm that documented processes exist, that those processes are audited, and that the organization has invested in the infrastructure required to maintain them. In an industry where “quality” is claimed by everyone, third-party validation carries weight.

A Structural Tension Worth Naming

One factor that quietly contributes to panel quality challenges is the blurring of boundaries between quantitative and qualitative panels. As demand for qualitative research has grown, some organizations have turned to quantitative panel sources to fill qualitative studies. The logic is understandable: the pool is larger, the cost per recruit is lower, and the timeline is faster.

The tradeoff is real, though. Participants recruited and managed for survey completion behave differently than participants recruited and managed for conversation-based research. The skills are different. The engagement expectations are different. The screening rigor required is different. A participant who excels at completing a 15-minute online survey may not be equipped to contribute meaningfully to a 90-minute focus group about product experience.

This isn’t a criticism of quantitative panels. They serve an essential function. But when qualitative studies are staffed from quantitative sources without adjusting for those differences, the result is often the kind of quality issue that surfaces mid-project: flat responses, inconsistent recall, and participants who feel like they’re completing a task rather than sharing a perspective.

Asking Better Questions

The insights professionals who get the most reliable participant quality tend to ask their recruiting partners a different set of questions than what appears on a standard RFP. They ask about recruitment sources and how those sources are diversified. They ask about the frequency with which participant profiles are updated and how long inactive members remain in the system. They ask about participation limits and how those are enforced. They ask what happens when a participant fails a quality check during a study, and whether that information feeds back into the panel management process.

These aren’t gotcha questions. They’re the kind of operational inquiry that distinguishes a team evaluating infrastructure from a team evaluating a brochure. And the answers reveal a great deal about whether a recruiting partner treats panel quality as a core capability or a marketing message.

There’s a pattern worth noting here. Teams that ask these questions early in the relationship tend to experience fewer quality issues throughout the project lifecycle. The questions themselves signal to the recruiting partner that quality will be monitored, not assumed. That accountability, established at the outset, shapes how the partner prioritizes your study relative to the dozens of others they may be running simultaneously.

The Infrastructure Behind the Insight

At L&E Research, we think about panel quality as an infrastructure problem because that’s what it is. Our panel of more than 1.6 million U.S. participants is recruited with intention, verified through layered identity checks, and actively managed to ensure that profiles stay current and engagement stays genuine. We maintain ISO 27001 and ISO 20252 certifications because we believe quality systems should be audited, not just asserted.

None of that is visible from a capability statement. It’s visible in the quality of the participants who show up to your study, prepared and engaged, ready to share something real.

That’s the different story. And it’s the one worth paying attention to.

Participant Retention Strategies for Small Research Teams

Chris on January 15, 2026

Participant retention breaks down in predictable ways for small research teams.

Infrastructure gaps cause the problem. Small teams juggle multiple clients, tight timelines, and scrappy budgets while managing methodology, client communication, and analysis. Research participant recruitment becomes one more task competing for limited capacity.

The consequences show up later. Participants confirm, then disappear. No-shows force replacements. Drop-off between recruit and session creates stress, delays, and budget leakage that small teams can’t absorb without consequences.

Understanding where retention breaks down shows why infrastructure matters more than effort.

Why Research Participant Recruitment Falls Apart for Small Agencies

Retention collapses at three predictable points, and small teams lack the capacity to prevent any of them.

1. No Dedicated Engagement Teams

Independent consultants and small agencies recruit while managing everything else. Confirmation calls happen when there’s time, not systematically. Participant questions go unanswered for hours or days. The relationship stays transactional because there’s no capacity to make it anything else.

2. Inconsistent Follow-up

A reminder email goes out the day before a session if someone remembers. Follow-up protocols exist in theory but not in practice. Participants who have questions or schedule conflicts slip through because there’s no systematic check-in process.

3. No Backup Infrastructure

When someone drops, there’s no replacement pipeline ready. Recruiting starts over from scratch. The study waits. The client waits. The consultant absorbs the stress and the timeline risk.

Higher no-show rates and more scrambling create a credibility problem with clients who expect reliable execution.

This reflects capacity constraints, not lack of competence. The real risk is what happens when retention problems aren’t solved: projects that looked profitable on paper start bleeding margin.

The Hidden Cost of DIY Recruitment

Many small agencies recruit participants themselves or use river sample sources because it feels faster and cheaper upfront.

What actually happens tells a different story.

Drop-off occurs between recruit and session. Someone confirms participation, then stops responding. By the time the agency realizes the participant won’t show, it’s too late to recruit a quality replacement.

No-shows force last-minute replacements that compromise sample quality. The replacement participant gets less vetting, less context, and less preparation time. The study moves forward, but the data quality takes a hit.

Recontacts don’t convert. Follow-up studies depend on bringing participants back, but participants who had a poor first experience don’t return. The consultant has to start recruitment over.

Rework eats time, margin, and client trust. Projects run late. Budgets stretch. The client questions whether the agency can deliver reliably. For independent consultants, that credibility hit affects future project opportunities.

River sample providers deliver volume without engagement infrastructure. Participants show up less reliably when there’s no relationship, systematic follow-up, or accountability built in. The provider hands off contact information. What happens after that is someone else’s problem.

The pattern repeats until someone changes the approach. Specialized recruitment partners exist specifically to solve what small teams can’t fix alone.

Why Teams Hesitate to Change Recruitment Approaches

Three objections come up repeatedly when small agencies consider working with specialized recruitment partners.

First, “River sample is cheaper.” It looks cheaper upfront. Drop-off, replacements, and rework add cost quickly, especially when your time and margin are on the line. The initial savings disappear when projects require emergency fixes.

Second, “High incidence is easy, I don’t need a recruiter.” Finding people is easy. Confirming them, engaging them, and delivering reliable attendance and thoughtful participation is harder. That’s where projects break down.

Third, “I just need bodies, not perfection.” Even basic studies suffer when attendance is unstable and participants are disengaged. The fix is almost always more expensive than preventing the problem upfront.

These concerns are valid. The question is whether the current approach is actually saving time, money, or stress. For most small agencies, the answer is no. The hidden costs accumulate until something has to change.

What Specialized Recruitment Partners Do Differently

Effective research participant recruitment requires infrastructure and systems that protect outcomes at scale. The lists below outline the core components, with a brief sketch of how two of them fit together after the final list.

Rigorous pre-screening goes beyond demographic checks:

  • Eligibility gets validated through multiple confirmation points
  • Motivation gets assessed to identify participants who will follow through
  • Red flags get caught early, before someone enters the recruitment pipeline

Systematic follow-up replaces ad hoc reminders:

  • Confirmation happens at specific intervals, not when someone remembers
  • Participants receive clear expectations upfront about session length, compensation, and logistics
  • Questions get answered immediately
  • Problems get caught before they escalate

Engagement protocols reduce drop-off:

  • Participants understand what to expect because communication is consistent and detailed
  • The relationship starts before the session, which builds accountability
  • Proactive problem-solving catches timezone confusion, tech issues, and schedule conflicts early

Backup recruitment infrastructure runs in parallel:

  • Alternates get recruited alongside primary participants
  • Replacement pathways are ready before sessions begin
  • When someone drops, the backup is already screened, engaged, and prepared

Quality control checkpoints happen before sessions start:

  • Validation occurs during recruitment, not after problems surface
  • Fraudulent participants, professional survey takers, and disengaged respondents get identified and removed while there’s still time to replace them
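The sketch below shows, under simplified assumptions, how two of these checkpoints (interval-based confirmation and parallel backup recruiting) reduce to explicit rules rather than ad hoc effort. The intervals and backup ratio are illustrative assumptions, not any vendor's actual settings.

```python
from datetime import date, timedelta

def confirmation_schedule(session: date) -> list[date]:
    # Systematic follow-up: confirmations at fixed intervals, not when
    # someone happens to remember.
    return [session - timedelta(days=d) for d in (7, 3, 1)]

def fill_roster(screened: list[str], needed: int, backup_ratio: float = 0.2):
    # Backup infrastructure: alternates are recruited alongside primaries,
    # so a drop-out already has a screened replacement.
    total = needed + round(needed * backup_ratio)
    return screened[:needed], screened[needed:total]

primaries, alternates = fill_roster([f"r{i}" for i in range(12)], needed=10)
print(primaries)                                  # 10 primary participants
print(alternates)                                 # 2 pre-screened alternates
print(confirmation_schedule(date(2026, 4, 20)))   # reminders at -7, -3, -1 days
```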

This infrastructure prevents the chaos small teams experience when retention breaks down. The difference is having a partner who owns the outcome through the entire process. That’s where L&E’s approach differs from transaction-based providers.

How L&E Supports Small Agencies

L&E’s research participant recruitment approach treats high-incidence audiences with the same care as low-incidence populations.

The Highly Engaged Panelists system is built on participant relationships and long-term engagement. Panel management focuses on engagement quality and meaningful participation. Participants stay involved through genuine partnership in the research process.

L&E owns the outcome. Attendance, participation quality, and follow-through are L&E’s responsibility, not something handed off for the agency to manage. When problems surface, L&E addresses them. When backup participants are needed, L&E has them ready.

The work happens at competitive rates with infrastructure already built. Small agencies get specialized recruitment without paying premium pricing.

The impact for small agencies is concrete.

Reliable attendance reduces no-shows and stress. Projects stay on schedule with participants who follow through.

Fewer emergency replacements reduce firefighting. Replacements are already screened and prepared when needed.

Better participation quality delivers more thoughtful responses. Engaged participants contribute stronger data when they understand why their input matters.

Successful recontacts eliminate starting from zero on follow-up studies. Participants with positive first experiences return for phase two.

Less rework protects both margin and reputation. Projects deliver on time without budget overruns or credibility hits with clients.

These outcomes matter because small agencies operate on thin margins. When retention is predictable, profitability stays intact.

What You Gain: Stability Without Scaling Your Team

Small agencies don’t need to hire dedicated recruiters or build engagement systems internally.

Specialized research participant recruitment partners handle screening, engagement, follow-up, and backup planning. The agency focuses on methodology and client relationships. The recruitment partner ensures participants show up prepared and engaged.

Projects feel predictable instead of stressful. Attendance holds. Clients trust the process. The consultant’s credibility stays intact.

Retention problems need systems that catch issues before they escalate.

Ready to Make Retention Predictable?

If participant retention is creating stress, rework, or budget leakage, L&E can take it off your plate. We bring the same rigor to high-incidence recruitment that we apply to low-incidence work.

Screening, engagement, follow-up, and backup recruitment are embedded into every project. You get reliable attendance and quality participation without building systems yourself.

Contact L&E Research to discuss your next project. Let’s make participant retention predictable instead of stressful.

What Rushed Research Is Getting Wrong

Chris on January 12, 2026

Speed sells in research. 

Clients want insights before the quarter closes, before the product ships, before the campaign runs. The pressure is real, and the promise of “real-time” research sounds like a competitive advantage.

Fast research often fails in ways teams don’t see until decisions have already been made. Sample quality drops. Validation gets skipped. Data collection methods get compromised when timelines compress.

The research looks complete, but it can’t support the decisions it’s meant to inform.

Where Fast Research Breaks Down

Three things go wrong when research gets rushed.

First, feasibility checks disappear.

A client needs 300 participants in two weeks for a complex segmentation study. The project launches immediately because no one wants to say no. Two days in, recruitment is struggling. The timeline was never realistic, but momentum has already committed everyone to a setup designed to fail.

Second, sample standards relax.

Recruitment shifts from “find qualified participants” to “fill the quota.” A study needs parents of children with specific health conditions. After a week of slow recruiting, the criteria quietly expand to parents of children who might have those conditions. The phrase “close enough” starts appearing in project updates. Teams tell themselves they’ll fix the sample later. They never do.

Third, validation happens too late.

Fraudulent responses, bot activity, and professional survey takers make it into the dataset because there’s no time to catch them during fieldwork. By the time analysis begins, the only option is damage control. Removing bad responses shrinks the sample size below what the study design required. The findings are presented anyway because there’s no time to recruit replacements.

These failures happen predictably because they’re built into how fast research gets executed. When feasibility isn’t evaluated upfront, when research sampling methods don’t include clear criteria, and when validation checkpoints aren’t built into the process, speed amplifies the problems instead of solving them.

Fixing them requires changing where quality controls get embedded.

Why L&E Data Collection Methods Start with Feasibility

When speed becomes the priority, planning discipline becomes the differentiator.

L&E approaches fast-turn projects as design challenges, not execution shortcuts.

Before any fast-turn project launches, three questions get answered. Is this research methodology realistic for the timeline? What could derail this project, and how do we prevent it? Will the data we can deliver in this timeframe actually support the decision the client needs to make?

A brand team once needed qualitative research on a low-incidence health condition with a two-week turnaround. Ethnographic observation wasn’t realistic. Neither was recruiting 50 participants who met narrow health criteria.

L&E recommended a smaller sample with in-depth interviews instead, paired with photo diaries to capture context. The methodology fit the timeline. The client got actionable insights. The alternative would have been launching a study that couldn’t deliver.

Feasibility assessment also catches mid-project scope changes before they break the timeline. When a client requests additional participant criteria halfway through recruitment, that change gets reviewed and re-validated. If adding the criteria makes the recruitment timeline unrealistic, we say so.

The team doesn’t quietly absorb new requirements and hope recruiting speeds up. Projects move quickly because the plan accounts for what’s actually achievable, not what sounds good in a kickoff meeting. Feasibility protects the timeline. The next layer protects the data itself.

Sample Quality Doesn’t Negotiate

Smart data collection methods maintain participant standards even when timelines compress.

If a study requires participants who are physically active but show physiological markers of sedentary lifestyles, those criteria don’t become flexible because recruiting is slower than expected. Relaxing participant selection standards introduces bias that no statistical adjustment fixes later.

Sample integrity either holds or it doesn’t.

L&E keeps recruitment sources fully traceable. Every participant can be tracked back to how they were sourced, when they were screened, and which qualification criteria they met. When a fraud pattern appears in the data, traceability allows the team to identify where it entered the process and address it systematically. Without traceability, teams guess.

Participant validation frameworks also operate during recruitment, not after. Identity verification catches duplicate participants before they enter a study. Engagement monitoring flags participants who rush through screeners or provide contradictory answers. Bot detection identifies pattern responses in real time.

Problems get addressed while there’s still time to recruit replacements.
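As a simplified illustration of in-field validation (the thresholds and answer fields are assumptions for the example, not a documented standard):

```python
def validation_flags(screener_seconds: float, answers: dict) -> list[str]:
    # Flags raised during recruitment, while replacement is still possible.
    flags = []
    if screener_seconds < 60:
        flags.append("speeder")        # rushed through the screener
    if answers.get("has_children") and answers.get("household_size") == 1:
        flags.append("contradiction")  # internally inconsistent answers
    return flags

print(validation_flags(45, {"has_children": True, "household_size": 1}))
# ['speeder', 'contradiction'] -> replace the participant during fieldwork
```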

The industry habit is to validate after data collection, when the only option is removing bad responses and hoping the remaining sample is adequate. That’s damage control, not quality assurance. L&E validates during fieldwork so the final dataset doesn’t require explanations about what had to be thrown out.

Real-Time Validation Using Strong Data Collection Methods

The gap between industry practice and effective validation reveals a larger issue: validation timing isn’t a workflow detail, it’s a data integrity decision.

Effective data collection methods build validation checkpoints into the process, not after it. L&E validates participant qualifications independently of the recruiter who sourced them, which means the person who recruited someone isn’t the same person who confirms they’re eligible. 

The moderator running a session isn’t the only person reviewing response quality either. Independence prevents confirmation bias from allowing questionable participants through.

Fraudulent participants, professional survey takers, and bots get caught during fieldwork because validation happens in parallel with recruitment. When problems surface, replacement recruitment begins immediately. The final dataset doesn’t include participants who were flagged as problematic but left in because it was too late to fix.

Data quality gets protected through structured checks, not statistical adjustments.

Transparency About What Speed Changes

Research teams need to know what compressed timelines actually mean for their data.

L&E labels interim findings clearly. If results are based on 60% of the planned sample because fieldwork is still ongoing, that’s stated upfront. If validation windows were shorter than standard, the report notes which checks were completed and which were abbreviated.

Clients get findings and limitations together, not findings first and disclaimers buried in footnotes.

Reporting also avoids overconfidence when speed reduces sample size or validation time. If a study was designed for 300 participants but only 200 were recruited in the available timeline, confidence intervals reflect the actual sample, not the intended one. Conclusions distinguish between what the data shows and what it suggests.
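A worked example makes the stakes concrete. Assuming a simple random sample and p = 0.5 (the widest case), the margin of error for the achieved sample is noticeably wider than for the designed one:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    # 95% confidence interval half-width for a proportion.
    return z * math.sqrt(p * (1 - p) / n)

print(f"designed n=300: ±{margin_of_error(300):.1%}")  # ±5.7%
print(f"achieved n=200: ±{margin_of_error(200):.1%}")  # ±6.9%
```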

Recommendations acknowledge when additional validation would strengthen the findings.

This matters because research informs decisions with real consequences. Marketing budgets get allocated. Product features get prioritized. Strategic direction gets set. When that research was conducted under time pressure, decision-makers need to know what constraints existed.

The alternative is presenting fast research with the same confidence as fully validated studies. That creates false certainty. Executives make decisions believing the evidence is stronger than it is. When those decisions don’t work out, trust in research erodes. Speed stops being an advantage when teams stop believing the insights.

Transparency about limitations builds trust and protects the foundation for future work. Feasibility, sample quality, validation, and transparency each depend on operational systems that support them.

Why L&E Can Move Fast Without Compromising Quality

L&E’s systems are built for speed from the ground up.

Standardized templates, checklists, and workflows mean teams don’t reinvent processes on every project. When a fast-turn study launches, protocols are already in place; instead of improvising, teams follow procedures tested across hundreds of projects.

Standardization allows speed without sacrificing consistency, and training removes judgment calls from routine decisions. When a recruiter questions whether a participant qualifies, the answer comes from documented criteria, not personal interpretation. When an analyst flags a suspicious response pattern, the escalation process is clear.

Decisions follow process, not instinct.

Every project has a named owner accountable for timelines, data integrity, and communication. That accountability intensifies under tight deadlines. Issues trigger documented escalation and corrective action. Problems get addressed through structured review, not ad hoc fixes.

L&E’s operational approach is backed by ISO 20252 certification, which establishes standards for research quality management. The certification serves as both a competitive differentiator and the structural foundation that enables us to maintain rigor even when timelines compress.

Speed works because the systems supporting it were embedded long before the deadline arrived.

When Systems Come First, Speed Works

Fast research fails when teams treat it as an execution challenge. Recruiters are told to move faster. Moderators compress discussion guides. Analysts get less review time.

The problem? Execution can’t fix what design broke.

Fast research succeeds when quality gets built into the framework upfront. Feasibility is confirmed before work begins. Sample criteria stay defined. Validation happens during recruitment. The research moves quickly because the plan accounts for what’s realistic, not what’s optimistic.

When speed becomes a liability, process becomes the differentiator.

Our approach is straightforward. Speed requests get more scrutiny, not less. Timelines get validated against what’s actually achievable. Sample standards don’t negotiate. Validation happens in real time. Transparency about limitations protects trust.

Ready to Move Fast Without Compromise?

If your team needs research that delivers reliable insights on tight deadlines, our systems are already built to support it. Feasibility assessments, sample quality standards, and real-time validation are embedded into every fast-turn project.

If you’re ready for change, it’s time to contact L&E Research today.

Beyond the Pattern: Insights Only Real People Can Reveal

Chris on December 8, 2025

Artificial intelligence is reshaping the insights industry at an extraordinary pace. AI qualitative research has introduced new tools, new models, and new workflows that continue to push the boundaries of what is possible, tempting researchers with faster timelines and cleaner datasets. Yet beneath that excitement sits an important question: where does synthetic data strengthen the work, and where does it still fall short?

To explore that question, L&E Research conducted two complementary studies. The first took place in the spring and centered on bathroom habits, an intentionally human, messy topic that revealed notable gaps in how synthetic respondents interpret personal behaviors and contextual cues. The second study, completed this fall, examined breakfast habits using a more advanced methodology built inside CondUX. This allowed us to test how synthetic respondents handled logic-heavy branching, object detection, and emotionally driven open-ended tasks, all in parallel with a panel of sixty real people.

Together, these studies gave us a grounded, evidence-based view of what synthetic data can do well and where human insight still matters. The findings are neither alarmist nor celebratory. They are practical and measured, shaped by what happened when we put two respondent types through the same workflows. The results offer a clear direction for researchers who want to use synthetic respondents responsibly and effectively.

What Synthetic Data Did Well

Across both studies, synthetic respondents performed strongly when the task relied on structure, logic, and clear informational cues. They handled formatted questions accurately. They moved through complex branching without friction. They delivered consistent reasoning, tidy patterns, and clean distributions. For general trends or macro-level norms, synthetic outputs often aligned with the human panel. For example, all groups recognized comfort and scent as primary drivers of bathroom product choices, and nearly all showed similar attitudes toward basic hygiene habits and price sensitivity.

In the breakfast study, synthetics performed equally well when the question centered on observable behavior. When asked to estimate the percentage of people who skip breakfast, they offered clean, narrow ranges grounded in public data. When asked about the types of foods people tend to eat in the morning, their answers mirrored common consumer patterns. In these cases, the synthetic panel provided quick, directional insights that were easy to analyze.

The engineering work invested in building L&E’s synthetic panel also made a meaningful difference. The persona engine introduced individuality and response variability. The use of model combinations created a balance between speed and quality. Deterministic logic ensured that the branching path followed expected rules. These choices produced synthetic respondents that were more mature, more consistent, and more capable than in our earlier testing. They were still pattern-driven, but they were better at producing human-like variation within those boundaries.
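To make the idea concrete, here is a minimal sketch of deterministic branching paired with persona-seeded variability; the personas, rules, and answer banks are invented for illustration and are not the engine described above:

```python
import random

ANSWER_BANK = {
    "What did you eat this morning?": ["cereal", "eggs and toast", "yogurt"],
    "Why do you usually skip breakfast?": ["no time", "not hungry", "saving calories"],
}

def next_question(eats_breakfast: bool) -> str:
    # Deterministic logic: identical answers always follow the same branch.
    return ("What did you eat this morning?" if eats_breakfast
            else "Why do you usually skip breakfast?")

def synthetic_answer(persona_id: str, question: str) -> str:
    # Persona-seeded randability: each respondent varies from the others,
    # yet stays reproducible and self-consistent across the session.
    rng = random.Random(persona_id + question)
    return rng.choice(ANSWER_BANK[question])

for pid, eats in [("p01", True), ("p02", False)]:
    q = next_question(eats)
    print(pid, q, "->", synthetic_answer(pid, q))
```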

All of this reinforces that synthetic respondents are valuable for certain uses. They can support early exploration, provide fast pulses before engaging real participants, and act as an efficient test bed for survey logic. They can also reveal formatting issues, help identify biases in question structure, and produce clean bulk data when speed is the highest priority. In these areas, the strengths of synthetic panels can save time, reduce cost, and support stronger research design.

Where Synthetic Data Fell Short

Although the synthetic panel handled structure well, it struggled consistently with emotion, contradiction, and personal context. When asked to share memories or describe feelings, synthetics provided warm vocabulary without lived grounding. Their emotional tone had polish, but not depth. In the smell memory task, synthetics created vivid scenes instead of authentic recollections. They described cinnamon and orange peels simmering on a stove, but never linked those images to real people, experiences, or moments.

This pattern repeated across open-ended responses. Humans spoke about parents, childhood routines, comfort, stress, and identity. They recalled mornings that had gone wrong, the rituals that held meaning, and the ways breakfast shaped their day. Synthetic respondents spoke in structured generalities. They offered interpretations that were plausible, but not personal. They followed linear reasoning, rarely contradicted themselves, and rarely displayed the spontaneity or unpredictability that characterizes real human behavior.

Visual outputs revealed similar limitations. In the bathroom study, synthetic images looked pristine and over-engineered, with little sense of real-world imperfection. In the breakfast study, freezer images generated by synthetics fell apart even as more effort was applied to improve variability. Increased engineering did not consistently yield more realistic results. In fact, the harder the system was pushed to create natural randomness, the more artificial the images became.

The most striking limitation surfaced when the AI was asked to classify the two datasets. It reviewed the human panel and synthetic panel and confidently asserted that panel A was human and panel B was synthetic. It was wrong. When asked why it misinterpreted the data, the model explained that it made its determination based on pattern recognition. It acknowledged that one dataset looked more structured and normalized, so it assumed that one was synthetic. The realization was important. Confidence did not mean accuracy, and pattern-driven thinking did not equate to human understanding.

This moment made something clear. Even when synthetic respondents look convincingly human, especially at first pass, the source of their output is fundamentally different. They respond to patterns. They do not respond to experience. Synthetic data is a powerful tool, but it does not replace the grounding that comes from real human insight.

Why Methodology Matters in AI Qualitative Research

The design decisions in the breakfast study highlighted the importance of building research that is intuitive for people. By leaning into image uploads, video responses, and object detection, the study created moments that felt natural and familiar. Participants engaged with the questions the way they would engage in everyday routines. This allowed us to observe how synthetics navigated the same experiences.

The differences appeared clearly. Humans offered wide ranges of interpretation because lived experience varies. Synthetic respondents stayed within narrow, predictable bands. When object detection revealed inconsistent behavior, humans explained it through life context. Synthetics explained it through clean logic. When mornings went wrong, humans shared stress, panic, humor, and self-reflection. Synthetics shared sequences. The structure of the study amplified the contrast between the two respondent groups, and it made the findings easier to interpret.

The quality of the methodology also demonstrated how a platform like CondUX can elevate insight. Designed for people first, the study flow became smoother and more intuitive. The same design principles can improve synthetic processing by clarifying intent and reducing ambiguity. This dual benefit creates an environment where human-centered design, strong logic, and modern tools support both types of respondents.

A Practical Future for Synthetic Data

The future of synthetic data in AI qualitative research is not a matter of replacement. It is a matter of fit. There are places where synthetic can strengthen the work and places where it cannot. The responsible path is to use it with intention, knowing when it provides value and when it introduces risk.

Synthetic data is useful for the early stages of research. It can help test surveys, explore broad ideas, compare multiple concepts, and simulate missing segments. It is efficient for bulk analysis and can generate large sets of open-ended comments when the goal is volume rather than nuance. It is a valuable tool for reducing cost and saving time, particularly in early exploration or when the task does not rely on emotional depth.

Human participants remain essential for the parts of research that require meaning. Emotion, trust, comfort, cultural context, loyalty, fear, hesitation, and memory are not yet areas synthetic panels can replicate. Humans tell stories. They contradict themselves. They surprise us. They make decisions that do not always follow logic. All of this matters when the goal is to understand why people behave the way they do.

The findings from these two studies reinforce a simple conclusion. The future of insights is hybrid. Synthetic provides speed and structure. Humans provide depth and truth. Together, they can help researchers balance quality with efficiency.

The Takeaway for AI Qualitative Research

Synthetic respondents have come a long way in a short time. They offer significant advantages in speed and consistency, and they can support researchers in promising ways. At the same time, they cannot yet replicate the complexity, unpredictability, or emotional richness of human behavior. When the study is simple, those gaps are visible. When the study becomes complex, they grow.

What our research showed is not a verdict about technology. It is a reminder that tools and people serve different roles. Synthetic is a supportive partner, not a substitute. Used wisely, it can strengthen the research process. But the heart of qualitative work still comes from people. Their contradictions, context, and lived experiences shape insight in ways no model can fully imitate.

If synthetic struggles with simple tasks, it will not be ready for the complex ones. That is not a criticism: it is a direction for where we go next. The work ahead will continue to blend strong methodology, thoughtful design, and human understanding with the power of modern AI. That is how we will keep learning, keep evaluating, and keep improving the tools that will shape the next era of insights.

The Proxy: The Magic Bullet of Healthcare Market Research

Chris on November 17, 2025

Healthcare innovation continues to advance beyond common conditions. The focus now includes developing new treatment options and improving quality of life for individuals diagnosed with rare diseases.

This shift highlights the importance of accessing insights from these unique populations, their caregivers, and healthcare providers. One of the most challenging aspects of market research in the healthcare sector is recruiting participants from the rare disease cohort. To meet required quotas, researchers often need to introduce flexibility within audience requirements, leading to the inclusion of proxy participants.

When carefully selected and well-defined, proxy participants can contribute to the success of a study. However, poorly chosen or inadequately defined surrogate participants can undermine the legitimacy of the insights and compromise the integrity of the data. To navigate these challenges successfully, it’s essential to understand how rare diseases impact the research process.

Rare Disease Impact on Research

While rare conditions pave the way for medical advancements through research, the more niche a condition, the less accessible those diagnosed become for participation in market research, particularly for studies requiring in-person involvement.

A rare condition is defined as one that affects fewer than 200,000 individuals in the United States and fewer than 1 in 2,000 people worldwide. Although each condition is uncommon on its own, more than 7,000 rare diseases collectively affect over 30 million people across the U.S. This limited patient recruitment pool makes traditional recruitment strategies impractical for rare disease studies.

For in-person qualitative research, incorporate proxy patients into the recruitment approach from the outset, provided the research is not a clinical trial and does not involve treatments or medication administration.

By pre-defining acceptable surrogate patients as a flexible component of the recruitment strategy, researchers can ensure a predictable pace, schedule, and budget. This minimizes the need for additional approvals and unexpected accommodations during the research process.

The Power of the Proxy

When evaluating the impact of including representative patients in a rare condition study, there are four key elements to consider:

Prevalence

Pre-identifying and approving surrogate populations to represent rare cohorts can elevate a project’s incidence rate (IR). It increases the likelihood that the project will meet its defined quota(s) within the specified timeframe and budget.
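As a hypothetical illustration of the arithmetic (the counts below are invented, not study data):

```python
# Invented counts, purely to show how approved proxies lift incidence rate (IR).
contacts = 10_000
rare_qualifiers = 40       # rare-cohort patients who pass the screener
proxy_qualifiers = 160     # pre-approved surrogate patients who also pass

print(f"rare cohort alone: {rare_qualifiers / contacts:.1%}")                           # 0.4%
print(f"with approved proxies: {(rare_qualifiers + proxy_qualifiers) / contacts:.1%}")  # 2.0%
```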

Accessibility

Reasonably budgeted in-person research requires clusters of patients in defined central locations. For rare conditions, clustering is challenging, if not impossible, without requiring patient travel.

Given the underlying diagnosis that qualifies them for the research, patient travel is often unrealistic.

Budget

Rare condition in-person research is costly. Without the inclusion of surrogate patients, researchers face two options: research team travel or patient travel.

Funding for rare disease research is often limited due to the perceived low return on investment for therapies prescribed to a small patient population, coupled with the high cost of drug development.

Stamina

The most researched rare conditions are often in the specialties of oncology, neurology, and pulmonology.

These diagnoses are frequently accompanied by decreased stamina due to disruptions in energy production, muscle function, and nerve transmission. Patients may decline market research study opportunities, especially those that are longer or multi-step, to preserve and protect their energy and health.

Defining Your Proxy

These four elements establish why proxy participants matter. The next step is defining who qualifies as an acceptable proxy for your specific study.

Evaluate and prioritize the negotiable versus non-negotiable qualifiers related to your study participants. Whether focusing on patient journey, preference research, or device/software usability, consider what can be replicated by a population with higher prevalence but similar pathways in terms of symptomatology, provider pool, and treatment experience. Strategic patient recruitment planning at this stage prevents delays and budget overruns later.

From there, acceptable surrogate patients can be defined.

Examples of successful patient proxies in qualitative market research:

  • Supplementing Tardive Dyskinesia patients with those diagnosed with Essential Tremor.
  • Recruiting stroke-induced aphasia patients to meet quotas for Landau-Kleffner Syndrome.
  • Utilizing cirrhosis patients with a diagnosis of Chronic Obstructive Pulmonary Disease (COPD) to recruit for Alpha-1 Antitrypsin Deficiency studies.

Patient Recruitment Considerations & Recommendations

Once proxy participants are defined, execution becomes critical. The following recommendations help ensure successful rare disease patient recruitment:

  • Proof of Diagnosis: When recruiting rare and representative patients, require participants to provide proof of diagnosis, typically in the form of an EMR sent via secure transfer to the recruitment partner.
  • Decentralize: Whenever possible, conduct the research remotely. This approach increases the pool of rare patient participants available while keeping costs low. For small device testing, consider shipping the prototype to the patient with prepaid return arrangements.
  • Simplify: Availability and stamina are prevalent concerns. Keep the research simple by avoiding long engagements and multi-step or multi-day study methodologies.
  • Transportation: Make transportation easy. Depending on the required diagnoses and/or the age of the participants, consider arranging transportation or providing an extra incentive for a family member or friend to assist with transportation.
  • Over-Recruit: The recommended over-recruitment rate for remote rare patient research is 20 percent, increasing to 25 percent for in-person research. Cancellation rates are higher among the chronically ill due to the unpredictability of their health on the study date and limited scheduling options. Building the over-recruitment into the plan from the outset helps ensure quotas are met within the designated timeline (a quick calculation follows this list).
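For illustration, those over-recruitment rates translate into confirmation targets like this; the target of 30 completes is a hypothetical example, not a guideline from this article:

```python
import math

def confirmations_needed(target_completes: int, over_recruit_rate: float) -> int:
    # Confirm enough participants that the study still seats its target
    # after the expected cancellations.
    return math.ceil(target_completes * (1 + over_recruit_rate))

print(confirmations_needed(30, 0.20))  # remote rare-patient study: 36
print(confirmations_needed(30, 0.25))  # in-person rare-patient study: 38
```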

Your Recruitment Partner

The right partner makes these best practices work.

A successful rare patient recruitment effort depends on the quality of your recruiting partner. Look for vendors with clinical expertise who can help design a workable recruitment strategy.

Partners with an established presence in the industry, affiliations with rare patient groups, and a proven track record in rare patient research deliver the best outcomes.

Proxy participants in rare disease market research offer a powerful strategy for overcoming the inherent challenges of recruiting these unique populations. Careful selection of surrogate patients, decentralized research efforts, and simplified study designs enable researchers to gather meaningful insights that drive healthcare innovation.

Quality and reliability of data improve with these practices, supporting effective treatments and improved quality of life for individuals affected by rare diseases. As medical research continues to push boundaries, thoughtful and strategic recruitment approaches remain essential in advancing our understanding and addressing the needs of these special populations.

Ready to Get Started? 

If you’re planning a rare disease study, our healthcare research team can help you recruit the right participants efficiently and reliably. 

With extensive experience in rare disease and specialty populations, we design studies that meet quotas, stay on schedule, and produce high-quality, actionable insights. 

Contact us today to discuss how we can support your research.

Qual vs. Bot: A Study So Real, It’s Artificial

DWG Admin on October 23, 2025

The research world is buzzing about synthetic respondents, but the question remains: can AI-driven panels deliver the same nuance, insight, and emotional depth as real people? As synthetic panel technology matures, researchers are grappling with when – and if – it makes sense to replace human participation with machine-generated responses.

Join L&E Research as we unveil the results of a brand-new case study designed to put synthetic respondents to the test. In this session, we’ll compare real and AI-generated participants across several research tasks, revealing surprising insights about where synthetic data delivers, where it doesn’t, and what that means for the future of research. Along the way, we’ll highlight a few innovative platform features that made this experiment possible.

This isn’t just a theoretical discussion. We’ve built synthetic panels using retrieval-augmented generation (RAG) models and compared them to real participants recruited via CondUX’s self-serve capabilities. The result? A compelling, unbiased look at when synthetic works, when it fails, and how researchers can smartly deploy it.
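For readers unfamiliar with the pattern, the sketch below shows the shape of a retrieval-augmented synthetic respondent: retrieve persona context, then condition generation on it. The corpus, scoring function, and generate() stub are illustrative, not the panel built for this study:

```python
def overlap_score(query: str, doc: str) -> int:
    # Naive relevance: count of shared words between query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    return sorted(corpus, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real implementation would use a model client.
    return f"[model response to: {prompt[:60]}...]"

persona_corpus = [
    "35-year-old parent, primary grocery shopper, skips breakfast on weekdays",
    "retired teacher who cooks a hot breakfast daily",
]
question = "What do you usually eat for breakfast on weekdays?"
context = " | ".join(retrieve(question, persona_corpus))
print(generate(f"Persona context: {context}\nQuestion: {question}"))
```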

Whether you’re skeptical, curious, or already testing AI in your research stack, this session will help you understand what’s hype – and what’s real.

During this webinar, we’ll explore:

  • What we tested: An overview of the research design, including how we structured parallel studies with synthetic and real respondents.
  • How responses differed: Key findings on where synthetic participants aligned with, or diverged from, human data.
  • Methodological implications: What our results suggest about the strengths and limitations of using AI-generated respondents in various research scenarios.
  • Workflow considerations: A look at how survey logic, branching, and object detection influenced participant experience and outcomes.
  • Practical takeaways: Where synthetic inputs can realistically support qualitative and quantitative goals, and where caution is still warranted.

From the Race to the Bottom to the Rise of AI

Chris on October 10, 2025

Each year, the Future Trends webinar gives us an opportunity to pause, reflect, and take stock of where the future of market research is headed. This year’s discussion was especially striking. Artificial intelligence (AI) is no longer a distant prospect on the horizon; it is here, shaping how we work, think, and deliver value.

As with every wave of innovation, AI forces us to reckon with what we’ve learned from the past. The insights industry has already lived through its own growing pains. For years, the “race to the bottom” drove down costs but left behind an enduring problem with data quality. That legacy continues to shape how we approach the work ahead.

The challenge before us now is simple in statement but complex in execution: how do we ensure that new tools like AI serve as a force for higher-quality insights, not just faster and cheaper outputs?

The Legacy of the Race to the Bottom

The story of the last decade in research is, in many ways, the story of a marketplace caught in a cycle of underbidding.

To win projects, companies slashed costs, often at the expense of participant incentives. That decision may have been expedient in the short term, but the long-term consequences were significant.

Participants became fatigued, undervalued, and, in some cases, disengaged altogether. Fraud crept in through the cracks. The result was an erosion of trust in the data itself, the very foundation of our work.

At L&E Research, we saw this problem emerging early and took it seriously. We invested in “research-on-research,” asking participants directly about their experiences, not just with us but across the industry. How did incentive levels affect their willingness to participate? How quickly did they expect to be paid? How did they feel about moderation and engagement styles?

These weren’t academic questions; they were existential.

When participants don’t feel valued, the quality of insights deteriorates. That’s why we aligned ourselves with industry-wide initiatives through the Insights Association and built fraud mitigation into our processes well before it became the industry’s headline concern.

The race to the bottom is part of the research industry’s legacy, but it is not our future. Having acknowledged how we got here, we now have the opportunity to move forward with stronger footing.

Data Quality in the Age of AI

Today, the conversation about speed and cost has been reignited by AI. Procurement departments push for faster, cheaper research. Sales teams feel pressure to deliver. And once again, quality risks being left behind.

However, the tools themselves are not the problem; it’s how we use them. AI can accelerate processes, but it can also strengthen outcomes if we put quality at the center of our applications. The choice is ours.

At L&E, we’ve seen firsthand how AI can be deployed to improve accuracy while also saving time. A recent case study with our CondUX platform is a powerful example. A client asked us to analyze nearly 200 photos submitted by participants. Traditionally, this would have taken a team of humans more than 18 hours to review and categorize. Using CondUX’s object detection capabilities, we reduced the process to just two and a half hours, including setup and quality control.

The time savings alone were impressive, but even more importantly, the AI surfaced errors that the human reviewers had missed. By flagging low-confidence images for human verification, CondUX didn’t replace human oversight; it enhanced it.
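The triage logic is simple to express; the threshold and detections below are illustrative, not the CondUX configuration:

```python
# Confidence-based triage: auto-accept high-confidence detections and route
# low-confidence images to a human reviewer.
REVIEW_THRESHOLD = 0.80  # illustrative cutoff

detections = [  # (image, label, model confidence): invented example output
    ("img_001.jpg", "coffee maker", 0.97),
    ("img_002.jpg", "toaster", 0.52),
]

auto_accepted = [d for d in detections if d[2] >= REVIEW_THRESHOLD]
needs_review = [d for d in detections if d[2] < REVIEW_THRESHOLD]
print("auto-accepted:", auto_accepted)
print("flag for human verification:", needs_review)
```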

This shift is significant. Qualitative research has long relied on asking participants to describe their behaviors and environments. Object detection allows us to observe instead. Rather than asking what’s on a kitchen counter, we can see it directly. Observation has always been at the heart of qualitative work, and AI now gives us new tools to scale it without losing authenticity.

The lesson here is clear: AI doesn’t have to perpetuate the mistakes of the past. If used wisely, it can reverse them. Instead of cutting corners on quality, AI can elevate it.

The Human Factor: Training, Oversight, and Storytelling

Yet even as we embrace new tools, one truth remains unchanged: humans are central to research. AI may be, as one panelist described it, “the best intern you’ll ever have.” But even the best intern still needs a manager.

AI can synthesize information, but it cannot think critically. It does not problem-solve. Left unchecked, it can amplify errors rather than resolve them. The risk of over-trusting AI is the risk of making high-stakes business decisions on faulty insights, a mistake no brand can afford.

That is why human-in-the-loop oversight is non-negotiable. Researchers must continue to bring context, domain expertise, and discernment to every AI-assisted output. AI may help answer “what,” but humans must still interpret “why.”

This balance between technology and humanity is not just relevant for today’s practitioners; it also defines the training of tomorrow’s researchers. Academic institutions play a critical role here. Just as earlier generations learned math without calculators, students today must learn the fundamentals of research without over-relying on AI.

If researchers don’t understand the basics, AI becomes nothing more than a “yes-person,” agreeing, generating, and emulating without questioning. Only those who have mastered curiosity, empathy, and storytelling will know when the machine is wrong, and more importantly, how to use it responsibly.

The future of market research belongs to those who can balance both: the efficiency of AI and the empathy of human interpretation.

Looking Ahead with Optimistic Caution

The insights industry is entering a period of remarkable transformation. Investment in AI and other technologies is accelerating, and the potential to make research faster, more scalable, and more accessible is undeniable.

Optimism must be paired with caution. If we lean too far into speed and cost, we risk repeating the mistakes of the past and recreating the very data quality challenges we’ve worked so hard to overcome.

The way forward is not about rejecting efficiency. It is about balance. AI should help us achieve all three points of the triangle: speed, cost, and quality, without sacrificing one for another. That balance is not easy, but it is possible. And it is necessary if we want our work to remain meaningful, relevant, and impactful.

What gives me confidence is the spirit of this industry. Time and again, researchers have shown the ability to adapt, innovate, and lead. We are not passive recipients of technology; we are active shapers of how it is applied. If we keep people – participants, clients, and researchers – at the center of our work, then tools like AI will not just make us faster or cheaper. They will make us better.

Shaping the Future of Market Research

The future of market research is not defined by technology alone. It is defined by how we choose to use it. The race to the bottom taught us that neglecting participant experience and data quality comes at a high cost. AI gives us the chance to learn from that history and write a different story, one where speed and cost efficiencies are balanced with quality, and where human expertise guides every technological advancement.

At L&E Research, we believe the path forward is not about replacing people but empowering them. With the right balance of tools and talent, the future of market research can deliver insights that are not only faster and more efficient, but also deeper, richer, and more reliable. That is the future we should all be working toward.

L&E Research and Qrious Insight Partner to Advance Behavioral Data Integration for Smarter Research

DWG Admin on October 1, 2025

Raleigh, NC: September 22, 2025 – L&E Research, a trusted partner in qualitative research recruitment and insights since 1984, launched a strategic partnership with Qrious Insight, experts in behavioral data and insights.

The collaboration integrates Qrious Insight’s passive metering technology into L&E’s panel apps and websites. This integration combines traditional qualitative and quantitative research with real-time behavioral data. Researchers and brands can now enrich surveys and qualitative insights by tracking app usage, website visits, ad exposure, search activity, and more.

For L&E, this partnership enables dynamic profiling of panelists based on actual behaviors, improving targeting and recruitment while unlocking new research capabilities and product offerings. It also creates a better consumer research experience through less intrusive engagements that offer passive income opportunities for consumers and patients, addressing the data quality issues pervasive in the insights industry.

L&E Research Perspective 

“We are excited to partner with Qrious Insight to offer research solutions that will disrupt the insights industry. Research began as an anthropological study of human behavior: we will now be able to offer brands and researchers alike the opportunity to both observe and ask consumers and patients about their brand experiences.

“Meanwhile, the number one complaint from consumers in qualitative research is the exhaustive questioning of their demographics and behaviors that rarely leads to opportunities to engage brands. This partnership will virtually eliminate this challenge. Brands are responding by focusing their data collection. Panel companies must do the same by investing in better solutions. This partnership is a win/win for everyone: Qrious, L&E, brands and consumers alike.”

Qrious Perspective 

“Market research has long relied on what people say, but behaviors provide a complementary, fuller view that helps close the say/do gap,” said Andrew Moffatt, CEO of Qrious Insight. “By building an always-on behavioral data network, we are creating a foundation for smarter research, strengthened analytics, and AI applications across the industry.”

About L&E Research

Founded in 1984, L&E Research is the leading expert in qualitative research and insights, trusted by top brands and agencies to create meaningful conversations between people and the brands they love. With a reputation for excellence, reflected in our 95% “highly recommended” ratings from clients post-project, L&E continues to set the standard for brand research in the U.S.

About Qrious Insight

Qrious Insight is a leader in behavioral data, providing advanced technology that captures and translates digital behaviors into actionable insights. By partnering with organizations that have established, first-party audiences, Qrious builds a network of behavioral data that empowers companies to better understand and serve their customers.

Security and Quality Aren’t Perks. They’re Prerequisites.

DWG Admin on September 15, 2025

When it comes to choosing a research partner, trust isn’t a luxury.

It’s the baseline.

That’s why two questions should always be front and center:

  1. How do you protect my data?
  2. How do you make sure nothing gets missed?

At L&E Research, we believe that security and quality are non-negotiable in ISO certified market research.

That’s why we’ve invested in dual ISO certifications: one for information security and one for research quality.

Very few partners hold both. Fewer still bake them into every project the way we do.

What It Means To Hold Both Certifications

At L&E, we understand that trust in ISO certified market research comes from two places: data security and process quality.

These two ISO certifications work together to cover both.

  • ISO/IEC 27001:2022 is the international standard for information security management. It ensures that your data is protected through formalized policies, risk assessments, employee controls, and encryption practices.
  • ISO 20252:2019 is the international standard for managing market, opinion, and social research. It ensures that every research project is executed with consistency, documentation, and methodological rigor across all phases.

Holding both means that we don’t just protect your data or deliver your research well.

We do both, every time.

Fewer Than Five Firms Hold This Dual Certification

The Insights Association, through its audit body CIRQ, has certified L&E Research to both ISO standards.

Fewer than five companies hold both certifications for ISO certified market research.

The dual status is rare and difficult to achieve. It represents a deep investment in systems, training, oversight, and continuous improvement.

When you work with a partner that holds this distinction, you choose a level of excellence that goes beyond a standard vendor relationship.

Why ISO Certified Market Research Matters To You

When you partner with an ISO certified market research firm that holds both standards, you gain tangible benefits across security, quality, and operational efficiency:

  • Peace of mind for IT and compliance teams. ISO 27001 certification assures that every layer of your project data, from client records to video files to survey data, is protected by one of the most respected security standards in the world.
  • Confidence in research integrity. ISO 20252 certification ensures your qualitative and quantitative research is managed with consistent documentation, governance, and methodological accuracy.
  • Faster onboarding, fewer surprises. Auditable standards reduce friction in vendor approval processes, especially in healthcare, financial services, and tech, where security and governance matter most.
  • Proof of excellence, not just promises. These certifications are independently verified, maintained through regular surveillance audits, and publicly listed through CIRQ. They reflect an organization that holds itself accountable.

Trust Is Earned

When we say we’re built for your peace of mind, it’s not just a promise.

It’s a process.

One that’s been rigorously audited, globally validated, and continually improved to meet the ISO certified market research standards you deserve.

Looking for a partner that holds itself to the highest global standards? Let’s talk.

The New ISO Standard Is Here. We’re Already There.

DWG Admin on August 26, 2025

At L&E Research, staying ahead of the curve isn’t just a business goal: it’s how we build trust. That’s why we are finalizing our certification to ISO 27001:2022, the latest update to the international standard for information security. This new version brings stronger safeguards, clearer structures, and more relevant controls for today’s digital landscape.

We are not waiting for a deadline to act. We are meeting the future of data protection now.

What is ISO 27001?

ISO 27001 is the international benchmark for information security management. It defines how organizations should structure, implement, and maintain safeguards that protect sensitive data. Being certified means our security practices have been reviewed and approved by an independent, accredited body through a formal audit process. For our clients and partners, it is a clear signal that we take information protection seriously and that we have the policies, procedures, and culture in place to prove it.

What’s Different About the 2022 Version?

The 2022 update introduces structural and practical improvements to the standard. While the core principles remain the same, the refinements help organizations better align with modern digital environments. Here’s what changed:

  • A more streamlined framework. The original 114 controls have been reduced and reorganized into 93, grouped into four categories: organizational, people, physical, and technological. This makes the standard easier to manage and apply.
  • New areas of focus. Eleven new controls were added, including items like cloud service security, data deletion, and threat intelligence. These additions reflect the realities of today’s digital ecosystems.
  • Improved clarity and alignment. Language updates throughout the document make the standard easier to understand and integrate with other ISO frameworks, such as those for quality or risk management.

While the changes may appear technical, the intention behind them is simple: to make security stronger, clearer, and more adaptable.

Why It Matters for Our Clients

Our upgrade to ISO 27001:2022 is about more than keeping up with industry standards. It reinforces our promise to protect the data and relationships that power your research. Here’s what it means for you:

  • Greater assurance that your data is secure. The updated controls reflect current risks and ensure that our practices remain aligned with the best available guidance.
  • Less time spent on vendor assessments. Certification to the latest version helps meet IT and procurement requirements faster and more efficiently.
  • Confidence that your partner is continuously improving. Our upgrade shows that we don’t wait for compliance deadlines to take action. We invest in systems that benefit you directly.

Part of a Larger Commitment

L&E Research is also certified to ISO 20252:2019, the international standard for quality in managing research projects. Together, these two certifications represent our focus on protecting both the integrity of your research and the information it contains.

We believe security and quality go hand in hand. Our commitment to ISO 27001:2022 is one more example of how we bring that belief into practice.

Want to learn more about how our certifications support your research goals? Let’s start a conversation. 
