For the 6th consecutive year, industry experts from Microsoft, Procter & Gamble, Greenbook, and L&E Research gathered to discuss the state of the market research industry and its challenges, particularly in relation to data quality and authenticity.  

Generative AI has been the hottest topic of conversation throughout most of 2023, so we jumped right in.  One might suspect that the discussion would focus on the unethical use of generative AI by research participants completing surveys and screeners, but Lenny Murphy shared an example that was far more nefarious.  When analyzing the latest results from the annual GRIT survey, the Greenbook team identified over 600 AI-generated responses, comprising almost 20% of their overall survey completions.  Suppliers should have techniques and processes in place to mitigate respondent dishonesty, but what we didn’t expect to see was fraud committed by organizations within our own industry.  Panelists who attempt to defraud research do so primarily for one reason: the incentive.  Although the GRIT survey is not incentivized, Charlie called attention to the fact that the GRIT results influence corporate budgets and the technology solutions that clients look to engage with.  By training AI to auto-complete the survey, these companies were attempting to skew the results in their favor and increase their business opportunities – an incredibly unethical act.  In fact, this level of deception is more appalling than a consumer participant trying to make a quick $75.  The greatest challenge and concern is that the AI-generated responses were difficult to distinguish from genuine ones because of their contextual accuracy and depth.

Continuing the conversation around data quality, Lenny then called attention to fraudulent or inauthentic participant responses within the industry.  He mentioned that around 70% of sample responses might be discarded due to authenticity concerns, and stressed the urgent need to address the crisis of data quality and authenticity within the market research industry.

What are some things research buyers and designers can do to improve the quality of the data they collect?  Lenny, Barry, and Charlie all offered suggestions that may help mitigate fraudulent data, such as:

  • Embedding non-conscious measurement in surveys to ensure responses are genuine

  • Implementing measures to ensure data quality before surveys are even conducted, using metrics and quality assurance steps to detect potential fraud and improve data integrity

  • Employing strategies like red herring questions and other techniques to identify and remove fraudulent responses (a minimal example follows this list)

  • Shifting focus away from the race-to-the-bottom pricing mentality and instead prioritizing high-quality sample providers with whom trust has been established
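
To make the red herring idea concrete, here is a minimal sketch of how trap questions might be checked programmatically.  Everything here – the question IDs, expected answers, and rejection threshold – is hypothetical, and real platforms combine many more quality signals:

```python
# A minimal sketch of red herring ("trap") question screening.
# The question IDs, expected answers, and rejection threshold are
# hypothetical -- real platforms combine many more quality signals.

TRAP_QUESTIONS = {
    "q17": "never",           # e.g., "I have personally visited the planet Mars."
    "q32": "strongly_agree",  # e.g., "Select 'Strongly agree' to show you're reading."
}

MAX_FAILED_TRAPS = 0  # in this sketch, any failed trap flags the response


def failed_traps(response: dict) -> int:
    """Count trap questions the respondent answered incorrectly."""
    return sum(
        1 for qid, expected in TRAP_QUESTIONS.items()
        if response.get(qid) != expected
    )


def is_suspect(response: dict) -> bool:
    """Flag a response for removal or manual review."""
    return failed_traps(response) > MAX_FAILED_TRAPS


# Example: filter a batch of survey responses before analysis.
responses = [
    {"id": "r1", "q17": "never", "q32": "strongly_agree"},
    {"id": "r2", "q17": "often", "q32": "strongly_agree"},  # failed a trap
]
clean = [r for r in responses if not is_suspect(r)]
print([r["id"] for r in clean])  # -> ['r1']
```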

Generative AI is a powerful tool, and it will do a lot of good.  Advances in algorithms and hardware have increased processing power, and manufacturers of neurotechnology devices are making rapid progress on scans and wearables.  Device manufacturers are looking for ways to embed AI-enabled sensors in wearables such as earbuds, smartwatches, VR headsets, and other consumer-based technologies.  While this technology is still in its infancy, and these devices can currently only detect low-frequency signals, the technology will get better, stronger, and faster.  This will allow practitioners, particularly in the medical field, to monitor brain functions in ways we probably have not yet realized.  Humans have limitations, but computers can process millions of data points and provide analysis in a fraction of the time.

Device manufacturers have also started exploring ways to use devices to regulate moods.  For example, an implant can suppress depressive thoughts, like a pacemaker for the brain, thereby enhancing the quality of life for those who suffer from clinical depression.  There may come a day when an implanted brain chip could control devices, which could be life-changing for someone who is paraplegic and unable to operate them with their own extremities.  Modern AI utilizes neural networks loosely modeled on biological brains, and generative AI creates content in a way that more closely resembles how a human brain would.  In our industry, researchers are already using AI tools to analyze, synthesize, summarize, and to some degree predict human behavior, leading to insights – but will there ever be a time when we can use brain scans to gain those same insights?  Will our understanding of the human brain, supported by AI, ever advance to the point where we can simply look at a brain map and extract the “why” behind consumer behavior?  At its current stage of development, AI’s potential lies in aiding human researchers; it should be treated like an assistant or a new hire, with its outputs carefully reviewed and validated by human researchers.

All this, of course, brings up a very important question: whose job is it to regulate AI development?  According to a survey published earlier this year by Ipsos, 53% of participants believe it’s the companies’ responsibility, and 44% believe it’s the government’s.  Regardless of who makes decisions regarding AI regulation, we have to make good choices early – we won’t be able to backtrack once the proverbial cat is out of the bag.

AI is going to create new challenges that all panel suppliers will need to recognize and resolve.  As generative AI gains popularity with the general public, we are likely to see an increase in its use when answering open-ended question types.  Articulation is how we gauge the responsiveness and engagement of potential participants.  If an articulation response is generated by AI, the candidate may look good “on paper”, but the response isn’t their own and may not be truly indicative of a quality participant.  Another challenge we are likely approaching is participants using AI to complete asynchronous research activities.  Safeguards need to be implemented on the technology side, such as fraud detection tools that alert researchers to suspicious responses that may be AI generated.
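
As an illustration only, a fraud detection tool might begin with simple heuristics like the sketch below, which flags boilerplate chatbot phrasing and near-duplicate answers across respondents.  The phrase list and similarity threshold are hypothetical; production tools rely on far more sophisticated signals:

```python
# A minimal sketch of heuristic flagging for possibly AI-generated
# open-ended answers.  The stock-phrase list and similarity threshold
# are hypothetical; production fraud detection tools use far more
# sophisticated signals than these two.
from difflib import SequenceMatcher

STOCK_PHRASES = [
    "as an ai language model",
    "in conclusion, it is important to note",
]
SIMILARITY_THRESHOLD = 0.9  # near-duplicate answers across respondents


def flag_answers(answers: dict) -> set:
    """Return respondent IDs whose answers warrant manual review."""
    flagged = set()
    items = list(answers.items())
    for i, (rid, text) in enumerate(items):
        lowered = text.lower()
        # Signal 1: boilerplate phrasing typical of chatbot output.
        if any(phrase in lowered for phrase in STOCK_PHRASES):
            flagged.add(rid)
        # Signal 2: suspiciously similar answers from different people.
        for other_rid, other_text in items[i + 1:]:
            ratio = SequenceMatcher(None, lowered, other_text.lower()).ratio()
            if ratio > SIMILARITY_THRESHOLD:
                flagged.update({rid, other_rid})
    return flagged


answers = {
    "p1": "I mostly shop online because it saves time.",
    "p2": "In conclusion, it is important to note that habits vary.",
    "p3": "I mostly shop online because it saves me time.",
}
print(sorted(flag_answers(answers)))  # -> ['p1', 'p2', 'p3']
```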

This does increase the need for honest and transparent communication.  When situations like these arise, clients should be diligent about sharing the details with their project manager.  Suppliers don’t want to provide clients with panelists who are only interested in shortcuts to a quick incentive, and clients aren’t interested in these candidates either.  Flagging bad actors must be a collaborative effort to make sure they are not used for other research.

The key will be understanding what motivates participants, taking steps to improve the participant experience, and getting all parties involved to collaborate and embrace best practices around that experience.  This needs buy-in from everyone: brands, agencies, researchers, technology providers, and panel providers.

So what are some things the industry can do to increase overall panel ecosystem health?

  • Write better, more concise screeners: A screener should have one objective: to qualify or disqualify a recruit.  When scripting recruitment screeners, there’s a simple rule that few follow but all should: if a question doesn’t determine an inclusion criterion or a quota, it should not be on the screener.  The screener is not an opportunity to do quant work; the time for gathering information is in the research session, not during the qualification phase.  What should a supplier do when extraneous questions are included, but the researcher insists “the client wants to know” in advance?  A good supplier will make recommendations that benefit all parties involved.  A supplier has a responsibility to both their client and the participants with whom they engage, so they must find a solution that ensures ecosystem health and fulfills the researcher’s objectives.  One option is to collect “profiling” questions in a re-screener, right before the session begins, or in the session itself.  Alternatively, a “getting to know you” assignment completed asynchronously after the qualification phase is a great way to minimize screening while still collecting data prior to the session.

  • Target the audience: A strong provider will have a powerful system designed to account for a multitude of participant variables, such as the ratio of screeners completed vs. screeners qualified, when someone last qualified for a study, and of course static data points such as gender, age, and ethnicity.  Accounting for these variables, the project manager can target the ideal candidates first, minimizing the overall outreach volume needed to fulfill recruitment and reducing screener fatigue across the panel (a rough sketch of this kind of scoring follows this list).

  • Balance your audience with high-incidence targets: Sometimes there’s a disconnect between the audience a brand wants to engage for their research and what is viable for panel suppliers to provide.  I’m not implying brands shouldn’t try to engage their low-incidence audiences – these customers may be key to uncovering the necessary insights.  However, projects can be very successful when conducted with a broad, balanced audience that includes both low- and high-incidence targets, for example brand users (low) vs. category users (high).  If the recruitment supplier is unable to secure the expected net for a low-incidence audience and options have become scarce, the researcher should be ready to be flexible and adapt their research.  While this pivot does require additional steps, such as updating the discussion guide to target an alternate audience, the researcher may find unexpected insights by engaging an audience they weren’t considering, and the adaptation could be a highly successful step toward meeting their research objectives.

  • Create a custom panel with your supplier: Many clients rely on a panel built for their unique research needs. A few examples…
    A researcher would like to conduct a series of interviews with a broad target audience, but each phase of research may target a narrower segment, so a custom panel is built around the overarching criteria for that audience.  From there, the research team can strategically apply a few screening questions to determine which recruits may be the best fit for each phase.  Breaking the screening process into separate conversations reduces panel fatigue: inclusion in the custom panel means the candidate will participate in some variation of this large multi-phase project, so they don’t feel they’ve wasted time qualifying for something they aren’t eligible for.  In fact, they may be eligible for several phases of the project, making the process far more satisfying and lucrative for the participant.
    Another example is utilizing a qualitative panel to conduct quant-qual hybrid research recruitment.  This can be a very cost-effective solution for projects that require the highest quality for both phases.  Your recruitment provider can supply a targeted sample for your quant survey deployment, and based on the responses received, qualitative recruits can be picked from this larger pool (or vice versa).

  • Compensate fairly: Treating participants respectfully means fair compensation for their time.  While budget is always a variable, trying to skimp on participant incentive isn’t where researchers should trim costs.  This is bad for the overall ecosystem, and will reduce response rates for your project.  Low incidence paired with a poor incentive may mean the recruits that are willing to participate aren’t of the highest caliber – you may get what you pay for.
    If the incentive budget is restrictive, find other ways to compensate participants.  If you’re testing a product, could the tester keep it?  Letting participants keep a device with a market value of $200 can make an otherwise modest cash incentive far more alluring.

  • Share “bad actors” and validate panels: If we identify that “Joe Schmo” has been cheating the system, then it’s our job to put a stop to it.  We may have recruited Joe for a study, only to see his face again three weeks later when he was recruited by a competitor for a study held at our facility.  Joe has clearly falsified his last participation, so removing him benefits us, our competitor, and most importantly the brands who trust us to provide honest recruits.  It also benefits the panel ecosystem, as he could have been taking a seat from someone who honestly and legitimately qualified.  Of course, the challenge is protecting PII – how do we share bad actors while maintaining industry-standard security measures?  Who manages the list of bad actors?  Further complications arise because bad actors are often deceptive enough not to use their real identity, so our “Joe Schmo” could be our competitor’s “Mo Schmo” (one hedged approach to sharing without exposing PII is sketched below).
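
First, the targeting logic mentioned under “Target the audience” above.  This is a rough sketch of scoring panelists for outreach priority; the fields, weights, and formula are hypothetical, and a real panel platform would tune them against its own engagement data:

```python
# A minimal sketch of scoring panelists for outreach priority.
# The fields, weights, and formula are hypothetical -- a real panel
# platform would tune these against its own engagement data.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Panelist:
    panelist_id: str
    screeners_completed: int
    screeners_qualified: int
    last_qualified: Optional[date]  # None if never qualified


def outreach_score(p: Panelist, today: date) -> float:
    """Higher scores get contacted first, reducing screener fatigue."""
    # Reward panelists who tend to qualify when they screen.
    if p.screeners_completed:
        qual_ratio = p.screeners_qualified / p.screeners_completed
    else:
        qual_ratio = 0.5  # neutral prior for brand-new panelists
    # Rest recently qualified panelists to spread participation around.
    days_rested = (today - p.last_qualified).days if p.last_qualified else 365
    rest_factor = min(days_rested / 90, 1.0)
    return qual_ratio * rest_factor


panel = [
    Panelist("a", 20, 10, date(2023, 6, 1)),
    Panelist("b", 20, 2, None),
    Panelist("c", 5, 4, date(2023, 11, 20)),
]
today = date(2023, 12, 1)
ranked = sorted(panel, key=lambda p: outreach_score(p, today), reverse=True)
print([p.panelist_id for p in ranked])  # -> ['a', 'b', 'c']
```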
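
Second, on sharing bad actors without exposing PII: one commonly discussed approach is for suppliers to exchange salted hashes of identifiers rather than raw names or emails, so lists can be compared without PII ever changing hands.  The sketch below assumes a shared salt (hypothetical), and it neither prevents guessing attacks on low-entropy identifiers nor solves the alias problem the panel raised:

```python
# A minimal sketch of a privacy-preserving "bad actor" check: suppliers
# exchange salted hashes of normalized identifiers instead of raw PII.
# The shared salt is hypothetical; note that hashes of low-entropy
# identifiers like emails can still be vulnerable to guessing attacks,
# and aliases ("Joe Schmo" vs "Mo Schmo") will evade a simple match.
import hashlib

SHARED_SALT = b"industry-agreed-secret"  # hypothetical agreed-upon value


def identity_token(email: str) -> str:
    """Hash a normalized identifier so raw PII never leaves the supplier."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(SHARED_SALT + normalized).hexdigest()


# Supplier A publishes tokens for its flagged panelists...
flagged_tokens = {identity_token("joe.schmo@example.com")}

# ...and Supplier B checks an applicant against the shared list.
applicant_email = "Joe.Schmo@example.com"
if identity_token(applicant_email) in flagged_tokens:
    print("Match found: route to manual review rather than auto-reject.")
```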

Our industry organizations cater to research buyers, providing information and helping them navigate emerging technologies and the suppliers whose solutions can amplify their research.  These organizations have started to explore better practices around data quality, diversity and inclusion, and the panel ecosystem, but more should be done to engage audiences to participate in research.  In short, we need PR for our industry.  How do we spread the message that market research is not a scam?  Skepticism is the natural reaction to project outreach when recruiting beyond existing opt-in panels.  Furthermore, how do we broaden public understanding of what market research is, how it benefits individuals and brand development, and how both parties play a role in that process?  This responsibility has largely fallen to panel providers, but as we discuss how the industry should collaborate, perhaps it’s time to raise awareness and explore ways all stakeholders can contribute to better recruiting practices and better participant management.

As the industry continues to shine a light on data quality issues, it’s important to remember that every action counts.  We all have a role to play in improving the panelist experience in market research, creating sustainable solutions, and making constant adaptations to maintain the credibility of research outcomes.

We hope you found this summary helpful!  If you didn’t register for this webinar, you can watch it in its entirety by clicking here.

Be on the lookout for our next webinar, which will be in Winter 2024.  If you can’t wait until then, you can always view our on-demand webinars.  Don’t forget to join our mailing list so you can keep up with what is happening at L&E!