Keeping It Clean: Fighting Fraud in Online Survey Research
July 7th, 2025
Introduction
Since the early days of online surveys in the late 1990s, data quality has been under constant attack. It’s a never-ending game of cat and mouse between fraudsters (or bots) trying to game the system and research companies trying to stop them. The fraudsters want to steal incentives with minimal effort, while research teams want clean, trustworthy data.
In recent years, the problem has gotten more complex—and more automated. AI-driven bots are now one of the biggest threats, capable of completing thousands of fake surveys at inhuman speeds. According to a recent study from GreenBook, as much as 15–30% of online survey traffic can be fraudulent, depending on the sample source and study design.¹
At TRC Insights, we take this threat seriously. That’s why we’ve built a proactive, three-step approach to maintaining data integrity: Identify – Investigate – Resolve. This process not only ensures data quality on individual projects but also feeds our growing suite of automated detection tools. We have used and refined this process for many years, and it continues to evolve.

A Real-World Example: Market Mapping Research Study
The research team at TRC builds many checkpoints into the project lifecycle to catch suspicious activity across ad-hoc studies, trackers, and longitudinal studies.
Let’s walk through a recent case from one of our B2B market research projects. We were conducting a Market Map Study for a large software client, targeting industry professionals. This was a multi-wave project with historical data, and we had the benefit of a consistent research team across all waves—something that helps spot anomalies quickly.
The fielding phase started off normally, but once we moved into an oversample that targeted a low-incidence group, things changed. The subgroup completed fielding remarkably quickly, raising red flags for our research team. After discussions with our sample partner, we discovered that a new panel was being used to fill the subgroup. That revelation was enough to trigger a deeper look from both our research and project management teams.
Red Flags: What We Found
We found several clear indicators of bad data:
1) Typing speed anomalies: Some “respondents” had keystroke rates of 50–120 characters per half-second, which is physically impossible for a human. Normal rates fall in the range of 1–5 characters per half-second. Even more telling, the bot responses spent less than one second in total typing the open-end response, something no human could do.
2) Demographic and device inconsistencies: We also compared responses across panel partners and against past waves of data. We found the most obvious inconsistencies in demographics and passive data (like device type).
3) Open-end removals: A staggering 76% of open-ended responses were flagged by our system. That’s an immediate warning sign. Our project manager manually reviews flagged entries every day to confirm the system’s judgment. In most cases the system was correct, and the responses showed several clear indicators: they were completely off topic, grammatically broken, or simply did not answer the question.
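The typing-speed check described above can be illustrated with a short sketch. This is not TRC’s actual implementation; it simply assumes keystroke activity is captured as character counts per consecutive half-second interval, and applies the thresholds mentioned in the text (humans typically type 1–5 characters per half-second, while the fraudulent responses hit 50–120 and finished in under one second total). The function name and threshold constants are illustrative.

```python
# Illustrative thresholds based on the ranges described in the text.
HUMAN_MAX_CHARS_PER_HALF_SEC = 5   # plausible upper bound for human typing
MIN_TOTAL_TYPING_SECONDS = 1.0     # sub-second totals are suspicious

def flag_typing_anomaly(chars_per_half_sec):
    """Flag an open-end response whose typing behavior looks non-human.

    chars_per_half_sec: character counts captured in consecutive
    half-second intervals while the respondent typed.
    """
    if not chars_per_half_sec:
        return False  # no typing data to judge
    peak_rate = max(chars_per_half_sec)
    total_seconds = 0.5 * len(chars_per_half_sec)
    # Flag if any interval exceeds a human typing rate, or if the
    # entire response was "typed" in under a second.
    return (peak_rate > HUMAN_MAX_CHARS_PER_HALF_SEC
            or total_seconds < MIN_TOTAL_TYPING_SECONDS)

# A bot pastes 110 characters in a single half-second interval:
print(flag_typing_anomaly([110]))         # True
# A human types a few characters per interval over two seconds:
print(flag_typing_anomaly([3, 4, 2, 3]))  # False
```

In practice a check like this would feed a review queue rather than trigger automatic removal, so a human can confirm the system’s judgment, as described above.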
Resolution
We likely would have found the fraudulent data through our normal process, but the quick fielding enabled our research team to discover the issues in real time. We made the decision to remove 100% of the data from the new panel partner. While a few respondents may have been legitimate, we prefer to err on the side of caution, especially when our clients rely on us to inform major business decisions.
This case also fed directly into our continuous improvement process. Every incident like this helps us train our detection systems and refine our quality checks for future studies.
We’re always exploring new ways to catch low-quality or fraudulent respondents. That includes using tools like ResponseID, and more recently, analyzing the number of characters entered every half-second to flag suspicious open-end responses. Our goal is to make these checks increasingly automated, while still maintaining the human oversight that ensures accuracy.
At TRC Insights, we’d rather take a bit more time to deliver high-quality data than rush something out that could mislead our clients. In the end, their strategies depend on having the right information.
Looking Ahead
Online survey abuse isn’t going away anytime soon. But we’re not standing still either. From advanced detection algorithms to common-sense human review, we’re staying one step ahead of the bots in this game of cat and mouse. We also keep in close touch with our panel partners throughout the fielding process and stay current on trends in digital fraud.
We’ll keep fighting the good fight, so our clients get the real story—not robot noise.
¹ Source: GreenBook Research Industry Trends (GRIT) Report 2024