Lessons from Malcolm Gladwell’s “Talking to Strangers”

“Malcolm Gladwell could probably make a pencil sharpener interesting…”

Imagine what he can do with a topic far more interesting than pencil sharpeners: talking to strangers. Gladwell has made a career (in writing, and now in a podcast) of examining seemingly ordinary things and presenting them in a new light. He is one of the few non-fiction writers whose work earns the label “page turner.”

Market research as a profession depends on talking to strangers, so this book would seem to be a must-read for researchers. And it is. Gladwell examines the cases of several people from recent news stories (Bernie Madoff, Amanda Knox, Jerry Sandusky, Sandra Bland) and uses theories from psychology to shed new light on how interactions between strangers can go wrong, and what we can do about it.

Why not see how that thinking applies to market research? Let’s start with the ideas.

Truth-Default Theory

Developed by Professor Tim Levine, and built on related prior theories, Truth-Default Theory rests on a simple foundational idea: people assume that others are basically honest. The assumption is evolutionarily useful and societally positive, since it is usually true. It allows us to live our lives free of paranoia and makes communication efficient.

The problem occurs when deception enters the conversation. When one person is operating under the truth-default condition, it is quite easy for the other to lie and get away with it, especially when the deceit is occasional rather than a consistent pattern. Most lies are detected after the fact, through external corroborating information; we are not good at detecting lies in real time.

That brings us to the second theory.

Transparency

There’s a TV show called “Lie to Me” about a group of consultants who specialize in lie detection. Psychologist Paul Ekman’s foundational work on facial micro-expressions of underlying emotions was used (with a good amount of artistic license) to create an interesting show. But the idea that a person’s behavior and demeanor faithfully reflect what they are feeling inside (i.e., that people are transparent) has since been refuted.

Considerable evidence shows that people are, in fact, not transparent. In casual interactions, even experts (such as trained law enforcement specialists) do little better than chance at detecting lies. Contrary to general understanding, nonverbal cues are not very useful. Some people are bad at lying, but they are far outnumbered by those who are good at it, and even good liars are not especially common (i.e., people tend to be honest). Hence, going by a person’s demeanor or behavior is not an effective way of detecting deception.

So, what does all this mean for market research?

Market Research Implications

Let’s start by not being too strict about the term “lying,” and instead think of it more broadly as being less than honest, which is more applicable in the market research context. Such behavior can be intentional (as with sensitive topics), but it is more likely unintentional, as with low-engagement respondents.

Let’s consider quantitative (surveys) and qualitative (say, IDIs) research separately. Truth-Default Theory applies more to the former, and Transparency more to the latter.

Quantitative Research

If most people are honest most of the time, then it is reasonable to assume that respondents are usually being honest when answering surveys. According to Levine, “Deception becomes probable when the truth makes honest communication difficult or inefficient.”

This should be familiar to survey researchers. Long, difficult surveys make it hard for respondents to answer both truthfully and efficiently, and thereby provide a motive to deceive. Here, deception means answering quickly at the expense of answering thoughtfully. It may not affect all respondents equally, and may not always produce inaccurate answers, but the risk is certainly elevated.

The simple remedy is to keep the survey within the bounds of what respondents expect in length and effort.

Research in psychology shows that a small proportion of people are prolific liars. It’s not unreasonable to think that a small proportion of respondents belong to this category, or that some are out to deceive by gaming the system. Hence it is essential to have quality-control checks with multiple ways of detecting such respondents through their data signatures, as well as other technological means.
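
As a rough sketch of what such checks can look like in code, here are two common data-signature tests: speeding and straightlining. The field names and thresholds are hypothetical and would need to be calibrated against real completion-time and grid-response data.

```python
from statistics import pstdev

# Hypothetical quality-control sketch: flag respondents whose data
# signatures suggest low engagement. The thresholds and field names
# below are illustrative, not industry standards.

SPEED_FLOOR_SEC = 180      # e.g., well below the median completion time
GRID_SPREAD_FLOOR = 0.1    # near-zero spread across grid items = straightlining

def qc_flags(respondent: dict) -> list:
    """Return quality-control flags for one survey respondent."""
    flags = []
    if respondent["duration_sec"] < SPEED_FLOOR_SEC:
        flags.append("speeder")
    grid = respondent["grid_answers"]  # ratings across a multi-item grid
    if len(grid) > 1 and pstdev(grid) < GRID_SPREAD_FLOOR:
        flags.append("straightliner")
    return flags

respondents = [
    {"id": "r1", "duration_sec": 95,  "grid_answers": [4, 4, 4, 4, 4]},
    {"id": "r2", "duration_sec": 510, "grid_answers": [2, 5, 3, 4, 1]},
]
for r in respondents:
    print(r["id"], qc_flags(r) or "clean")  # r1: speeder + straightliner; r2: clean
```

In practice such flags would feed a review step rather than trigger automatic removal, since a fast, flat response pattern is suspicious but not proof of deception.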

But there doesn’t appear to be any reason to think that survey respondents differ so dramatically from people in general that the prevalence of deceivers is high enough to invalidate the results.

By layering good quality-control measures on top of proper sampling, sound questionnaire construction, and appropriate analytic techniques, we can have confidence in the results that survey research provides.

So, in the case of survey research, its inherent nature limits the stranger problem, and what we learn is that we are already doing what we need to do. We just need to do it diligently.

Qualitative Research Is a Different Story

The deception effects discussed above are significantly mitigated by the interpersonal nature of qualitative research. What makes qualitative research susceptible instead is the potential to read too much into the data (i.e., the Transparency factor). This can be a problem when the research is done in person or through video, where the variety of non-verbal cues increases significantly.

Gladwell cites studies that show how easy it is for people to believe that they have read non-verbal cues correctly – when in reality they do no better than chance.

And most qualitative research involves a single person (such as a moderator) working alone. This lone operator may not have the benefit of being cross-checked by an independent party. Falsifiability is a foundation of good science, and this is a case where it can be unwittingly compromised.

One way to overcome the problem is to have two or more moderators working independently, but that remedy is expensive and rarely practical. A better option is to find something that can act as a validation check on the qualitative expert.

Text analytics can be such a resource.

An obvious advantage of text analytics is precisely what is usually seen as its big disadvantage: the inability to “read the room.” The algorithm reacts only to the text, not to intonation, demeanor, expression, accent, or any other contextual factor. This makes it essentially immune to misleading cues.
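
To make that concrete, here is a minimal, self-contained sketch of a lexicon-based sentiment scorer. The word lists are toy placeholders (real text analytics would use a trained model or a full lexicon such as VADER), but the key property holds: the score is computed from the words alone, with no access to tone, expression, or demeanor.

```python
# Toy lexicon-based sentiment scorer: it sees only the words in a
# transcript, never the intonation or body language behind them.
# The word lists are illustrative placeholders, not a real lexicon.

POSITIVE = {"love", "great", "easy", "helpful", "recommend"}
NEGATIVE = {"hate", "confusing", "slow", "frustrating", "broken"}

def sentiment_score(utterance: str) -> float:
    """Return a score in [-1, 1] based on word counts alone."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

transcript = [
    "I love the new design, it makes checkout so easy.",
    "Honestly the setup was confusing and frustrating.",
]
for line in transcript:
    print(f"{sentiment_score(line):+.2f}  {line}")  # +1.00 and -1.00
```

A moderator’s read of a session could then be compared against such text-only scores, with disagreements flagged for a closer look.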

Gladwell provides interesting examples that support this idea. For instance, when researchers compared judges with a machine learning algorithm on bail decisions, the results strongly favored the algorithm (measured in terms of crimes later committed by released defendants and reductions in jail populations).

The judges generally acted individually and had full access to verbal and nonverbal cues, while the algorithm worked only on the data it was fed. The point is not that judges should be replaced by algorithms, but that bias in the system can be reduced by non-traditional means (keeping in mind that algorithms themselves can carry biases in dealing with social problems, depending on how they are developed).
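
For flavor, here is a loose sketch of that kind of model: a classifier trained only on structured case features, with no access to demeanor. The data and features are entirely synthetic and invented for illustration; this is not the model from the Kleinberg et al. study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic illustration: a risk model that sees only tabular case
# features, never courtroom cues. All features and outcomes are made up.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 10, n),   # prior arrests (synthetic)
    rng.integers(18, 70, n),  # age (synthetic)
    rng.integers(0, 2, n),    # charge-severity flag (synthetic)
])
# Made-up outcome: risk of re-offense rises with prior arrests.
p = 1 / (1 + np.exp(-(0.4 * X[:, 0] - 2.5)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # ranked risk scores
print(f"mean predicted risk: {risk.mean():.2f}")
```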

But in market research, it’s not hard to see how a qualitative expert working alongside an analyst wielding a machine learning algorithm could mitigate biases and produce higher-quality outcomes.

In Summary

In Talking to Strangers, Malcolm Gladwell explores how that everyday act can go wrong, sometimes with tragic consequences. He asks us to understand what happens when strangers meet and why it’s not easy to turn a stranger into a familiar figure, and he urges caution and humility. In the process, he also helps market researchers understand how we might overcome some of the biases inherent in our research approaches.

References

Gladwell, Malcolm (2019), Talking to Strangers, Little, Brown & Company.

Levine, Timothy (2014), “Truth-Default Theory (TDT): A Theory of Human Deception and Deception Detection,” Journal of Language and Social Psychology, May.

Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan (2018), “Human Decisions and Machine Predictions,” The Quarterly Journal of Economics, February.