The lies we tell ourselves about insights

For our industry to evolve, we need to reward truly desirable behaviors

This is the second piece in a series about lies we choose to believe, which began in the previous edition with a discussion of authenticity.

We live in times that are hard to understand, marked by rapid, constant change and a fragmented grasp of reality - exactly the context in which the insights industry should be thriving and, some argue, should even have C-suite representation in companies. What misaligned incentives prevent this from happening as it could?

Are we applauding nonsense?

The bankruptcy of our collective critical thinking as a side effect of our media consumption seems to be showing up in insights and research too:

  • Content production is perhaps the most powerful tool for image and reputation building in B2B today and consequently, for acquisition, especially for challengers - just look at discussions about thought leadership, founder-led growth, etc., regardless of company size. This is partly because in the last decade we've saturated all possible outbound channels, and truly segmented paid media (LinkedIn Ads) is expensive and not everyone gets good returns from it.

  • Public interest in consumer behavior and trends is both a blessing and a curse - it's what Scott Galloway would classify as a "sexy job" - one that attracts attention but doesn't necessarily provide proportional compensation, and what Grant McCracken says attracts the wrong people for the wrong reasons (essential reading for insights people!). It's like what happens during the World Cup: suddenly everyone becomes an armchair coach, convinced they could outstrategize the national team's actual coach - typically someone with multiple championships and decades of expertise. Consumer behavior is a subject that suffers from the same problem: even those who don't have the slightest idea what they're talking about feel entitled to give opinions, and on social media, it doesn't matter if there's substance or truth, only if it generates identification and resonates.

  • Typical B2B audiences are dramatically smaller than B2C ones. No short video about attribution, CRM, or more technical marketing topics can achieve the reach, engagement, or audience of a more general topic such as a makeup tutorial, with rare exceptions like Chris Walker. The solution part of the market found was to produce more superficial content aimed at engaging laypeople and the general public, fundamentally repackaging things people already believe, or things that have always happened, as if they were new.

The problem with content becoming so important is that we've gone from technical (but boring!) formats like webinars for discussing important topics to a content production and consumption dynamic closer to B2C: shallower, faster, broader in appeal, and with a huge perverse incentive to produce poorly thought-out, misleading, but highly engaging nonsense.

Want some examples? Here's a list of a few I've already deconstructed here before:

  • Tradwives as a trend: view counts ≠ behavior adoption; not everything we watch inspires us or makes us want to emulate it. Add the fundamental dynamic of social media: outrage is highly engaging. And the entire evidence trail leads back to a single BBC article that spoke with two women.

  • Generation Z's mini-retirements: at a time full of pressing questions about young people's professional futures, such as NEETs (Not in Education, Employment, or Training) and the risk to junior and entry-level positions in knowledge work, this nonsense is what gets attention. The evidence that exists: two testimonials from HR people and a few viral videos, all from the US, where paid vacation is not required by law except in 4 states. Coincidence?

  • Generation Z "doesn't drink anymore": First, the myopia: if Generation Z is 13 to 28 years old, a large part of the cohort can't even legally drink yet. Second, the old premise that "if it happens in the US/developed world, it will happen in Brazil next" - the problem is that we only count the times this holds and ignore the times it doesn't. Third, consider the largest study covering the subject in Brazil, with 9,000 telephone interviews across all macro-regions of the country and questions about past behavior (last 30 days) rather than future intention (to stop, to reassess consumption, etc.): 18-24 year-olds are the most likely to drink excessively when they drink (with 25-34 year-olds very close), they are the age group with the second-highest prevalence of abusive consumption and, according to historical data, one of the groups in which abusive consumption is increasing, not decreasing (numerically, though not statistically - a minimal sketch of that check follows below). They are also the most likely of all age groups to have had an episode of alcohol-related amnesia in the last 12 months, to have drunk upon waking to ease a hangover, and to consume six or more drinks on one occasion at least once a month. They are drinking, indeed!
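
To make the "numerically, though not statistically" point concrete, here is a minimal sketch of the kind of check it refers to: a two-proportion z-test comparing prevalence across two survey waves. The counts below are invented placeholders for illustration, not figures from the study cited above.

```python
# Minimal sketch (not from the cited study): a two-proportion z-test asking
# whether a rise in prevalence between two survey waves can be distinguished
# from sampling noise. The counts are invented for illustration only.
from math import sqrt, erfc

def two_proportion_ztest(hits_a: int, n_a: int, hits_b: int, n_b: int):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)            # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical waves of 1,200 respondents each: prevalence goes from 20.0% to 22.5%.
p1, p2, z, p = two_proportion_ztest(240, 1200, 270, 1200)
print(f"wave 1: {p1:.1%}  wave 2: {p2:.1%}  z = {z:.2f}  p = {p:.3f}")
# A 2.5-point numerical increase, but p is roughly 0.13: the data cannot
# distinguish it from noise at conventional significance thresholds.
```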

The traditional press is indeed in crisis and has been in this game of clicks first and journalism second for a long time, which should make us use secondary data much more carefully. The problem is when those who should separate the wheat from the chaff replicate this kind of nonsense or use it as evidence in their analysis!

As a historical figure whose name I won't repeat here said, a lie repeated many times becomes truth.

The seduction of oversimplifications

But it's not really news that people have been selling oversimplifications as insight, and it's not necessarily just the result of algorithms.

A certain French anthropologist, who unfortunately is still celebrated in some circles, wrote a book in which he claimed to have uncovered humanity's hidden cultural code, the thing that shapes perceptions, behaviors, and emotional responses - the key to understanding everything related to consumption! And the fieldwork behind this "revolution"? There wasn't any - he based it on Jungian archetypes, reducing complex cultures to one or a few codes and, of course, reinforcing stereotypes: Germans are (all) (only) disciplined, the French are (all) (only) hedonistic, and so on. Kind of like the Instagram branding folks who say your brand isn't taking off because you chose the wrong archetype.

The contrast with the understanding of nuance and the intellectual humility of a Darcy Ribeiro, who dedicated his life to studying the formation of our people, is scandalous. That alone should be reason for skepticism. As Carl Sagan would say, extraordinary claims require extraordinary evidence.

But the industry swallowed it whole. And we're not talking about the general public or the less educated—we're talking about executives at massive global companies, even though he was eventually (partially) exposed for lying about his resume. What does this reveal about our gullibility and our hunger for magical solutions to deeply complex problems? Amazing how a thin veneer of intellectualism can grease the wheels of deception, isn't it?

I won't tell you who he inspired in Brazil, our lawyers wouldn't let me, but I'll cite Grant McCracken for the second time in this text because he made a much more restrained and elegant critique of this guy's work than I would.

Insights: (should be) the "fact-checking" area?

Our industry should be more like Neil deGrasse Tyson or Bill Nye - experts who make the complex accessible by translating evidence-based ideas into practical applications for clients. Instead, too many of us have adopted the influencer-doctor approach, peddling testosterone pellets as 'hormonal optimization' and thinking the right business model is telling clients what (they think) they want to hear, or whatever has more engagement potential on social media. The result? Viral TikTok videos passed off as 'movements,' a 'trend' consisting of half a dozen people in Brooklyn, 'insights' about generations that haven't even been born yet, and diverse groups of 50 million people treated as if they were a single homogeneous entity. This isn't insight - it's intellectual malpractice.

If we're truly in the business of delivering actionable insights - useful ideas based on data (and yes, rigorously gathered qualitative data counts!) - our greatest value comes from confronting prevailing opinions, baseless narratives, and silent consensuses with hard facts and critical analysis. This productive friction is what hones strategy to a razor's edge. While we love to sell those 'aha!' moments of discovery, the real worth of our work often lies in asking 'Wait, have we misunderstood this entirely?' It's about weaving together disparate methodologies and diverse perspectives to extract meaning from complexity - a process perfectly captured by sensemaking, a powerful concept that has sadly fallen by the wayside in recent years.

Sensemaking is applied critical thinking in action. Sensemaking means connecting dots between seemingly unrelated data from diverse sources in non-obvious ways, helping decision-makers distinguish between “seems like” and “really is”. Sensemaking is an inherently multidisciplinary idea, even though there are anthropologists, psychoanalysts, behavioral scientists, and neuroscientists who want to treat human understanding as a monopoly of their disciplines.

We're in the business of "hard truths" and lateral thinking, but the market buys convenient lies in bulk.

If scale were synonymous with quality, we'd eat fast food every day

Good pizza is rare, even though the method to create it is well known.

Any efforts to make it more convenient, cheaper or easier will almost always make it worse.

If you think this post is about pizza, I’m afraid that we’re already stuck.

Seth Godin

For more than a decade, most of the insights industry has been putting scale ahead of quality. Undoubtedly, there is a customer market willing to pay for predictability, and there are circumstances where speed really is what matters most. Perhaps many simply don't know how to evaluate quality. The problem is thinking that because fast food is well known, always the same, and fast, you should eat there every day.

In the meantime, some problems persist, spread throughout the market:

  • Insufficient quality controls and fraud in online panels - a problem the industry itself created with the race to the bottom! After this bombshell (ten years and $10 million in fraud!), will end users of research pay more attention to it?

  • There’s been a proliferation of companies selling operational capabilities rather than expertise. Think about it: will an online panel provider or research tech platform ever advise you that phone or in-person data collection would yield better results? Of course not. This approach shifts the methodological decision-making burden entirely to the client—who often lacks sufficient background knowledge to make these technical choices effectively. Is there a lack of alternatives that are simultaneously full service and lean?

  • The market simultaneously expects quality data yet is willing to buy research in a SaaS subscription model, ignoring the hidden cost of stuffing the same few participants with questionnaires while their rewards are reduced. Who do you think they squeeze to make it viable, their margin or the operational cost?

  • Biased recruitment with opaque processes and no bias control in qualitative studies - the end result is obvious, but there are plenty of people who don't understand the importance or who pretend not to see.

  • Participant engagement and experience are consistently treated as a minor concern, even though they demonstrably impact quality. The bar has been lowered so much that there is now an international standard. The more we treat this as an accessory issue rather than a central one, the more quality falls, and the perceived value of the work and of the category falls with it.

The parallel with the idea of enshittification, which I've also written about, is direct: as on the major digital platforms, enterprise clients (the end client who wants to pay as little as possible, is in a hurry, and often doesn't know how to evaluate quality - from our side, the worst type) are privileged in the short term at the expense of the users (the research participants), who are ultimately the raw material without which the work doesn't happen.

Each endorsement of this model is a vote in favor of factory farming of human insights - are we defending models we truly believe in with our budgets?

And what is the great solution proposed for these challenges?

Amid the general euphoria about AI, the current hype around synthetic data only reinforces that, for some, the race to be faster and cheaper - and to shove AI into parts of the process where it perhaps shouldn't be - matters more than solving the real problems. Wouldn't it be reasonable to use today's technological possibilities to solve the problems that still persist? AI helps with many things, but it doesn't do metaphysical transmutation yet.

This isn't necessarily an argument against synthetic data, partly because this is a complex discussion, full of nuance, with very different performance depending on the use case (and extremely biased benchmarks for measuring effectiveness!). It's an argument against selling it as a panacea, using it indiscriminately, and using it to manufacture consensus so that humans can eventually be removed from the equation - in my humble opinion, the biggest risk. It's the same manufactured-consensus strategy of "online panels are good enough, whoever is against them is a Luddite" - adopted without really knowing what's inside those panels - that brought us to where we are in price anchoring and leveling down. I've talked about this before too!

It turns out the "Luddites" were right - so much so that most studies where coverage is fundamental, like election polling and the Consumer Price Index, continue to be done with more comprehensive collection methods (and there are countless cases where this holds for the market as well), methods that are indeed more expensive and slower. Why? Because they're still the best alternative! These are choices that need to be more deliberate and less automatic. The difference now is that we have (much) more to lose!

One thing already apparent in some use cases is that the discourse of those who sell is different from that of those who have actually tried it, especially among technical users and researchers - we have to choose carefully whom we listen to.

It's kind of like the chocolate industry thinking the great revolution is hydrogenated vegetable fat. The good side is that this is exactly the market context that sets up conditions for a Dengo or a bean-to-bar wave to start...

This isn't a discussion about using AI or not - there are already incredible things in our daily lives involving transcription, translation, data visualization, and analysis support tools that accelerate and improve deliveries, reduce operational work, but don't negatively affect quality or put the process in a black box. It's about where, how, and why to use it.

And to top it off, the elephant in the room: this search for scale ultimately serves to sustain bloated and inefficient structures and generate value for partners or shareholders, not necessarily to deliver more quality to the client. Do the cobbler’s children have no shoes when it comes to customer centricity? With leaner structures and more discerning clients, would this subject still attract the same attention?

"Bureaucracy is expanding to meet the needs of the expanding bureaucracy." -

Oscar Wilde

Don't confuse me with the facts

Rubbing more salt in the wound: what not everyone knows is that a large part of the work of larger consultancies and research companies is done in white-label arrangements with smaller consultants or firms. In theory, everyone is happy: the smaller company pockets six figures, the larger company that outsourced and sold the project pockets seven, and the end client - who privileged brand over delivery to protect their position and to have someone to blame if the project goes wrong - is happy too. In practice, the end client seems to get the worst part of the deal and may be making a very irrational choice, especially if the idea was to pay for brand.

Now explain this to me: if you're paying for size or brand as a proxy for trust, predictability, or low risk, how does it feel to know that your supplier is outsourcing to a smaller one and that you could pay much less for the same delivery? At a time when the manufacturing and origin of luxury products is under so much discussion, isn't it time to reevaluate?

We've known for a long time that the variables affecting quality in insights are the origin and quality of the data and the technical capacity, creativity, and applicability of the analyses. If buyers already know this, why are the incentives so misaligned? Is it habit? Death by consensus in a decision that passes through so many different hands? Fear of change? Stockholm syndrome?

Is there an alternative? There is, but we need to understand our priorities when choosing.

More operational efficiency can also mean a lean or modular structure centered on the expertise of those doing the work and the quality of the raw material, so that quality comes first without the cost being eye-popping or Veblenian. To continue the gastronomic analogy, just look at Jay Fai or Izakaya Toyo: both founders are still in the kitchen working very hard, treat their ingredients with enormous care, and deliver incredible value for money, and their public, generally foodies with a broader gastronomic repertoire, ends up being the one that understands and is willing to pay for quality.

In this context where we can automate more and more of the repetitive and operational work, this makes even more sense.

Can we transpose Dieter Rams' principles of good design to organizational design in insights and be more like an F1 team and less like an ocean liner attempting a handbrake turn? I argue that we can, but then buyers need to reflect more on why they're buying, especially because...

Status and affiliation are poorly disguised decision factors in B2B

It's stating the obvious to say that B2B purchases are less rational than they seem. Fear is the obvious emotion guiding many purchases, encapsulated in the well-known phrase "nobody ever got fired for buying IBM." But status and belonging also play an essential role in B2B - just look at the size of the Brazilian delegation at some international events and the tsunami of posts during and after. Is it about the content? Tangentially, but clearly not only, and content seems to be less and less the main motivation. How many corporate decisions that are actually about status and belonging have you seen rationalized as investment? How does this affect the choice of partners in your company?

Qualitative is fundamental and generally where most of the great discoveries really happen, if we do it well

We close our presentations with "It is by logic that we prove, but it is by intuition that we discover," a quote from Henri Poincaré - a mathematical genius, theoretical physicist, and philosopher of science famous for taking a broader view of both mathematics and science as a whole, treating scientific theories not as mirrors of reality but as conventions: tools we use because they work better for understanding a given problem, not out of dogma or epistemological affinity. In a way, his thinking contributed to the idea of mixed methods, which came later. I think that already says a lot about how we treat the subject around here.

During market cooldowns or under pressure on speed and cost, there are researchers who say "we'll need to relax our rigor" or "oh, but we're not saving humanity or running a clinical trial." No, but we are supporting business decisions that affect not just clients' finances but the lives of thousands of people, often critically. The errors can be costly financially, reputationally, and socially, or lead to chain reactions, like doubling down on a flawed hypothesis. The old adage of garbage in, garbage out undoubtedly applies to qualitative as well.

We can't take so little responsibility for the results of our own work and then preach to others about purpose, active listening, systemic vision, and regenerative design.

Moreover, intuition, contrary to what some believe, is not a magical power, but rather an accelerated process of decision-making based on previous experiences and consolidated knowledge, and its quality derives from these two things - but it's far from infallible.

Our brain is a wonderful pattern-recognition machine. The problem is that it's so wonderful that it often sees patterns where there aren't any: Jesus on a slice of toast, 11:11 on the clock, a puppy in the clouds, a butterfly in a Rorschach test, and so on. That's why...

Theory and hypothesis formulation needs to be more than entertainment

I've written previously about the importance of looking at things that numbers can't show. Here, we're talking about the other side of the coin. External validity is often a minor concern in the strictly qualitative way of seeing the world. That's why the idea of triangulation is so important: many mono-method and mono-disciplinary views will have serious blind spots or generalize poorly.

The very choice of words in how findings are communicated reveals intention and the capacity for self-criticism. Someone who recognizes the limitations of their own methods will say "it seems," "the data suggests," "it may be that," or even present the findings as a question. That is the antithesis of "Generation ABC wants D," "The new era of XYZ has begun," "X is the new Y." Sounding confident can help sell and persuade, but where do we draw the line? Isn't the very dynamic of content production an incentive to propose theories without rhyme or reason, playing mainly on our novelty, confirmation, and representativeness biases?

Quantitative: we need to learn to read the nutrition labels

Month in and month out, a study appears in the press along the lines of "Brazilians are willing to spend more on sustainable products." Years come and go, landfills keep filling, Asian fast fashion keeps growing double digits, and the category leaders largely remain the same. How is this possible?

It's the oldest problem in research - the say-do gap - and there are various technical strategies to get around it that this type of study consistently ignores. To cite just the obvious ones:

  • Avoid or rephrase questions for which there is a morally correct answer. Social desirability bias!

  • Questions about specific contexts and circumstances are always better than general questions - maybe the person is willing to spend more on, say, razor blades but not on deodorants. Otherwise they answer with a generic context in mind that doesn't exist, and you're left with data that says nothing.

  • Measuring past behavior over a reasonable interval is always more accurate than measuring future intention. I may intend to go to the gym five times a week, avoid fried food, and save 50% of my salary, but is that what I actually do? We're terrible at imagining our future selves - projection bias and the empathy gap (a minimal illustration of the resulting gap follows this list).
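
As referenced in the last item, here is a minimal sketch of how the say-do gap shows up when stated intention is placed next to reported past behavior. The responses are entirely invented for illustration; a real study would of course rely on proper sampling and question design.

```python
# A minimal, invented illustration of the say-do gap: stated future intention
# versus reported past behavior (last 30 days) for the same ten respondents.
stated_intention = ["yes"] * 8 + ["no"] * 2   # "Would you pay more for a sustainable product?"
reported_behavior = ["yes"] * 3 + ["no"] * 7  # "Did you actually do so in the last 30 days?"

say = stated_intention.count("yes") / len(stated_intention)    # what people claim
do = reported_behavior.count("yes") / len(reported_behavior)   # what they report doing
print(f"say: {say:.0%}  do: {do:.0%}  say-do gap: {say - do:.0%}")
# say: 80%  do: 30%  say-do gap: 50% -- headline-friendly intention data can
# overstate actual behavior by a wide margin.
```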

There is an endless number of books on how to write questionnaires and phrase questions, and plenty of people who think their freestyle beats proven methods (hello, Dunning-Kruger effect!) or who deliberately set out to torture the data. These are research fundamentals. Why do we let people who don't know them write questionnaires? This is one of the reasons I think it's terrible that we call untrained people conducting research "democratization": it paints us as oppressors, elitists, gatekeepers, but the reality is more like letting unlicensed people drive or do structural renovations without an architect. One could argue that in many cases it's better not to do the research at all than to do it badly and create false certainties.

“It’s ok, I watched a YouTube tutorial on how to run the project”

Then, when evaluating quality, people look at who conducted or paid for the study and, at most, at the sample size, while the technical side - question phrasing, sample distribution, opaque methodologies, and so on - which is where the dirt hides, goes unnoticed. What incentive is there for better studies to come out if that scrutiny never happens and the quality bar is minimal?

But now, how do we solve it?

In a very editorial, opinion-based view, the scourges of this century (so far, at least), greatly amplified by the context, seem to be confirmation bias and the Dunning-Kruger effect. Unlike the scourge of the 19th century and of the Romantic poets, these two have no vaccine; they depend on each person's critical thinking and on looking inward.

The problem is when we think that only other people are credulous and fallible, and we are not. The problem is thinking that misinformation exists only in politics, public health, investment recommendations, and crazy diets. The problem is that all these comfortable lies blind us to the things that are really happening and that impact our businesses, the brands under our responsibility, and our careers.

When we watch documentaries or hear stories about cult leaders like Osho and Jim Jones, or notorious charlatans like João de Deus, we get indignant that people didn't realize sooner they were dealing with ill-intentioned, lying, or abusive figures. But the cost of recognizing a lie grows the longer we spend believing it - a manifestation of the sunk cost fallacy. For the spell to be broken, someone needs to show us (and we need to be open to hearing!) that the emperor has no clothes.

Making better decisions involves recognizing our fallibility and stopping to listen to those who challenge us, not those who say what we want to hear and appeal to our worst instincts - with this in mind, we can reward the behaviors that lead us where we want to go. To close this edition, one last recommendation: Alain de Botton (the philosopher, from the School of Life) on Harry Stebbings' 20VC, in an interesting discussion about marketing that tangentially touches on the place of insights in the world and in this text.

Thank you for reading to the end and see you in the next edition!