It’s 11:30 am on a Tuesday. You are already in your third Teams meeting, and you are nowhere on your to-do list.
This time, you are in a product review meeting, attended by the product owner, engineering lead, customer success, sales, and marketing teams.
On your second screen is your slide deck. Slide three presents the insights from qualitative customer research you conducted: over 30 customer interviews, along with a hypothetical positioning statement to be tested within the team.
As you look through it, the product owner is walking the room through the Q1 Net Promoter Score (NPS) data. The sales rep jumps in with feedback on last week’s sales-qualified leads. Marketing follows, sharing results from a two-week A/B test on paid campaigns.
Everyone’s input is backed by research, each aiming to influence the product roadmap. Somewhere in the middle of all this chaos is you, quietly toggling through slides and screens, coffee in hand, observing fragments of research and watching insights compete for attention. Everyone has data. No one has alignment.
Across the teams I’ve worked with, one pattern has become increasingly clear: access to research is supposed to enable smarter and more effective work.
Instead, in large SaaS companies with multiple stakeholders, it has created a new problem: not a lack of data, but the challenge of synthesizing that data into one story. This is what democratized research looks like in practice.
The rise of democratized research
Democratized research simply means allowing stakeholders to participate in research, making it more accessible and inclusive. This means product managers, marketers, and sales reps can conduct user studies and derive actionable insights.
A decade ago, in the era of the service bureau, product managers had to request insights from the market research team, which gatekept access. Each request took four to six weeks and produced a huge report that sat on a shelf. Research was slow, expensive, and centralized.
Now, AI and self-serve research tools have changed that. Stakeholders can conduct their own research, but the shift has introduced new risks.
While working on an event tech product, I saw this firsthand during a feature launch. The feature allowed customers to match with others interested in similar events, pairing users based on shared interests.
On the surface, that was a brilliant idea.
- The product owner had adoption data showing that customers who attended multiple events had higher retention.
- The marketing team had campaign results revealing engagement in social connection messaging.
- The community management team had a folder of customer feedback requesting the feature.
- The founder was convinced this was the right move.
On paper, this looked like alignment. Every signal pointed in the same direction. Every stakeholder had come to the table with research. However, the engineering team raised concerns about the technical complexity of building a reliable matchmaking system.
What nobody did was synthesize all the information and ask one question: are we building this because the research validates the founder’s conviction, or because it solves a real customer problem? The difference matters because one is synthesis while the other is merely consensus.
Within a few months, adoption fell short of projections. Not because the research was wrong, but because nobody had synthesized it into the right question before the decision was made.
This is not only a startup problem. The same thing happened with Quibi, a short-form mobile video app. Despite raising $1.75 billion in funding, it shut down a few months after launch.
What they lacked was someone to synthesize the data and ask: why would someone pay for a separate app to watch short videos when Instagram, YouTube, and TikTok already lived on the same phone? As TechCrunch noted, Quibi had a compelling thesis but failed to translate it into product-market fit.
Different companies. Same failure.
The dangers of democratized research
These are the dangers of democratized research, and they can manifest in different ways at your company.
Poor insights
When many stakeholders have the capacity to conduct interviews and surveys and to interact directly with customers, there’s a risk that surface-level observations get mistaken for insight: findings that capture the activity of research but not the why. These poor insights lead to poor decision-making. This usually happens when stakeholders are not trained to conduct proper customer or market research, or when they conduct research to prove they are customer-centric.
The validation trap
Another danger of democratized research is that stakeholders validate their preconceived notions, undermining objectivity. Product managers may be seeking validation for a specific feature, and marketing may be seeking a push to launch a campaign. Without a trained stakeholder to synthesize data neutrally, democratization becomes a tool for validating pre-existing ideas.
Customer fragmentation
When sales, marketing, customer success, and product teams conduct their research without synthesizing it, the customer profile becomes fragmented: the marketing team sees a bargain hunter, while the product team sees a premium user. The organization is left without a shared understanding of the customer.
The HIPPO effect
HIPPO stands for the Highest Paid Person’s Opinion. According to Avinash Kaushik, who coined the term, when stakeholders need to make decisions without a strategic direction, they defer to the most powerful person in the room.
Another danger of democratized research is that stakeholders defer to team leaders or the product owner for decisions rather than relying on research.
Amazon’s Fire Phone is a good example of this in practice. Former members of the Fire Phone team described the development experience as building a phone for Jeff Bezos rather than the customer. The phone launched in 2014 and was discontinued within a year. The research existed.
What was missing was a synthesizer with enough authority to ask the one question the data couldn’t answer on its own: are we building this for the customer, or for the conviction of the most powerful person in the room?
Poor positioning
Another consequence of democratized research is poor positioning and messaging. Product marketers build positioning that tries to reflect every stakeholder’s research, and the positioning becomes a compromise document. The result is messaging that says everything and means nothing, and a product with no clear differentiation, because it was built for stakeholders, not for anyone in the market.
What these dangers share is a common root cause: the absence of strategic research synthesis.
Research synthesis is the process of drawing on multiple data sources to identify recurring insights. It reveals a unified story and answers what customers are collectively saying across all of the research. Research synthesis paints the bigger picture, revealing themes and patterns and providing strategic direction.
A great example of synthesis is Spotify’s 2025 Wrapped. One of the most anticipated releases of the year, it is built on research synthesis: aggregating each user’s listening history throughout the year to tell a personalized story.
Spotify’s engineering team built an algorithm to capture user data and identify patterns: most nostalgic days, most unusual listening day, biggest discovery day, and others. They used LLMs to analyze patterns but applied human review to stress-test and refine the output.
The result is not more data. It is a better story told from the data already there. That is synthesis.
Most SaaS organizations are not doing this. And the absence is not just an operational gap. It is a strategic one.
How to build synthesis into your organization
Start by creating a centralized research repository
Many product teams have fragmented data living in different locations, and without a central repository that fragmentation leads to misalignment. Research should be documented and stored correctly in one central database.
Before synthesis is possible, you need to take inventory. What did the win/loss interview say? What were the results from the last NPS survey? Document it accurately. The patterns you need are often revealed in the existing data.
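As an illustrative sketch (the field names are assumptions, not a prescribed schema), a single repository entry only needs to capture who gathered the research, how, when, what it said, and where the raw material lives:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResearchEntry:
    """One piece of research logged in the central repository."""
    source_team: str    # e.g. "sales", "marketing", "customer success"
    method: str         # e.g. "win/loss interview", "NPS survey", "A/B test"
    collected_on: date
    summary: str        # the finding, in one or two sentences
    raw_link: str       # pointer to the transcript, dashboard, or report
    tags: list[str] = field(default_factory=list)  # themes for later synthesis

# Example: entries from different teams finally living in the same place
repository = [
    ResearchEntry("product", "NPS survey", date(2025, 3, 31),
                  "Detractors repeatedly cite onboarding friction.",
                  "https://wiki.example.com/nps-q1", tags=["onboarding"]),
    ResearchEntry("sales", "win/loss interview", date(2025, 4, 2),
                  "Lost deal: buyer wanted a native CRM integration.",
                  "https://wiki.example.com/win-loss-42", tags=["integrations"]),
]
```

Even a spreadsheet with these columns works; the point is that every team logs research in the same shape, in the same place.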
Atomize your research
This concept, developed by Daniel Pidcock, means breaking research down into its basic components.
The components are categorized as experiments, facts, insights, and conclusions: what did the team do, what did it observe, what does it mean, and what will the team do next? Atomizing research helps stakeholders categorize fragmented data and provides a unified understanding.
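As an illustrative sketch (not part of the original framework’s tooling), atomized research can be modeled as small linked records so that every conclusion traces back to the facts and experiments behind it:

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    kind: str       # "experiment", "fact", "insight", or "conclusion"
    text: str
    supports: list[int] = field(default_factory=list)  # indices of the atoms this one builds on

atoms = [
    Atom("experiment", "Interviewed 12 churned customers in Q1"),
    Atom("fact", "9 of 12 mentioned the missing CRM integration", [0]),
    Atom("insight", "Integration gaps are driving churn among mid-market accounts", [1]),
    Atom("conclusion", "Scope a CRM integration spike for next quarter", [2]),
]

def trace(index: int) -> list[str]:
    """Walk a conclusion back to the facts and experiments behind it."""
    atom = atoms[index]
    chain = [f"{atom.kind}: {atom.text}"]
    for parent in atom.supports:
        chain = trace(parent) + chain
    return chain

print("\n".join(trace(3)))  # prints the full experiment -> conclusion chain
```

Linking atoms this way is what lets a later synthesis workshop ask where a conclusion’s evidence actually came from.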
Run synthesis workshops
These workshops help align stakeholders and interpret existing data to develop insights. Kata Kaplan teaches three formats for research alignment workshops: the Interactive Gallery Walk, the Insight Discovery Session, and the Assumption Comparison Session.
The Interactive Gallery Walk invites stakeholders into a room where research evidence is displayed as exhibits; they review the exhibits and then discuss the central takeaways and patterns. The Insight Discovery Session has stakeholders review research evidence together to identify themes and connections. The Assumption Comparison Session starts with stakeholders stating their assumptions, then comparing research insights against those assumptions. The main aim of all three workshops is to review research and identify patterns.
Identify paradoxes
The instinct is to look for similarities, but there is danger in ignoring contradiction. If one study reveals that customers want integration with existing tools while another says they want a single unified system, that may seem contradictory. Synthesis requires you to find the why rather than simply picking a side; digging into the contradiction reveals what the customer actually needs.
AI as a tool for strategic synthesis
Strategic synthesis requires time. AI helps reduce the manual effort, allowing stakeholders to focus on higher-value strategic work and on reviewing the output.
Using AI such as Claude, you can cluster feedback, identify themes, summarize findings, and translate insights. This can be done with a three-step process: extraction, synthesis, and stress-testing of results, sketched in code after the list below.
- Extraction: AI scans through the data sources for evidence relevant to a framing question, for example, ‘Why are we losing customers to a particular competitor?’ or ‘Why are marketing-qualified leads not converting into sales-qualified leads?’
- Synthesis: AI then groups the extracted evidence into patterns and themes.
- Stress-testing: Don’t just accept the hypothesis AI has posited. Critique the response by generating counterarguments, for example, by asking, ‘Where does this lack credibility?’ The real value is in iteration, not in single use. Building this into a repeatable system lets AI handle the mechanical passes, saving you time and shifting your role from operator to overseer.
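Here is a minimal sketch of that loop, assuming access to the Anthropic Python SDK and an exported feedback file; the file name, prompts, and model choice are illustrative, not a prescribed workflow:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative input: feedback exported from the central research repository
with open("q1_feedback_export.txt") as f:
    feedback = f.read()

def ask(prompt: str) -> str:
    """Send one prompt to Claude and return the text of the reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in whichever model you have access to
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# 1. Extraction: pull evidence relevant to a framing question
evidence = ask(
    "From the customer feedback below, extract every statement relevant to the question "
    "'Why are we losing customers to competitors?'. Quote the statements verbatim.\n\n" + feedback
)

# 2. Synthesis: group the extracted evidence into themes
themes = ask(
    "Group these statements into recurring themes and summarize each theme in one sentence:\n\n" + evidence
)

# 3. Stress-testing: critique the synthesis instead of accepting it
critique = ask(
    "Here is a synthesis of customer feedback:\n\n" + themes +
    "\n\nWhere does this synthesis lack credibility? List counterarguments and the evidence "
    "needed to confirm or reject each theme."
)

print(themes, critique, sep="\n\n")
```

Each pass feeds the next, and the stress-testing prompt is what keeps the loop from turning into another validation trap.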
Tools and principles are helpful, but synthesis is anchored in the questions you ask before you begin. In my current role as a product marketing manager at Dishpal AI, these are the three questions I ask before any synthesis.
1. Whose problem are we solving? Not the founder’s or a response to a competitor. It has to be a customer’s pain point.
2. Are we synthesizing to validate an assumption or to discover new insights? This helps to avoid the validation trap.
3. What are the contradictions in the data? This helps me to dig into the why instead of the obvious similarities.
Who owns strategic synthesis?
Strategic synthesis is the product marketing manager’s responsibility.
You sit at the intersection of product, sales, customer success, and marketing. You understand the customer more deeply than any team does. You are the one person in the organization with the vantage point to look across data sources and competing insights, and give strategic direction.
The PMM who owns synthesis owns the strategic narrative of the organization. The question is not whether your organization needs someone to do this. It does. The question is whether you step into that role or leave it open for someone louder to define the narrative for you.
