In my final year of university, I was a Teaching Assistant for an introductory business course. First-year students were required to build a new venture from scratch. They had to validate the market, analyze competitors, define an ICP (ideal customer profile), and present a business plan.
I graded dozens of these reports. They were thoughtful. Thorough. Packed with research. And yet I found myself writing the same two words across the margins over and over again.
So what?
Students would spend pages explaining market trends or summarizing customer research. They clearly worked hard. But without explicitly stating what those findings meant for their strategy, the work lost its punch. The research sat there, disconnected from a decision.
That was the year I became a “so what” person.
“So what” meant this: What do we need to change in our strategy based on this information, or how has this confirmed that we are on the right track?
When I entered product marketing, I assumed everyone thought this way.
They do not.
Most PMM work stops one sentence too early
Over time, I started noticing a pattern. Persona updates would be shared after customer interviews. A new pain point would be identified. The document would be polished and circulated.
And then it would stop.
No clear articulation of what this meant for messaging. No recommendation on whether value propositions should shift. No proposal to test this pain point in ads or landing pages. No flag to product about whether roadmap priorities had become more or less relevant.
The research was strong. The implication was missing.
Campaign reports followed a similar pattern. Metrics would be compared to benchmarks. Open rates were good or bad. Conversion rates were up or down.
But why did we over- or underperform? Was the gap in open rate or in conversion? Did we book demos that never converted to pipeline? Was the ICP misaligned? Was the message off? Was there a broader market dynamic we ignored?
Most reports end at performance. Few end with a decision.
The same happens in leadership meetings. A feature launches. Adoption hits 32 percent. That number is shared confidently.
But 32 percent compared to what expectation? Does that level of adoption change retention, expansion, or usage depth? Does it justify additional investment? Does it signal that positioning needs refinement?
Most PMM work is thorough.
It just stops one sentence too early.
What “so what” really means in product marketing
Answering “so what” is not about adding a stronger summary slide. It is about owning interpretation.
A strong product marketer’s job is not to report what happened or just execute what other teams request. It is to form a clear perspective on what our ICP needs and ensure that perspective shapes business decisions.
If you go to your manager or your executive team with only the details, you are asking them to answer “so what” without the context you have.
You are closer to the inputs. You ran the interviews. You reviewed the win-loss calls. You built the messaging test. You understand the assumptions.
When you withhold the implication, even unintentionally, you leave leadership to connect the dots without your insight. That lowers the quality of the decision.
The bigger risk is silence, not imperfection.
Silence feels safe. It feels neutral. It feels like you are simply reporting facts. But neutrality often means your thinking never shapes the outcome.
The skills behind answering “so what”
Answering “so what” is not a personality trait. It is a discipline.
Moving beyond perfect experiments
If we are testing messaging, a clean A/B test across email, in-product prompts, or landing pages is the simplest way to isolate impact. But product marketing rarely operates in perfectly controlled environments.
Traffic volumes may be too low for statistical significance. Sales cycles may stretch across quarters. Multiple initiatives may be running at once. Attribution models may not capture nuance.
In those cases, the alternative is not to say nothing. The alternative is to build a case.
That means:
- Looking for a directional signal across multiple touchpoints
- Comparing performance across segments, not just aggregate numbers
- Pulling qualitative insight from sales calls or customer interviews
- Stating clearly what you believe is happening and why
You may not have airtight causality. But you can articulate a hypothesis, your confidence level, and what data would change your mind.
Waiting for perfect proof often means reacting too late.
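To make the significance problem concrete: at typical PMM traffic volumes, even a large relative lift can fail a standard statistical check. Below is a minimal sketch using a two-proportion z-test with hypothetical numbers (500 visitors per variant, a 40 percent relative lift in conversions); the figures are illustrative, not from any real campaign.

```python
# Hypothetical messaging A/B test at realistic PMM traffic volumes.
# Variant B converts 40% better in relative terms, yet a simple
# two-proportion z-test cannot call the result significant at the 5% level.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 6.0% vs 8.4% conversion: a big lift, but the sample is too small
z, p = two_proportion_z(conv_a=30, n_a=500, conv_b=42, n_b=500)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # p > 0.05: not significant
```

This is exactly the situation where building a case from directional signals, segment comparisons, and qualitative insight beats waiting for a clean statistical verdict.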
Separating signal from noise
Not every metric deserves equal weight.
If a campaign overperformed on open rate but underperformed on booked pipeline, the “so what” is not that the subject line was strong. It is that curiosity did not translate into action, followed by a hypothesis about why.
If adoption is high but expansion is flat, the “so what” is not that the feature is popular. It is investigating why adoption is not driving expansion (which may require customer calls or sales team insights) and recommending what to do next.
Answering “so what” requires asking which metric actually ties to the business objective we set at the beginning.
Without that anchor, interpretation becomes arbitrary.
Surfacing implications without self-filtering
One of the most important lessons I learned came after a price increase. We noticed upgrades slowing down. My interpretation was that sticker shock was creating friction. I proposed offering short-term coupons to test whether reducing that friction would accelerate upgrades.
When I was initially told we could not support that, I almost dropped it. I assumed the tradeoff had already been decided.
My manager pushed forward with product to build the capability anyway. The coupons performed well. They unlocked revenue we would have otherwise delayed.
My mistake was not forming the interpretation. It was prematurely filtering it.
Your job is not always to make the final tradeoff decision. Depending on your level, that may sit with your manager or with leadership. But your job is to surface the implication clearly.
If you self-censor before the idea reaches the right forum, you remove optionality from the business.
Making recommendations a habit
I believe you should almost always have a recommendation. If you are not used to providing one, that is normal. It is a muscle.
Each time you articulate a recommendation, even if your manager disagrees, you learn how to refine your thinking. You learn what factors you underweighted. You sharpen your judgment.
If you never take a stance, you never get that feedback loop.
Again, the bigger risk is silence, not imperfection.
How I show up for meetings with leadership
Any time I prepare for an executive meeting, I start with one question: What do I want them to take away?
From there, I build the data to support that takeaway.
In many cases, they will not ask to see every data point. But I am prepared if they do. Leadership is juggling far more than product marketing. They do not live in the details the way we do. Questions that seem obvious to you may not be obvious to them.
In Slack, I start with a TL;DR. The takeaway comes first. The data lives in the thread.
I also explicitly call out assumptions and risks. Many decisions are made with incomplete data. Stating assumptions does two things. It shows you have considered the risk, and it invites others who may have additional context to weigh in.
KPIs are defined before execution begins. Success criteria are aligned early. That way, when results come in, the implication is clearer.
The structure is consistent.
Start with the so what. Then explain the why.
The discomfort of perspective
Taking a stance is uncomfortable.
Sometimes your interpretation highlights that a bet did not pay off. A new ICP did not convert. A feature launch did not resonate. A campaign underperformed.
It can feel like you are calling out mistakes.
I approach these moments neutrally. The goal is not to assign blame. It is to extract learning.
If a new ICP did not work, why? Was the pain point we were targeting weaker than expected? Was the message misaligned? Was the sales motion not ready? And what is the recommended next step?
When you frame conclusions as learning and forward motion, not criticism, you protect relationships while still elevating insight.
There is rarely a truly wrong interpretation of data. There is often missing context. And when you offer your perspective, others bring additional context forward.
That exchange sharpens the conclusion.
Silence sharpens nothing.
The cost of stopping at reporting
When PMMs stop at reporting, two things happen.
First, they limit their own growth. If you consistently show up with updates but no perspective, you train leadership to see you as an executor, not a strategic partner.
The PMM who gets asked for perspective is the one who consistently offered it before it was requested.
Second, the business suffers. Leaders make decisions every day. If your insight is buried in a document or never articulated, those decisions move forward without it.
If you have ever watched a leader make a call and immediately felt hesitation based on work you have done, ask yourself whether you clearly passed that insight upward.
You sit at the intersection of customer insight, product detail, and market feedback.
If you do not compile and interpret that signal, no one else will do it at the same depth.
From updates to influence
In university, writing “so what” in the margins was about grading.
In product marketing, it is about leadership.
Early in your career, you are rewarded for completeness. You are praised for polished decks, detailed personas, and thorough reports.
As you grow, you are valued for your perspective.
A strong product marketer’s job is not to report what happened or just execute what other teams request. It is to form a clear perspective on what our ICP needs and ensure that perspective shapes business decisions.
If you want to level up, go one sentence further.
Do not wait for someone to ask, “So what?”
Answer it first.
