The new PMM stack: How AI fits across research, positioning, and GTM

Most (if not all) PMMs are using AI. But arguably, only a few are actually transforming how they work with it.

Today, most PMMs use AI as a faster Google, a sharper Grammarly, or a content assistant.

But that’s not the real opportunity.

The PMMs who figure out how to embed AI across research, positioning, and GTM are going to outpace everyone else. Not because they work harder, but because they operate on a completely different level of insight and speed.

Let’s break it down.

Research: From static slides to living intelligence

Most competitive intel decks are outdated the moment they’re finished.

Markets move too fast. Competitors pivot weekly. Messaging evolves daily.

Yet most teams still rely on point-in-time decks, manual research, and static battlecards.

That model is dead.

AI turns research from a point-in-time exercise into a continuous signal engine. Instead of manually gathering insights, you build systems that are always learning and improving.

Real case study: Competitive intel AI agent

One of the most powerful AI implementations I’ve built is a Competitive Intelligence Agent.

This wasn’t a single ChatGPT prompt. It was a multi-source intelligence system built on Glean Enterprise, designed to replicate how a top-tier PMM thinks at scale.

How it was built

Step 1: Data aggregation layer

We connected structured and unstructured data sources:

  • Review platforms (G2, Capterra, TrustRadius) 
  • Online communities (Reddit threads, niche forums) 
  • Gong call transcripts 
  • Analyst reports (Gartner, Forrester, IDC) 
  • Competitor web pages (scraped weekly for messaging changes) 

This created a centralized dataset of thousands of data points across voice-of-customer, competitor claims, and real sales conversations.

Step 2: AI processing layer

Using LLM workflows, the system:

  • Tagged recurring themes (e.g. “slow implementation”, “poor UX”, “hidden costs”) 
  • Clustered objections from sales calls 
  • Mapped competitor claims vs. actual customer sentiment 
  • Identified contradictions (what competitors say vs. what customers experience) 
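A minimal sketch of what this processing layer can look like. The real system used LLM workflows; the keyword-rule tagger, theme list, and snippets below are simplified illustrations, not the production implementation:

```python
# Tag raw snippets (reviews, call excerpts) with recurring themes, then
# group them into clusters per theme. Themes and snippets are illustrative;
# the production system used LLM calls instead of keyword rules.
from collections import defaultdict

THEMES = {
    "slow implementation": ["implementation took", "rollout", "months to deploy"],
    "poor UX": ["confusing", "clunky", "hard to use"],
    "hidden costs": ["surprise fees", "overage", "hidden cost"],
}

def tag_snippet(snippet: str) -> list[str]:
    """Return every theme whose keywords appear in the snippet."""
    text = snippet.lower()
    return [theme for theme, kws in THEMES.items()
            if any(kw in text for kw in kws)]

def cluster_by_theme(snippets: list[str]) -> dict[str, list[str]]:
    """Group snippets under each theme they were tagged with."""
    clusters = defaultdict(list)
    for s in snippets:
        for theme in tag_snippet(s):
            clusters[theme].append(s)
    return dict(clusters)

snippets = [
    "The rollout dragged on and implementation took six months",
    "UI is clunky and confusing for new team members",
    "We got hit with surprise fees after the first quarter",
]
clusters = cluster_by_theme(snippets)
```

Swapping the keyword rules for an LLM classification call changes the tagging step, not the shape of the pipeline.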

Step 3: Insight generation layer

Outputs were structured into:

  • Dynamic competitor profiles (updated weekly) 
  • Real-time battlecards 
  • Trigger alerts when messaging or pricing changed 
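The trigger alerts can be as simple as comparing this week’s scraped page text against last week’s snapshot. A rough sketch; the page copy and similarity threshold below are illustrative assumptions:

```python
# Flag messaging drift by comparing two snapshots of a competitor page.
# The 0.9 threshold and example copy are illustrative, not tuned values.
import difflib

def messaging_drift(old_text: str, new_text: str, threshold: float = 0.9) -> bool:
    """Return True when the snapshots differ enough to warrant an alert."""
    similarity = difflib.SequenceMatcher(None, old_text, new_text).ratio()
    return similarity < threshold

last_week = "The fastest analytics platform for growing teams."
this_week = "AI-native analytics built for the enterprise."

drifted = messaging_drift(last_week, this_week)
```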

What it produced (real outputs)

  • “Top 5 weaknesses” per competitor based on real customer feedback 
  • Feature gap heatmaps vs. your product 
  • Objection frequency scoring (e.g. “pricing concerns mentioned in 37% of deals”) 
  • Messaging drift detection (when competitors shift positioning) 
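The frequency score is just the share of deals where a tagged theme surfaced at least once, the kind of stat behind “pricing concerns mentioned in 37% of deals.” A sketch with illustrative deal data:

```python
# Objection frequency scoring: percentage of deals whose tagged objections
# include a given theme. Deal data here is made up for illustration.
def objection_frequency(deals: list[list[str]], theme: str) -> float:
    """Share of deals (as a percentage) where the theme appears."""
    hits = sum(1 for objections in deals if theme in objections)
    return round(100 * hits / len(deals), 1)

deals = [
    ["pricing concerns", "poor UX"],
    ["slow implementation"],
    ["pricing concerns"],
    ["hidden costs"],
]
score = objection_frequency(deals, "pricing concerns")  # 50.0 on this data
```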

And most importantly:

Trap-setting questions for sales

Instead of giving reps generic battlecards, we armed them with guided discovery:

  • “How important is real-time visibility vs delayed reporting?” 
  • “Have you experienced limitations scaling across teams?” 
  • “How long did your last implementation take?” 

These weren’t random.

They were derived directly from patterns across hundreds of data points.

Measurable impact

  • 30–40% reduction in sales ramp time (new reps had instant access to real insights) 
  • Higher deal control – reps led conversations instead of reacting 

Why this matters

This isn’t just better intel. It changes how you sell.

You’re no longer reacting to competitors. You’re guiding buyers to discover your competitors’ weaknesses for themselves.

Positioning: AI can generate messaging, but it can’t feel it

AI can write positioning. It can generate value props. It can spin up messaging frameworks in seconds.

And most of it sounds… fine.

That’s the problem. “Fine” doesn’t win deals.

What AI is great at

  • Synthesizing large volumes of customer intel
  • Identifying patterns across personas and industries 
  • Generating multiple positioning angles quickly 

Where it falls short

AI doesn’t:

  • Sit in sales calls and feel tension 
  • Understand emotional triggers behind decisions 
  • Know when messaging lands vs just sounds good 

Real case study: Messaging iteration loop

Instead of treating positioning as a one-time exercise, we turned it into a live experimentation engine.

Step 1: AI-generated positioning angles

Using AI, we generated 5 distinct positioning narratives:

  1. Efficiency-led 
  2. Cost-saving 
  3. Risk reduction 
  4. AI innovation 
  5. Ease of use 

Each had:

  • Core value prop 
  • Supporting proof points 
  • Persona-specific variations 

Step 2: Structured testing framework

We didn’t debate internally. We tested in-market across multiple channels:

Sales:

  • SDRs and AEs each ran different messaging angles in discovery and demos 
  • Call transcripts were analyzed for engagement signals (talk time, follow-up questions, objections) 

Marketing:

  • Paid campaigns segmented by positioning angle 
  • CTR, conversion rates, and engagement were tracked per message 

Website:

  • Homepage variants rotated messaging themes 
  • Heatmaps and session recordings tracked behavior 

Step 3: AI-driven analysis

AI aggregated performance data across:

  • Sales conversations 
  • Ad performance 
  • Website engagement 

It identified:

  • Which message drove the highest conversion 
  • Which personas responded to which angle 
  • Where messaging broke down 
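The rollup behind that analysis can be sketched as a per-angle conversion rate pooled across channels. The numbers and channel counts below are invented for illustration, not the study’s data:

```python
# Combine per-angle (conversions, exposures) results from sales, paid,
# and web into one pooled conversion rate and surface the winner.
# All figures are illustrative.
def winning_angle(results: dict[str, list[tuple[int, int]]]) -> str:
    """results maps angle -> [(conversions, exposures), ...] per channel."""
    rates = {}
    for angle, channels in results.items():
        conversions = sum(c for c, _ in channels)
        exposures = sum(e for _, e in channels)
        rates[angle] = conversions / exposures
    return max(rates, key=rates.get)

results = {
    "risk reduction": [(12, 80), (45, 400), (30, 300)],  # sales, paid, web
    "AI innovation":  [(6, 90), (20, 410), (18, 310)],
}
winner = winning_angle(results)
```

A real analysis would also check per-persona splits and statistical significance before declaring a winner.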

Results

  • 2.5x increase in demo conversion rates on the winning message 
  • 20% increase in pipeline velocity (faster movement through stages) 
  • Clear identification that “risk reduction” messaging outperformed “AI innovation” by a wide margin 

Key insight

The internal team initially believed “AI innovation” would win.

The market proved otherwise.

The takeaway

AI helps you explore the landscape. But only humans can decide:

  • What actually resonates emotionally? 
  • What creates urgency? 
  • What makes someone say, “This is exactly what we need”? 

Because positioning isn’t just words. It’s psychology.

GTM: From campaign execution to continuous optimization

Most GTM strategies still operate like this: define ICP; build messaging; launch campaigns; wait and see what happens.

It’s slow. It’s rigid. And it leaves too much on the table.

What AI changes

AI turns GTM into a real-time feedback loop.

Not quarterly optimization. Daily iteration.

Real case study: AI-Powered GTM engine

A B2B startup that I advise implemented a fully AI-driven GTM system that connected ICP discovery, outreach, and optimization into one continuous loop.

Step 1: ICP expansion and discovery

Using tools like Clay and Apollo, they moved beyond static ICP definitions.

They built dynamic ICP models based on:

  • Tech stack signals (what tools companies were using) 
  • Hiring trends (e.g. surge in specific roles) 
  • Growth indicators (funding rounds, expansion signals) 

AI then:

  • Identified lookalike companies 
  • Scored accounts based on likelihood to convert 
  • Continuously refreshed target lists 
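A minimal sketch of that scoring step, assuming simple additive weights per signal. Tools like Clay and Apollo use richer models; the signal names, weights, and accounts here are invented for illustration:

```python
# Signal-weighted account scoring: each present signal adds its weight,
# and accounts are ranked by total score. Weights are illustrative.
WEIGHTS = {
    "uses_complementary_tool": 0.4,   # tech stack signal
    "hiring_surge": 0.35,             # hiring trend signal
    "recent_funding": 0.25,           # growth indicator
}

def score_account(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal present on the account."""
    return round(sum(w for s, w in WEIGHTS.items() if signals.get(s)), 2)

accounts = {
    "acme": {"uses_complementary_tool": True, "recent_funding": True},
    "globex": {"hiring_surge": True},
}
ranked = sorted(accounts, key=lambda a: score_account(accounts[a]), reverse=True)
```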

Step 2: Messaging pressure testing

Instead of one campaign…

They launched multiple micro-campaigns simultaneously:

  • LinkedIn outbound sequences 
  • Cold email campaigns 
  • Landing pages tied to each persona + message 

Each variation tested:

  • Hook 
  • Pain point framing 
  • Value prop 

Step 3: Real-time optimization engine

AI analyzed:

  • Reply rates 
  • Positive vs negative responses 
  • Conversion to meetings 
  • Objection patterns 

It then:

  • Automatically deprioritized low-performing segments 
  • Highlighted high-converting ICP clusters 
  • Recommended messaging adjustments 
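The deprioritization rule can be sketched as a reply-rate floor with a minimum-volume guard, so low-traffic segments aren’t paused on thin evidence. Thresholds and segment data below are illustrative assumptions:

```python
# Pause segments whose reply rate falls below a floor, skipping segments
# with too few sends to judge. Thresholds and data are illustrative.
def deprioritize(segments: dict[str, tuple[int, int]],
                 floor: float = 0.03, min_sends: int = 100) -> list[str]:
    """segments maps name -> (replies, sends); return segments to pause."""
    paused = []
    for name, (replies, sends) in segments.items():
        if sends >= min_sends and replies / sends < floor:
            paused.append(name)
    return paused

segments = {
    "fintech-vp-eng": (18, 300),   # 6% reply rate: keep
    "retail-it-mgr": (4, 250),     # 1.6%: pause
    "healthtech-ciso": (1, 40),    # too few sends to judge
}
paused = deprioritize(segments)
```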

Results

  • 3x increase in qualified meetings 
  • 40% improvement in reply rates 
  • 25% reduction in cost per opportunity 
  • Discovery of a new high-converting ICP segment that the team hadn’t previously targeted 

Key shift

GTM stopped being campaign-based. It became system-based.

Always running. Always learning.

Why this matters

GTM is no longer about launching the perfect campaign.

It’s about launching fast… and learning faster.

The part everyone gets wrong: AI ≠ Replacement

Here’s the uncomfortable truth: AI will expose average PMMs.

Because it can already produce basic messaging, generic personas, and surface-level research.

So if that’s where you operate… you’re replaceable.

But here’s what AI can’t replace: strategic judgment, storytelling, and emotional intelligence.

People don’t buy software because of perfect feature lists, clean positioning frameworks, and AI-generated copy.

They buy because they feel understood, they trust the narrative, and they see themselves in the problem.

The human layer in the AI stack

The best PMMs will use AI to:

  • Get insights faster 
  • Test ideas at scale 
  • Eliminate manual work 

So they can spend more time on what actually matters:

  • Crafting narratives that connect
  • Enabling sales to tell better stories
  • Building trust with buyers

Because at the end of the day, people don’t buy from robots. They buy from people who understand them.

Final thoughts

The new PMM stack isn’t: “Use AI here and there.”

It’s: AI for signal, AI for speed, and AI for scale.

But always: Human for meaning.

PMMs who win won’t be the ones using AI the most. They’ll be the ones who know exactly where it should (and shouldn’t) be used.
