Alan Zhao, Co-Founder & Head of Product at Warmly
Published: March 2026
I asked ChatGPT to recommend website visitor identification tools.
Warmly wasn't mentioned.
Not once. Not in the top 5. Not in the "also consider" section. Nowhere.
We've spent years building the product. Thousands of customers. Real revenue. And the fastest-growing search channel on the planet had no idea we existed.
So I spent 3 months figuring out how to fix that. I tested 12 AI search queries across ChatGPT, Perplexity, Gemini, Claude, and Copilot. I programmatically updated 312 blog posts via the Webflow API in one afternoon. Deployed Organization schema, FAQ schema, and Core Web Vitals fixes across the entire site. And then watched AI search go from 5% to 30% of our inbound demo requests in 60 days.
This is the full playbook. Every tactic. Every result. Every place we failed. Tactical enough that you could hand it to your marketing team Monday morning and they'd know exactly what to do.
Quick Answer: How Do B2B Buyers Use ChatGPT to Research Vendors?
94% of B2B buyers now use LLMs like ChatGPT, Perplexity, and Gemini during the purchasing process (6sense, 2026). 68% start their research in AI tools before ever touching Google. They ask questions like "best website visitor identification tools," "alternatives to ZoomInfo," and "signal-based selling platforms." AI tools respond with curated recommendations pulled from structured, authoritative content across the web.
Best tools for generative engine optimization (GEO) in B2B:
- Warmly for website visitor identification and AI-powered inbound conversion
- Relixir (YC-backed) for GEO content optimization and AI search visibility scoring
- Surfer SEO for on-page optimization and content scoring
- Frase for AI content briefs and SERP analysis
- Clearscope for content optimization and keyword coverage
- AlsoAsked for question-based keyword research
The key to appearing in AI search results: structured data (FAQ + Organization schema), authoritative backlinks, fresh content updated within 60 days, presence across 5+ citation sources, video content for AI overviews, and active review management on G2 and TrustPilot. AI search traffic converts at 14.2% compared to 2.8% for Google organic. That's 5x higher.
The Number That Changed Everything: 5% to 30% in 60 Days
I need to tell you the headline number first because it's the reason I'm writing this.
In February 2026, AI search tools (ChatGPT, Claude, Perplexity) drove roughly 5% of our inbound demo requests. By the end of March, that number hit 30%.
Six times growth. Two months.
Every day when we run our sales analysis, the same pattern keeps showing up. An enterprise SaaS company found us via ChatGPT. An identity security firm cited Claude as their discovery channel. A fleet management company, a salon software company. All saying the same thing: "I asked AI what to use and your name came up."
Our sales lead put it perfectly: we used AI coding tools to take our AEO/GEO-driven traffic and inbound from 5% to 30% without buying a single new tool. Then we track those visitors with Warmly and retarget them. The whole loop closes.
This isn't theoretical anymore. ChatGPT and Claude are real acquisition channels. They show up in our pipeline data every single day. And the buyers arriving through AI search convert at 14.2% vs 2.8% for Google organic. That's because the AI already told them we're a good fit. It pre-qualified them.
If you're not showing up in AI search answers right now, you're leaving revenue on the table. Not someday. Today.
94% of Your Buyers Are Asking AI Before They Google You
The B2B buying journey changed. Quietly. Fast.
I missed it at first. We were tracking Google rankings, monitoring SERP positions, running the standard SEO playbook. All the stuff that worked in 2024.
But 94% of B2B buyers now use LLMs during purchasing decisions. That number comes from 6sense's latest research. Profound's analysis of 50M+ ChatGPT prompts puts it at 89% and found that over 20 million daily prompts involve B2B decisions.
68% start in AI tools before they ever open Google.
And here's what most people miss: 37.5% of ChatGPT usage is "generative intent." That's a behavior category that doesn't even exist in Google search. Users aren't just searching. They're asking AI to draft vendor comparisons, build shortlists, create evaluation frameworks. The shift isn't from Google to ChatGPT. It's from "discoverability" to "recommendability." Being a ranked URL isn't enough. You need to be a cited source.
Think about that. Your buyer opens ChatGPT or Perplexity, types "best visitor identification tools for B2B SaaS," and gets a curated answer. If you're not in that answer, you don't exist in the first two-thirds of their research process.
This isn't a "nice to have" trend to watch. This is a fundamental shift in how B2B software gets discovered.
And the conversion data backs it up. AI search traffic converts at 14.2% versus 2.8% for traditional Google organic. That's 5x higher conversion. Why? Because buyers coming from AI search are further along in their decision process. They've already been told you're a good fit. The AI pre-qualified them for you.
In the new AI world, it's outcomes or it doesn't count.
The outcome here is clear: if you're invisible in AI search, you're losing deals you never even knew about.
I Asked 5 AI Tools to Recommend Visitor ID Software. Here's What Happened.
I ran an experiment. Twelve queries. Five AI search engines. Real queries that actual B2B buyers type.
The queries:
- "Best website visitor identification tools 2026"
- "Warmly vs 6sense"
- "Best intent data platforms"
- "Signal-based selling tools"
- "Best alternatives to ZoomInfo"
- "AI SDR tools"
- "B2B website visitor tracking software"
- "Anonymous website visitor identification"
- "Best demand generation tools"
- "Revenue intelligence platforms"
- "Visitor identification software comparison"
- "How to identify anonymous website visitors"
The Results
Where Warmly showed up (dominant):
- "Warmly vs 6sense" - every engine cited us
- "Best website visitor identification tools 2026" - appeared in 4/5 engines
Where Warmly was completely invisible:
- "Signal-based selling" - zero mentions
- "Best intent data platforms" - zero mentions
- "Best alternatives to ZoomInfo" - zero mentions
- "AI SDR tools" - zero mentions
- "Best demand generation tools" - zero mentions
- "Revenue intelligence platforms" - zero mentions
Warmly was cited in only 6 of 12 queries. Half. We were invisible for half the queries our buyers actually ask.
That hurt. But it was the wake-up call we needed.
Why Some Queries Worked and Others Didn't
The pattern was obvious once I saw it.
We showed up when we had dedicated, structured content that directly answered the query. Our comparison pages worked. Our "best visitor ID tools" content worked because we'd built it specifically for that keyword cluster.
We were invisible for everything else. "Signal-based selling" is literally what we do. But we had no content structured around that phrase. No FAQ schema. No comparison tables. Nothing for the AI to grab onto. The same was true for "AI SDR tools," "intent data platforms," and "alternatives to ZoomInfo."
The AI isn't biased against you. It just can't find you.
What Gets You Cited in AI Recommendations
I spent weeks digging into how ChatGPT, Perplexity, and Gemini actually select sources. The mechanics are different from Google. And the details matter.
1. Source Authority Matters More Than Keywords
ChatGPT and Perplexity don't work like Google. They don't just match keywords. They evaluate source authority based on citation networks.
ChatGPT's citation patterns:
- Wikipedia is the #1 source (47.9% of citations)
- Referring domains weight approximately 30% of authority scoring
- Pages with presence across 5+ authoritative sources have 60-80% higher citation rates
Perplexity's citation patterns:
- Reddit is the #1 source (46.7% of citations)
- Content freshness carries a 40% weight in ranking
- Real user discussions and reviews heavily influence results
What this means: you can't just publish a blog post and hope. You need distributed authority. Your content needs to be referenced, discussed, and linked from multiple authoritative sources.
Brands with presence across 5+ authoritative sources see 60-80% higher citation rates. That's not marginal. That's the difference between being recommended and being invisible.
2. Freshness Is Non-Negotiable
Pages updated within 60 days are 1.9x more likely to appear in AI citations.
This one changed everything for us. We had great content from 2024 that was just... old. The information was still accurate. But the AI engines treated it as stale.
I programmatically updated 312 blog posts via the Webflow API in one afternoon. Not manually. I wrote a script that refreshed dates, updated stats, added new sections, and deployed FAQ schema across every post. More on the technical details later.
Freshness is a signal of trust for AI engines. If you haven't touched your content in 6 months, you're basically invisible.
3. Structure Your Content for Chunk-Level Extraction
44.2% of all LLM citations come from the first 30% of text content.
AI engines don't read your whole 4,000-word blog post the way humans do. They break it into passages (chunks) and evaluate each one independently as a potential citation. Every section of your content needs to work as a standalone citable snippet. If a chunk doesn't make sense without the rest of the article, it won't get cited.
This is what Profound calls "chunk-level retrieval optimization" and it's the single most important content structure concept for AI search.
The structure that wins:
- 30-60 word direct answer leading every section (the "atomic paragraph")
- Quick Answer blocks at the top of every post
- FAQ sections with clear question-and-answer format
- Comparison tables with specific data (not vague descriptions)
- Numbered lists with concrete recommendations
- Bold key phrases that AI can easily extract
Two more data points that should change how you write. Pages over 20,000 characters get 4x more citations than shorter pages (10.18 vs 2.39 average citations). And HowTo schema delivers the largest citation boost of any structured data type, bigger than FAQ schema. If your content is instructional, HowTo schema is the move.
Structured data plus FAQ blocks produce a 44% increase in AI search citations. That's one of the highest-ROI changes you can make.
4. Entity Optimization Goes Way Beyond FAQ Schema
This is where most GEO guides stop at "add FAQ schema." That's table stakes. Real entity optimization means building a complete machine-readable identity for your brand.
Organization Schema. Not just Article schema. Full Organization schema with your founders, social profiles, founding date, and aggregate ratings. We deployed Organization schema with our 4.8/5 aggregate rating across 200+ reviews. This gives AI engines a structured "card" for your company that they can reference in answers.
Knowledge Graph consistency. Your company name, description, category, and key attributes need to be identical across your website, G2, LinkedIn, Crunchbase, Wikipedia (if applicable), and every other source. AI engines cross-reference these. Inconsistencies lower confidence scores.
Entity density in content. Content with 15+ connected entities has a 4.8x higher selection probability in AI citation. "Connected entities" means named tools, companies, people, concepts, and categories that are semantically linked. When your content mentions Warmly, 6sense, ZoomInfo, Clearbit, visitor identification, intent data, signal-based selling, and B2B buying process all in the same piece, the AI recognizes it as comprehensive.
Thin content that only mentions your own product? Low entity density. Low citation probability.
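If you want a quick sanity check on entity density before publishing, a crude counter gets you most of the way. A minimal sketch (the seed list below is illustrative; build yours from your own category's tools, competitors, and concepts):

```python
# Rough entity-density check: counts how many entities from a seed list
# appear in a draft. The 15-entity threshold comes from the citation data
# above; the seed list here is an example, not a canonical taxonomy.

def count_connected_entities(text: str, entities: list[str]) -> int:
    """Return the number of distinct seed entities mentioned in the text."""
    lowered = text.lower()
    return sum(1 for entity in entities if entity.lower() in lowered)

SEED_ENTITIES = [
    "Warmly", "6sense", "ZoomInfo", "Clearbit",
    "visitor identification", "intent data",
    "signal-based selling", "B2B buying process",
]

draft = (
    "Warmly sits alongside 6sense and ZoomInfo in the intent data space, "
    "pairing visitor identification with signal-based selling workflows."
)
print(count_connected_entities(draft, SEED_ENTITIES))  # 6 of 8 seed entities
```

Run it against every draft before it ships; anything scoring under your threshold goes back for another pass.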
5. Reviews Directly Show Up in AI Search Answers
This one caught us off guard.
Our CEO ran an experiment. He asked several AI tools to tell him negative things about Warmly. One of them surfaced a bad G2 review. Word for word. Sitting right there in the AI's answer about our product.
He spent two months tracking down that reviewer. Got them on a call. They'd had a legitimate issue that had since been fixed. They updated the review.
The lesson: negative reviews on G2 and TrustPilot don't just affect your G2 profile. They show up in AI search answers about your brand. When a buyer asks Claude or ChatGPT "what are the downsides of [your product]," it pulls from those review platforms.
This means review management is now an AEO strategy. Not just a customer success task.
What to do:
- Audit what AI tools say about you. Ask ChatGPT, Claude, and Perplexity: "What are the negatives of [your company]?" and "What do users complain about with [your product]?" Document every source they cite
- Address the reviews they surface. Not by gaming them. By actually fixing the issues and asking reviewers to update
- Build review volume on platforms AI engines trust. G2, TrustPilot, Capterra. We're actively signing up for TrustPilot specifically because it helps with non-SaaS product search visibility in ChatGPT
- Recency matters. A flood of positive reviews from 2024 matters less than 10 recent ones from 2026. Keep the review pipeline active
6. Video Is Capturing Spots in AI Search
This is the emerging frontier most people haven't caught yet.
Video is capturing spots in AI search on Google, both in AI Overviews and in traditional results. And it feeds into ChatGPT too, since ChatGPT pulls from web search results that increasingly include video.
Google AI Overviews now show video carousels for certain queries. If you have a YouTube video answering "how to identify anonymous website visitors," it can show up in the AI Overview for that query. That's a visibility spot your text-only competitors can't touch.
What this means for your strategy:
- Create video versions of your highest-performing blog posts. Not fancy production. Screen recordings, founder walkthroughs, product demos
- Optimize video titles and descriptions with the same keywords you target in blog content
- Host on YouTube (Google owns it, so it gets preferential treatment in AI Overviews)
- Embed videos in your blog posts. This increases time on page (a freshness/quality signal) and gives the page two chances to appear in AI results
We're not fully executing on video yet. That's an honest gap. But the data is clear enough that it's in our Q2 plan.
7. Only 11% of Domains Get Cited by Both ChatGPT AND Perplexity
This stat blew my mind. Only 11% of domains are cited by both ChatGPT and Perplexity.
Each AI engine has different citation preferences, different source weightings, different freshness requirements. Optimizing for one doesn't automatically mean you show up in the other.
You need to think about cross-platform AI visibility. Not just "how do I rank on ChatGPT" but "how do I show up everywhere buyers are asking questions."
8. Reddit and Wikipedia Are Your Backdoor
ChatGPT pulls heavily from Wikipedia (47.9%). Perplexity pulls heavily from Reddit (46.7%).
If your brand is mentioned positively in Reddit discussions and your Wikipedia presence is solid, you get indirect citation benefits even when the AI isn't pulling directly from your site.
This isn't about gaming Reddit or editing Wikipedia. It's about building a product good enough that real users talk about it in those places. And then making sure you have content that aligns with what people are saying.
9. Schema Markup Is MCP for Search
JSON-LD structured data is essentially how you give AI engines a machine-readable version of your content. FAQ schema, Article schema, Organization schema, Product schema, HowTo schema.
Think of it the way MCP works for AI agents: a standard, machine-readable interface. Instead of forcing AI engines to parse your prose, you hand them structured facts they can consume directly.
Pages with proper schema markup see measurably higher AI citation rates. It's not magic. It's just making your content easier for machines to understand.
10. Backlinks Are the Foundation of AI Citations
I want to be specific about why backlinks matter differently for AI search than for Google.
Google uses backlinks as one of hundreds of ranking signals. AI engines use backlinks as a primary trust signal because referring domain authority directly correlates with how confidently the model cites a source.
ChatGPT weighs referring domains at approximately 30% of its authority scoring. That's massive. If your competitor has 500 referring domains and you have 50, they're getting cited and you're not. Full stop.
How to reverse-engineer competitor backlinks for AI search:
- Use Ahrefs or SEMrush to pull your competitors' top referring domains
- Filter for domains that AI engines trust. Industry publications, .edu sites, government sites, Wikipedia references, major media
- Look at which specific pages get the most backlinks. Those are the pages AI engines are most likely to cite
- Build content that earns links from the same sources. Original research, data studies, and controversial takes earn links. Generic "ultimate guides" don't
Our target is 1-2 new backlinks received per week for our top cited pages. That's the velocity needed to maintain and grow AI search visibility.
What We Changed at Warmly (And the Results)
I'm going to be very specific here. Not "we optimized our content." Exact changes, exact technical details, exact outcomes.
Before: The Problems
- No FAQ schema on any of our 312 blog posts
- No Organization schema anywhere on the site
- No Quick Answer blocks
- Comparison pages existed but lacked structured data
- Most content hadn't been updated in 4-6 months
- Zero content targeting "signal-based selling" or "AI SDR" keywords
- No structured pricing data in comparison posts
- Gen 1 solution pages had no schema, no FAQ, no "Ask AI" links
- Core Web Vitals were failing: CLS at 0.14 (needs to be under 0.1), LCP at 2.7 seconds (needs to be under 2.5s)
- 270 images on the homepage needed compression
- Google uses mobile-first indexing, so our CWV problems were dragging down every single page
The Changes
1. Programmatic FAQ Schema Deployment (312 Posts via Webflow API)
I didn't manually add FAQ schema to 312 blog posts. That would take weeks. Instead, I wrote a script that hit the Webflow API, iterated through every blog post, generated relevant FAQ questions and answers for each one, and deployed JSON-LD FAQ schema into the head tag. All 312 posts. One afternoon.
This is the difference between "we should add FAQ schema" and actually doing it at scale. If you have more than 50 blog posts, you need a programmatic approach. Manual doesn't scale.
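Conceptually, the script looked something like this sketch. The endpoint shape follows Webflow's v2 Data API, but the `head-code` field slug is an assumption; check Webflow's current API reference and your own collection's field names before running anything:

```python
import json
import os
import urllib.request

def build_faq_jsonld(faqs: list[tuple[str, str]]) -> str:
    """Render question/answer pairs as a JSON-LD FAQPage <script> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

def patch_item(collection_id: str, item_id: str, head_code: str) -> None:
    """PATCH one CMS item via the Webflow v2 API. The 'head-code' field
    slug is an assumption -- substitute your collection's actual field."""
    req = urllib.request.Request(
        f"https://api.webflow.com/v2/collections/{collection_id}/items/{item_id}",
        method="PATCH",
        headers={
            "Authorization": f"Bearer {os.environ['WEBFLOW_TOKEN']}",
            "Content-Type": "application/json",
        },
        data=json.dumps({"fieldData": {"head-code": head_code}}).encode(),
    )
    urllib.request.urlopen(req)  # loop this over every post in the collection
```

Generate the FAQ pairs however you like (we used an LLM seeded with each post's content), then loop `patch_item` across the collection.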
2. Organization Schema with Real Data
We deployed full Organization schema with:
- Founders listed as key people with social profile links
- Aggregate rating: 4.8 out of 5 based on 200+ reviews
- Social profiles (LinkedIn, Twitter, YouTube)
- Founding date, headquarters, company description
This gives AI engines a structured entity card for Warmly. When someone asks "tell me about Warmly," the AI can pull from this structured data instead of trying to piece together information from random web pages.
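Here's roughly what that JSON-LD looks like when generated from a script. Every value below is a placeholder mirroring the fields listed above; swap in your company's real data before deploying:

```python
import json

# Organization schema covering the fields listed above. All values are
# placeholders for illustration -- use your company's real data.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Your Company",
    "url": "https://www.example.com",
    "foundingDate": "2020",
    "description": "What your product does, in one sentence.",
    "founder": [{"@type": "Person", "name": "Founder Name"}],
    "sameAs": [
        "https://www.linkedin.com/company/your-company",
        "https://twitter.com/yourcompany",
        "https://www.youtube.com/@yourcompany",
    ],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "bestRating": "5",
        "ratingCount": "200",
    },
}
print(f'<script type="application/ld+json">{json.dumps(organization, indent=2)}</script>')
```

Drop the rendered `<script>` tag site-wide (we put it in the global head), not just on the homepage.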
3. Core Web Vitals Fixes
Google uses CWV as a ranking signal. And since SERP rankings feed AI search results, bad CWV hurts your AI visibility too.
What we fixed:
- CLS (Cumulative Layout Shift): Was 0.14, needed under 0.1. Fixed image dimensions, lazy loading, and font display swap
- LCP (Largest Contentful Paint): Was 2.7s, needed under 2.5s. Compressed hero images, implemented CDN caching, deferred non-critical JavaScript
- 270 homepage images: Compressed, converted to WebP, implemented responsive sizing
Google's mobile-first indexing means these performance problems affect every page on your site. Fix CWV once and every page benefits.
4. Quick Answer Blocks on Every Post
First 500 words now include a structured "Quick Answer" that directly answers the title question. Bold key recommendations. Specific numbers. "Best X for Y" format.
5. Mass Content Refresh
Updated every blog post we had. Fresh dates. Updated stats. New competitor pricing. Added sections for 2026 trends. This alone moved the needle on Perplexity, where freshness carries a 40% weight.
6. Comparison Tables with Real Pricing
Not "contact sales" or "custom pricing." Actual numbers. Transparent pricing that AI engines can extract and cite. This matters because AI tools love concrete data.
7. Gen 2 Solution Pages
Our new pages have full schema, FAQ blocks, "Ask AI" links, and structured data throughout. Our Gen 1 pages have nothing. The difference in AI citation performance is massive.
8. New Content for Missing Queries
We wrote dedicated content for every query where we were invisible. AI marketing agents. AI marketing automation. GTM tools. AI outbound sales tools. AI sales agents. Data enrichment tools. Apollo pricing. 6sense pricing. Clay pricing. Signal-based selling. Intent data alternatives.
9. TrustPilot for AEO Visibility
We signed up for TrustPilot specifically for AI search visibility. G2 covers the SaaS buyer audience. But TrustPilot helps with broader product search and reviews in ChatGPT. If a buyer asks "reviews of [your product]" in an AI tool, TrustPilot reviews show up alongside G2.
The Results
After implementing these changes:
- AI search went from 5% to 30% of inbound demo requests between February and March 2026
- Enterprise SaaS companies, identity security firms, fleet management companies, and salon software companies all cited AI tools as their discovery channel
- Warmly now shows up on Perplexity for key queries where we were previously invisible
- That traffic converts at 14.2% vs 2.8% for Google organic
- Our demand generation efforts now account for AI discovery as a primary channel
- We went from invisible on "signal-based selling" to being cited consistently
- Every daily sales analysis now includes ChatGPT and Claude as real acquisition channels
But I want to be honest. We're still not where we need to be. We're still invisible for "best alternatives to ZoomInfo" and "best intent data platforms." Those are high-volume, high-intent queries. Fixing them is our Q2 priority. And our CWV scores, while improved, still need work. CLS is borderline. We have more images to compress.
Context is the moat. And right now, we're still building that moat.
The AI-Powered SEO Operations Workflow
I want to show you how we actually run SEO operations now, because it's fundamentally different from how most teams do it. We use AI tools to orchestrate the entire workflow.
Here's the system:
Sybill (call recording AI) captures every sales and customer call. It extracts the questions prospects ask, the objections they raise, and the language they use. This feeds our content idea pipeline. When 5 prospects in a month ask "how does signal-based selling actually work," that becomes a blog post.
Webflow API gives us programmatic access to our entire blog catalog. We can audit every post, check which ones have schema markup, identify stale content, and deploy updates at scale. Not clicking through a CMS. API calls.
Google Search Console shows us which queries we're ranking for, which are declining, and where we have impression-rich but click-poor opportunities. These are the queries where better content structure could capture AI citations.
Google Analytics tells us which pages drive conversions and which AI search referrals are performing.
Warmly's own database shows us which ICP-fit companies are visiting specific blog posts. If a page gets traffic but no ICP visits, it's attracting the wrong audience. If it gets ICP visits but no conversions, the content or CTA needs work.
SE Ranking provides keyword volume, competition scores, and SERP feature data. This helps us prioritize which keywords to target with new content.
Google Ads Keyword Planner validates search appetite for new topics before we invest in writing them.
The whole thing is orchestrated with AI coding tools. We write scripts that pull data from all these sources, cross-reference them, and generate prioritized content briefs. A single person can manage the SEO operation that used to require a team of 3-4.
This is the real unlock. It's not just "use AI to write blog posts." It's using AI to run the entire content intelligence operation. Identifying what to write, how to structure it, when to update it, and how to measure whether it's working.
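A toy version of that cross-referencing step looks like this. The scoring weights are invented for illustration; the real pipeline pulls live data from each tool's API instead of hardcoded samples:

```python
# Merge signals from the sources above into a prioritized topic list.
# Weights are illustrative: invisible-in-AI and asked-on-calls weigh
# heaviest, impression-rich/click-poor queries next.

def prioritize(topics: list[dict]) -> list[dict]:
    """Score and sort candidate topics for the content brief queue."""
    for t in topics:
        low_ctr = t["clicks"] / max(t["impressions"], 1) < 0.01
        t["score"] = (
            3 * t["ai_invisible"]          # invisible in our AI audit
            + 2 * t["call_mentions"]       # prospects asked about it (Sybill)
            + (t["impressions"] / 1000 if low_ctr else 0)  # GSC opportunity
        )
    return sorted(topics, key=lambda t: t["score"], reverse=True)

topics = [
    {"topic": "signal-based selling", "ai_invisible": True,
     "call_mentions": 5, "impressions": 4000, "clicks": 12},
    {"topic": "visitor identification", "ai_invisible": False,
     "call_mentions": 2, "impressions": 9000, "clicks": 400},
]
briefs = prioritize(topics)
print([t["topic"] for t in briefs])  # ['signal-based selling', 'visitor identification']
```

The point isn't the specific weights. It's that every brief is justified by data from at least one of those sources, not gut feel.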
Our content targets:
- 5 new blog posts that rank per week
- 1-2 new backlinks received for top cited pages per week
- Updated blog posts for any pages dropping in rank
- Every post SEO/GEO/AEO optimized before publishing
That's content velocity that feeds topical authority. And topical authority is what AI engines use to decide which brand is the expert in a category.
The GEO Playbook for B2B Marketers
You don't need 3 months. I'm giving you the compressed version. Ten steps. Do them in order.
Step 1: Audit Your AI Search Visibility (Including Brand Sentiment)
Go to ChatGPT, Perplexity, Gemini, Claude, and Copilot. Run 10-15 queries your buyers would actually ask. Track where you show up and where you don't.
Be specific. "Best [your category] tools 2026." "[Your product] vs [competitor]." "[Your category] alternatives." "How to [problem you solve]."
But don't stop at category queries. Ask AI tools what's wrong with your product. "What are the downsides of [your company]?" "What do users complain about with [your product]?" Document every negative thing the AI surfaces and trace it back to its source. That G2 review from 2023? The Reddit thread from a frustrated user? Those are now showing up in AI answers about your brand.
Document everything. The gaps are your roadmap. The negatives are your fires to put out.
Step 2: Deploy Schema Markup at Scale
FAQ schema is the starting point, not the finish line.
Deploy these schema types across your site:
- FAQ Schema on every blog post and solution page (8-20 questions each)
- Organization Schema with founders, social profiles, aggregate rating, founding date
- Article Schema on every blog post with author, publish date, modified date
- Product Schema on your product/pricing pages with features and pricing
If you have more than 50 pages, do this programmatically. Use your CMS's API. We updated 312 posts in one afternoon via the Webflow API. Manual schema deployment doesn't scale.
Step 3: Add Quick Answer Blocks to Every Page
Within the first 500 words of every important page, add a structured Quick Answer. Direct answer to the page title. Bold key recommendations. Specific numbers.
AI engines scan the top of your content first. 44.2% of all LLM citations come from the first 30% of text. Put your best stuff up top.
Step 4: Fix Your Core Web Vitals
CWV affects your Google rankings. Google rankings feed AI search results. Bad CWV is a hidden drag on your AI visibility.
Check your CWV in Google Search Console or PageSpeed Insights:
- CLS (Cumulative Layout Shift): Needs to be under 0.1
- LCP (Largest Contentful Paint): Needs to be under 2.5 seconds
- INP (Interaction to Next Paint): Needs to be under 200ms
Common fixes: compress images, add explicit width/height dimensions, implement lazy loading, defer non-critical JavaScript, use a CDN. Google uses mobile-first indexing, so test on mobile specifically.
Step 5: Refresh Everything Within 60 Days
Go through every piece of content. Update dates, statistics, pricing, tool lists, and screenshots. Pages updated within 60 days are 1.9x more likely to appear in AI citations.
I did 312 posts in one afternoon via API. You can batch this. It doesn't need to be a full rewrite. Update the stats, add a 2026 section, refresh the intro.
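The triage step is trivial to script. A minimal sketch, assuming you can export each post's last-updated date from your CMS (dates below are sample data):

```python
from datetime import date, timedelta

# Flag any post not touched within the 60-day freshness window cited above.

def stale_posts(posts: list[dict], today: date, window_days: int = 60) -> list[str]:
    """Return slugs of posts whose last update falls outside the window."""
    cutoff = today - timedelta(days=window_days)
    return [p["slug"] for p in posts if p["updated"] < cutoff]

posts = [
    {"slug": "visitor-id-tools-2026", "updated": date(2026, 3, 1)},
    {"slug": "zoominfo-alternatives", "updated": date(2025, 9, 14)},
]
print(stale_posts(posts, today=date(2026, 3, 20)))  # ['zoominfo-alternatives']
```

Feed the output straight into your refresh queue, oldest first.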
Step 6: Build Entity-Dense Content
Every important page should mention 15+ connected entities. Competitors, tools, concepts, categories, use cases, personas.
Don't just write about your product. Write about the ecosystem. AI lead scoring in the context of lead generation metrics. Visitor identification in the context of demand creation vs. demand capture.
Content with 15+ connected entities has 4.8x higher selection probability. That's not a small edge. That's a category advantage.
Step 7: Manage Your Reviews as an AEO Strategy
This isn't just customer success work anymore. It's AI search optimization.
- Audit what AI tools say about your brand (both positive and negative)
- Respond to and resolve negative reviews on G2, TrustPilot, and Capterra. Not by gaming. By fixing issues and asking satisfied customers to share their experience
- Build review volume. AI engines cite platforms with more reviews more confidently
- Consider platforms beyond G2. TrustPilot helps with broader AI search visibility. Capterra covers a different buyer persona. The more platforms you're reviewed on, the more citation sources AI engines can pull from
Step 8: Create Video Content for AI Search
Google AI Overviews now include video carousels. ChatGPT pulls from web search results that include video. YouTube videos rank in AI answers.
Start with your top 10 performing blog posts. Create video versions. They don't need to be polished. Screen recordings, founder walkthroughs, product demos. Publish on YouTube with optimized titles and descriptions. Embed in the original blog post.
Two visibility spots for the price of one.
Step 9: Distribute Across Authoritative Sources
Your blog alone isn't enough. You need presence across 5+ authoritative sources for 60-80% higher citation rates.
- Reddit: Participate genuinely in relevant subreddits (r/sales, r/SaaS, r/marketing)
- Industry publications: Guest posts, contributed articles, original research
- Review sites: G2, TrustPilot, TrustRadius, Capterra (with detailed, recent reviews)
- YouTube: Video content that covers the same topics as your blog posts
- LinkedIn: B2B influencer marketing and thought leadership posts
- Partner content: Co-created content with complementary tools
Vercel reported that ChatGPT now refers approximately 10% of new user signups, up from 1% six months ago. That's the trajectory. AI search is becoming a primary acquisition channel.
Step 10: Build a Measurement Framework (Because "Results Are Random")
I learned something important from an SEO agency we spoke with: there's no reliable way to measure AEO directly because AI search results are different every time you query. The same question returns different sources, different recommendations, different citations. There's no stable "ranking" to track.
The data backs this up. Profound's research on AI search volatility found that citation drift runs 40-60% monthly. Meaning: 54% of domains cited by ChatGPT this month weren't cited last month for the same query. Google AI Overviews is even worse at 59%. Over six months, drift balloons to 70-90%. You need 60-100 repeated queries per prompt to get statistically meaningful data. One-time audits are useless.
But you can still measure. Here's how:
Proxy metrics (leading indicators):
- SERP rankings for target keywords. SERP powers AEO. If you do well in SEO, the AI search visibility should follow
- Schema markup coverage across your site
- Core Web Vitals scores
- Content freshness (% of pages updated in last 60 days)
- Review volume and sentiment on G2/TrustPilot
Direct metrics (lagging indicators):
- Referral traffic from chat.openai.com, perplexity.ai, claude.ai, gemini.google.com
- Conversion rate of AI search traffic vs other channels
- % of demo requests that cite AI tools as discovery channel (ask in your intake form)
- Manual AI audit: ask AI tools about your category monthly and track mention frequency
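If your analytics tool doesn't break out AI referrals, a small classifier over raw referrer URLs does the job. The domain list follows the referrers named above, plus a couple of likely variants (extend it as new engines show up in your logs):

```python
from urllib.parse import urlparse

# Classify inbound referrers into AI-search vs everything else, for
# post-processing analytics exports. Domain list is a starting point.

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai",
    "claude.ai", "gemini.google.com", "copilot.microsoft.com",
}

def channel(referrer_url: str) -> str:
    """Return 'ai_search' if the referrer is a known AI engine."""
    host = urlparse(referrer_url).netloc.removeprefix("www.")
    return "ai_search" if host in AI_DOMAINS else "other"

print(channel("https://chat.openai.com/"))           # ai_search
print(channel("https://www.google.com/search?q=x"))  # other
```

Tag sessions with this at ingest and the conversion-rate comparison (14.2% vs 2.8%) falls out of your existing funnel reports.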
The weekly check:
Run your top 5 target queries in ChatGPT, Perplexity, and Claude every Monday. Document whether you appear. Screenshot it. Over 4-8 weeks, you'll see patterns even if individual results vary.
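To make those Monday checks add up to a stable signal despite the citation drift, log each run and compute mention rates per engine over time. A sketch with invented sample data:

```python
from collections import defaultdict

# Tally how often your brand appears across repeated runs of the same
# prompt, per engine. The drift data above suggests you need many runs
# before the rate means anything -- treat single checks as noise.

def mention_rate(runs: list[dict]) -> dict[str, float]:
    """Return per-engine fraction of runs where the brand was mentioned."""
    hits, totals = defaultdict(int), defaultdict(int)
    for run in runs:
        totals[run["engine"]] += 1
        hits[run["engine"]] += run["mentioned"]
    return {engine: hits[engine] / totals[engine] for engine in totals}

runs = [
    {"engine": "chatgpt", "mentioned": True},
    {"engine": "chatgpt", "mentioned": False},
    {"engine": "perplexity", "mentioned": True},
    {"engine": "perplexity", "mentioned": True},
]
print(mention_rate(runs))  # {'chatgpt': 0.5, 'perplexity': 1.0}
```

Append each Monday's screenshots-and-yes/no results to the log; the trend line is the metric, not any single week.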
At Warmly, the most reliable metric has been self-reported attribution on our demo request form. When buyers tell us "I found you on ChatGPT," that's the ground truth. And it went from 5% to 30% in two months.
Check out our GTM strategy and planning guide for how to build AI search visibility into your broader go-to-market motion.
Content Velocity: Why Publishing 5 Posts Per Week Matters
I want to address something that most GEO guides skip: volume.
Topical authority is how AI engines decide which brand is the expert in a category. It's not about one killer blog post. It's about having 50, 100, 200 pieces of content that collectively cover every angle of your space.
When ChatGPT gets asked "best visitor identification tools," it doesn't just look at one page. It evaluates your entire domain's coverage of that topic. How many pages mention visitor identification? How many related subtopics do you cover? How fresh is the content? How interconnected are the pages?
That's why our target is 5 new blog posts that rank per week. Not 5 mediocre posts. 5 posts that are SEO/GEO/AEO optimized, entity-dense, schema-marked-up, and targeting specific keyword clusters.
Here's how we pick what to write:
1. Sales call analysis (via Sybill): What questions are prospects asking this week?
2. Search Console data: Where do we have impressions but low clicks?
3. AI audit results: Which queries are we invisible for?
4. SE Ranking data: What's the search volume and competition for potential topics?
5. Warmly visitor data: Which ICP companies are visiting which blog posts?
Every post gets the full treatment: Quick Answer block, FAQ schema, 15+ entities, internal links to related content. No thin content. No filler.
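The FAQ schema piece of that treatment is templatable. Here's a minimal sketch that emits a schema.org FAQPage JSON-LD block from question/answer pairs (the helper name is ours, not a library API; the `@type` fields follow the schema.org FAQPage spec):

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> str:
    """Emit a schema.org FAQPage JSON-LD <script> block for (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

# Example: generate the block for a post's FAQ section
print(faq_schema([
    ("What is website visitor identification?",
     "Software that reveals which companies are browsing your site."),
]))
```

Generating the block programmatically means every one of those 5 weekly posts ships with consistent, valid markup instead of hand-edited JSON.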
The compound effect is real. After 3 months of this velocity, we have enough content to cover our entire category from multiple angles. AI engines start treating us as a topical authority. And that authority compounds into more citations across more queries.
What This Means for Intent Data and Visitor Identification
If you're in the intent data or visitor identification space, this shift has specific implications.
Your buyers are asking AI tools which intent data platform to use. They're asking which visitor identification tool is best for their company size, their tech stack, their budget. And if you're not showing up in those answers, your competitors are.
Think about it from the buyer's perspective. They open Perplexity and type "best website visitor identification tools for B2B SaaS companies under 500 employees." The AI gives them 5 recommendations with pros, cons, and pricing. They click through to 2-3 of those. They never Google the other 15 tools that exist.
This is demand creation vs. demand capture in its newest form. AI search is creating demand for the tools it recommends and capturing it simultaneously.
At Warmly, we're building our AI-powered inbound agent to work with this new reality. When someone lands on our site from an AI search referral, they've already been pre-qualified by the AI's recommendation. Our job is to convert that high-intent visit into a conversation.
And the conversion data proves it works. 14.2% conversion from AI search versus 2.8% from Google organic. AI search visitors are 5x more likely to convert because they arrive with context. They already know what you do. They already believe you might be a fit.
The companies that figure out AI marketing automation and agentic AI for this new search landscape will dominate the next 3-5 years of B2B SaaS. The ones that keep optimizing only for Google will slowly become invisible.
Explore our resources and playbooks for more on building AI-native GTM motions.
GEO/AEO Tool Comparison: What to Use and What It Costs
Here's the honest comparison of every tool we've evaluated for AI search optimization. Some we use. Some we tested and dropped.
| Tool | What It Does | Best For | Pricing | Our Take |
| --- | --- | --- | --- | --- |
| Profound | AI answer engine monitoring, citation tracking, prompt volume data | Measuring AI visibility at scale, tracking citation drift | Custom (enterprise) | Best data on AI search. Their research on 50M+ ChatGPT prompts is unmatched. Worth it if AI search is a primary channel |
| Relixir (YC) | GEO content optimization, AI search visibility scoring | Optimizing existing content for AI citations | Custom (startup-friendly) | We use this. Their insight that "if you optimize well for SEO, GEO usually benefits" matches our data |
| Surfer SEO | On-page SEO optimization, content scoring | Ensuring content hits SEO fundamentals before layering GEO | $89-$219/mo | Solid for SEO baseline. Doesn't specifically optimize for AI search |
| Frase | AI content briefs, SERP analysis, question research | Finding the questions AI tools are being asked | $15-$115/mo | Good for research phase. We use it for content briefs |
| Clearscope | Content optimization, keyword coverage | Ensuring topical completeness (entity density) | $170+/mo | Premium but effective for entity-dense content |
| AlsoAsked | Question-based keyword research, PAA mapping | Finding FAQ schema questions that match AI prompts | Free-$47/mo | Essential for FAQ research. Maps the exact questions people ask |
| G2 | Software reviews, buyer intent | AEO visibility. Reviews show up directly in AI answers | Free to claim | Non-negotiable. G2 reviews appear word-for-word in ChatGPT answers about your brand |
| TrustPilot | Broader product reviews | AEO for non-SaaS searches and ChatGPT visibility | Custom | We just signed up specifically for AI search visibility |
| Ahrefs | Backlink analysis, keyword research | Reverse-engineering competitor backlinks that drive AI citations | $99-$449/mo | Backlinks = AI trust signals. Ahrefs shows you where to build them |
The stack we actually run: Relixir for GEO optimization + G2/TrustPilot for review-based AEO + Ahrefs for backlink strategy + AlsoAsked for FAQ research + our own AI-powered workflow (Sybill + GSC + GA + Warmly DB) for content intelligence. Total cost: roughly $500-700/month plus the tools we already had.
You don't need all of these. Start with G2 (free), AlsoAsked (free tier), and Google Search Console (free). Add Relixir or Profound when AI search becomes 10%+ of your inbound.
The Competitive Landscape Is Wide Open
I looked at what our competitors are doing with GEO. The answer is basically nothing.
6sense has one blog post about LLM buyer behavior. One. That's it.
Zero competitors have published a practical "how to optimize for AI search" guide. Nobody has shared their own data. Nobody has been transparent about where they're failing.
This is the biggest whitespace in the entire competitive landscape right now. The company that owns the "generative engine optimization for B2B" narrative will have a massive advantage as AI search grows from 10% to 50% of B2B research traffic.
Qualified is doing something interesting. They've published original research reports, which is a strong GEO move because AI engines love citing original data. But they haven't connected it to a practical playbook. And we've written about what makes us different from Qualified in our comparison page.
We're betting that transparency wins. Showing our actual results, including the failures, builds more trust than a polished case study ever could.
Should You Stop Investing in SEO?
No. Absolutely not.
AI search engines use Google and Bing under the hood. When someone asks ChatGPT a question, it often runs web searches in the background and synthesizes results. If you win at SEO, you're more likely to win at AEO and GEO too.
As the agency we consulted put it: you can't measure AEO precisely because results vary every time you search. But SERP powers AEO. Do well in SEO, and the rest should follow.
Think of it as layers:
- SEO gets you indexed and ranked
- AEO gets you cited in answer boxes and AI overviews
- GEO gets you recommended in AI-generated responses
They're complementary, not competing. The companies that win will do all three.
What you should stop doing is treating SEO as the only game. Add GEO to your GTM toolkit. Add it to your content calendar. Measure it.
The GEO Tool Stack
Here's what we actually use. Not theoretical recommendations. The tools running in our stack right now.
For GEO Content Optimization:
- Relixir (YC-backed): GEO-specific content optimization. Their data shows longer-form content (around 2,000 words) tends to perform better for AI citations. We use it to score content before publishing
- Surfer SEO: On-page optimization and content scoring for traditional SEO (which feeds GEO)
- Frase: AI content briefs and SERP analysis
For Technical SEO/AEO:
- Google Search Console: Keyword performance, CWV monitoring, indexing status
- Google PageSpeed Insights: Core Web Vitals diagnostics
- Schema.org generators: For FAQ, Organization, Article, and Product schema markup
For Review Management (AEO):
- G2: Primary SaaS review platform. Directly cited in AI search answers
- TrustPilot: Broader product review visibility. Helps with ChatGPT visibility specifically
- Capterra: Additional review source for distributed authority
For Content Intelligence:
- Sybill: Call recording AI that extracts prospect questions for content ideas
- SE Ranking: Keyword volume, competition, and SERP feature data
- Google Ads Keyword Planner: Search appetite validation for new topics
- Warmly: Our own tool shows which ICP companies visit which blog posts. If your target accounts aren't reading your content, it doesn't matter how well it ranks
For Programmatic SEO Operations:
- Webflow API: Programmatic content updates, schema deployment, bulk operations
- Claude Code: Orchestrates the entire workflow. Pulls data from all sources, generates content briefs, deploys updates
- Google Analytics: Conversion tracking, AI referral source analysis
The total cost of this stack is way less than hiring a full SEO team. And it moves faster.
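For concreteness, here's a minimal sketch of one bulk-update pass against Webflow's Data API v2, the kind of thing behind the 312-post afternoon. The endpoint shape follows the documented v2 pattern, but the field slugs (`last-updated`, `schema-markup`) are assumptions; check your own collection's schema before running anything like this:

```python
import json
import urllib.request

API = "https://api.webflow.com/v2"

def refreshed_fields(field_data: dict, schema_jsonld: str, stamp: str) -> dict:
    """Pure merge step: add a freshness stamp and schema block to one post's fields."""
    updated = dict(field_data)                # copy; don't mutate the original
    updated["last-updated"] = stamp           # assumed field slug
    updated["schema-markup"] = schema_jsonld  # assumed field slug
    return updated

def update_item(token: str, collection_id: str, item: dict,
                schema_jsonld: str, stamp: str) -> None:
    """PATCH one CMS item back to Webflow with refreshed field data."""
    payload = {"fieldData": refreshed_fields(item["fieldData"], schema_jsonld, stamp)}
    req = urllib.request.Request(
        f"{API}/collections/{collection_id}/items/{item['id']}",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

Loop `update_item` over the items returned by the collection's list endpoint (mind Webflow's rate limits) and you have the whole pass.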
FAQs
Do B2B buyers actually use ChatGPT to research vendors?
Yes. 94% of B2B buyers now use LLMs during the purchasing process, according to 6sense's 2026 research. 68% start in AI tools before Google. They ask questions like "best [category] tools," "[tool A] vs [tool B]," and "alternatives to [incumbent vendor]." At Warmly, AI search went from 5% to 30% of inbound demo requests between February and March 2026.
How do I get my company mentioned in ChatGPT?
Build authoritative, structured content that AI engines can easily extract and cite. Specifically: add FAQ schema and Organization schema markup, include Quick Answer blocks in the first 500 words, update content every 60 days, build presence across 5+ authoritative sources (your site, Reddit, review sites, industry publications, YouTube), manage your reviews on G2 and TrustPilot, and fix Core Web Vitals. Brands with distributed authority see 60-80% higher citation rates.
What is generative engine optimization (GEO)?
Generative engine optimization is the practice of optimizing your content to appear in AI-generated search responses from tools like ChatGPT, Perplexity, Gemini, and Claude. It includes structured data markup (FAQ, Organization, Article schema), entity-dense content, freshness signals, distributed source authority, review management, video content optimization, and Core Web Vitals performance. It's the third layer of modern search strategy, alongside SEO and AEO.
How is AI changing B2B buying?
AI tools are replacing the early stages of the B2B buying journey. Instead of Googling, reading 10 blog posts, and building a shortlist manually, buyers ask AI tools for curated recommendations. 68% start their vendor research in AI tools. This means the AI's recommendation becomes the buyer's shortlist. If you're not recommended, you're not considered. At Warmly, we've seen enterprise companies across SaaS, security, and fleet management all cite AI tools as their discovery channel.
Should I stop investing in SEO?
No. AI search engines use Google and Bing results under the hood. Strong SEO foundations improve your GEO performance. SERP powers AEO, so doing well in SEO helps the rest follow. But you should add GEO-specific tactics: FAQ schema, Organization schema, Quick Answer blocks, content freshness, entity density, review management, video content, and multi-source distribution. Treat GEO as an additional layer on top of SEO, not a replacement.
How do I track AI search referral traffic?
Set up UTM parameters for AI referral sources. In Google Analytics, look for referral traffic from chatgpt.com (formerly chat.openai.com), perplexity.ai, gemini.google.com, and claude.ai. Add a field to your demo request form asking "how did you hear about us" and track AI tool mentions. The direct measurement challenge is that AI search results vary every time you query, so there's no stable "ranking" to monitor. Use self-reported attribution as ground truth and SERP performance as a leading indicator. At Warmly, AI search traffic converts at 14.2%, which is 5x higher than Google organic.
What content format works best for AI citations?
Structured content with clear question-and-answer formats, comparison tables with specific data, numbered lists, and bold key phrases. 44.2% of all LLM citations come from the first 30% of text content, so front-load your most important information. Content with 15+ connected entities has 4.8x higher selection probability. Longer-form content around 2,000 words tends to perform better for AI citations according to Relixir's data.
How often should I update content for AI search?
At minimum, every 60 days. Pages updated within 60 days are 1.9x more likely to appear in AI citations. For competitive queries, monthly updates are better. The update doesn't need to be a full rewrite. Refresh stats, add new tools, update pricing, and add a current-year section. We updated 312 posts in one afternoon via the Webflow API. Programmatic approaches beat manual ones at scale.
What's the difference between AEO and GEO?
AEO (Answer Engine Optimization) focuses on getting your content cited in AI overviews, featured snippets, and zero-click answers. GEO (Generative Engine Optimization) focuses on being recommended in AI-generated responses like ChatGPT conversations and Perplexity answers. AEO is about answering questions. GEO is about being recommended as a solution. Both benefit from the same foundations: structured data, fresh content, and source authority.
How does ChatGPT decide which vendors to recommend?
ChatGPT evaluates source authority (Wikipedia is the #1 source at 47.9%), referring domain strength (30% weight), content freshness, structured data availability, and cross-source consistency. Critically, it also pulls from review platforms like G2 and TrustPilot. Negative reviews can surface directly in AI answers about your brand. Pages need authority, structure, recency, and positive sentiment to be cited consistently.
How does Perplexity decide which vendors to recommend?
Perplexity weighs content freshness heavily (40% weight) and pulls significantly from Reddit (46.7% of citations). Recent, well-structured content that's discussed positively in Reddit communities has the highest citation probability on Perplexity.
Is it worth optimizing for multiple AI search engines?
Yes. Only 11% of domains are cited by both ChatGPT and Perplexity. Each engine has different citation preferences. Optimizing for just one leaves you invisible on the others. The good news: the fundamentals (structured data, freshness, authority, reviews) help across all platforms.
What is the ROI of AI search optimization?
AI search traffic converts at 14.2% compared to 2.8% for Google organic at Warmly. That's 5x higher conversion. We went from 5% to 30% of inbound demo requests coming from AI search in just 60 days. Vercel reports that ChatGPT now drives approximately 10% of new signups, up from 1% six months ago. As AI search grows from roughly 10% to potentially 50% of B2B research traffic over the next 2-3 years, the ROI compounds.
How long does GEO take to show results?
Faster than traditional SEO. We saw Perplexity citation improvements within 2-4 weeks of our mass content update. ChatGPT results took 4-6 weeks. The key variable is how quickly the AI engines re-crawl and reindex your updated content. Fresh, structured content gets picked up faster. Revenue attribution (AI search as % of demos) shifted noticeably within 60 days.
Do negative reviews affect AI search visibility?
Yes. Negative reviews on G2, TrustPilot, and other review platforms can surface directly in AI search answers about your brand. When buyers ask AI tools about downsides of your product, the AI pulls from these review sources. Our CEO tracked a specific negative G2 review that was appearing in AI answers, spent two months resolving the underlying issue with the reviewer, and got it updated. Review management is now an AEO strategy, not just a customer success task.
Does video content help with AI search?
Yes. Video captures spots in Google AI Overviews and feeds into ChatGPT visibility. YouTube videos appear in AI Overview carousels for relevant queries, and since ChatGPT uses web search results, video content indirectly improves ChatGPT visibility too. Create video versions of top blog posts, optimize for target keywords, host on YouTube, and embed in original posts for dual visibility.
Last Updated: March 2026