

AIRIX Team · 10 min read

How AI chatbots decide which businesses to surface — and how yours can be one of them.

How AI Chatbots Decide Which Businesses to Recommend (And How to Get Picked)

ChatGPT now has 800 million weekly active users (Tidio / Fullview, 2025). Every one of them asks questions that could lead to a business recommendation. The problem? Most businesses have no idea how those recommendations actually get made — and fewer still are doing anything to influence them.

This guide breaks down the exact mechanics behind AI chatbot recommendations, from training data signals to schema markup to the emerging metric called "Share of Model." Follow these steps and you'll understand — and act on — what it actually takes to get your business recommended by an LLM in 2026.


Key Takeaways

  • AI chatbots use a different selection process than Google — you're either mentioned or invisible
  • Schema markup boosts AI recommendation chances by over 36%, yet only 12.4% of domains use it
  • Platform citation behavior varies sharply: Perplexity averages 6.6 citations per response vs. ChatGPT's 2.6
  • "Share of Model" is the new metric that measures your brand's weight inside an LLM's knowledge base
  • Structured content formats like comparison pages drive 25%+ of all AI citations

What You Need Before You Start

Before optimizing for AI recommendations, you need three things in place: a publicly accessible website with crawlable content, a basic understanding of how LLMs differ from search engines, and a willingness to treat your content as training data — not just web pages. No coding skills required for most of these steps.


Step 1: Understand How LLMs Actually Select Businesses

67% of businesses now use AI systems powered by LLMs, up from 33% the prior period (McKinsey Technology Trends Outlook 2025 via Softweb Solutions, 2025). That doubles the competition for a remarkably small number of recommendation slots.

Here's the core difference from Google: traditional search shows 10 blue links. An AI chatbot gives one answer — and mentions two or three businesses at most. Perplexity averages 6.6 citations per response. ChatGPT averages just 2.6 (xFunnel AI Citation Analysis via Metrics Rule, 2025).

This makes LLM recommendations binary. You're either in the answer or you don't exist. There's no "page two" to fall back on. That asymmetry changes the strategic calculus entirely for any business that cares about discovery.

How LLMs Build Their Recommendations

LLMs generate recommendations through two mechanisms: base model knowledge baked in during training, and Retrieval-Augmented Generation (RAG), where the model fetches live web content at query time. Perplexity relies heavily on RAG. ChatGPT uses a mix. Understanding which mechanism a platform uses tells you where to focus — web presence for RAG, broader entity authority for base model.
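To make the RAG mechanism concrete, here is a toy sketch of what happens at query time: the system ranks web snippets by overlap with the user's question, then packs the winners into the prompt the model actually sees. All business names and snippets below are hypothetical, and real retrieval uses semantic embeddings rather than word overlap.

```python
import re

def retrieve(query, documents, k=2):
    """Toy retrieval step: rank documents by word overlap with the query."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    """Assemble the prompt a RAG system sends to the model:
    fresh web snippets first, then the user's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Acme Bakery: fresh sourdough daily in Springfield.",
    "Springfield weather forecast for the weekend.",
    "Zenith Cafe: best espresso in Springfield, open late.",
]
prompt = build_rag_prompt("best bakery in Springfield", docs)
# The bakery and cafe snippets win the two context slots;
# the weather page never reaches the model.
```

The practical implication: if your page is not retrievable (crawlable, relevant, well-structured), a RAG-based platform cannot mention you, no matter how strong your brand is.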


Step 2: Audit Your "Share of Model"

INSEAD researchers introduced the metric "Share of Model" in 2025 to describe how frequently a brand appears in LLM-generated responses relative to its category competitors. Think of it as share of voice, but measured inside an AI's outputs rather than on a media channel.

To audit your current Share of Model, run 20-30 queries across ChatGPT, Perplexity, and Gemini that a customer might realistically ask about your category. Track how often your brand appears, how often competitors appear, and the sentiment of each mention. That raw frequency data is your baseline.
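A minimal sketch of how you might tally that baseline from your collected responses. The brand names and responses here are hypothetical placeholders; a real audit would also track sentiment and per-platform splits.

```python
from collections import Counter

def share_of_model(responses, brands):
    """Count how often each brand is mentioned across AI responses,
    then express each count as a share of all brand mentions."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = sum(counts.values())
    return {b: counts[b] / total if total else 0.0 for b in brands}

# Hypothetical responses collected from three chatbot queries
responses = [
    "For project tracking, Acme and Zenith are both solid picks.",
    "Most teams choose Acme for its reporting features.",
    "Zenith, Acme, and Orbit all offer free tiers.",
]
shares = share_of_model(responses, ["Acme", "Zenith", "Orbit"])
# Acme accounts for 3 of 6 total brand mentions -> 0.5 share
```

Re-run the same query set monthly so your share numbers are comparable over time.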

Why Share of Model Matters More Than Rankings

Shopping-related GenAI use grew 35% from February to November 2025, making it the third most popular GenAI application (BCG via Botpress, 2025). Meanwhile, 38% of companies plan to increase AI chatbot investment in 2026 (HubSpot via Botpress, 2025). The audience is moving toward AI-first discovery. Share of Model quantifies your position in that new arena before it becomes standard practice to track it.


Step 3: Implement Schema Markup Immediately

81% of web pages that receive citations from AI platforms include schema markup (AccuraCast study via Metrics Rule, 2025). That number alone should stop you in your tracks. Structured data is not a technical nicety — it is an LLM recommendation signal.

Implementing schema markup can boost a brand's chances of appearing in AI-generated summaries by over 36% (WPRiders Schema Implementation Analysis via Metrics Rule, 2025). Yet only 12.4% of all registered domains have implemented any Schema.org structured data (WPRiders / Metrics Rule, 2025). That gap is your opportunity.

Which Schema Types Matter Most for AI Recommendations

For most businesses, prioritize these schema types in this order:

  1. Organization — establishes your business as a named entity with verifiable attributes
  2. LocalBusiness (or its vertical subtypes) — critical for location-based recommendation queries
  3. Product / Service — describes your offerings in machine-readable format
  4. Review / AggregateRating — provides social proof signals that LLMs weight in recommendation contexts
  5. FAQPage — directly maps to the conversational query format LLMs use

Add these to your homepage and core service pages first. Use Google's Rich Results Test to validate. Then re-run your Share of Model audit 60 days later and compare.
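As a sketch of what the end result looks like, here is a minimal LocalBusiness block with an AggregateRating, generated as the JSON-LD `<script>` tag you would paste into your page `<head>`. Every value below is a placeholder to replace with your real business data, and the exact properties you need depend on your vertical.

```python
import json

# Placeholder business details -- replace with your real data
org = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Bakery",
    "url": "https://example.com",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "postalCode": "12345",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "132",
    },
}

# Emit the <script> tag to paste into the page <head>
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```

Most CMS platforms and SEO plugins can inject this for you; the point is that the output must be valid JSON-LD, which is why validating with the Rich Results Test matters.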


Step 4: Create Content Formats LLMs Actually Cite

Over 25% of all citations in AI-generated answers come from comparative and listicle content formats, while traditional blog posts account for only about 12%, based on the same analysis of 2.6 billion AI citations (Metrics Rule / xFunnel AI Citation Analysis, 2025). Format is not neutral — it is a ranking factor for LLMs.

This means your content strategy needs a deliberate shift. Comparison pages, "best of" lists, and structured buyer guides are not just good for conversions. They are the exact formats that LLMs reach for when constructing a recommendation response.

The Content Formats That Drive AI Citations

Build these content types around your category:

  • "[Your Product] vs. [Competitor]" comparison pages — LLMs use these when answering "which should I choose" queries
  • "Best [Category] for [Use Case]" listicles — directly match the intent behind recommendation queries
  • Structured FAQ pages — mirror the question-answer format of conversational AI interactions
  • "How to choose a [category]" guides — capture research-phase queries that precede a recommendation

Each page should open with a direct answer, include specific data points, and use H2/H3 structure that makes individual sections extractable. LLMs do not read pages the way humans do — they extract structured chunks.
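One way to picture "extractable chunks" is how retrieval pipelines typically split a page: each H2/H3 heading and the body beneath it becomes a standalone unit. The sketch below (with a hypothetical page) shows why a section that answers its own heading can be cited on its own, while a meandering wall of text cannot.

```python
import re

def extract_sections(markdown_text):
    """Split a markdown page into (heading, body) chunks at H2/H3
    boundaries -- roughly how retrieval pipelines chunk content."""
    parts = re.split(r"^(#{2,3} .+)$", markdown_text, flags=re.MULTILINE)
    sections = []
    # parts alternates: [preamble, heading1, body1, heading2, body2, ...]
    for i in range(1, len(parts) - 1, 2):
        sections.append((parts[i].lstrip("# ").strip(), parts[i + 1].strip()))
    return sections

page = """## Best CRMs for Small Teams
Acme CRM leads for teams under 10 people.

### How we evaluated
We compared pricing, onboarding time, and support.
"""
chunks = extract_sections(page)
# Two chunks, each usable as a standalone citation unit
```

If a chunk only makes sense with three paragraphs of context above it, an LLM is unlikely to quote it.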


Step 5: Build Entity Authority Across the Web

An LLM's recommendation confidence in your brand increases with the breadth and consistency of your entity presence across the web. Entity authority is how often and in what contexts your business name appears alongside authoritative signals — awards, press coverage, analyst mentions, review platform presence, and industry association listings.

Amazon generates 35% of its revenue from AI-powered recommendations (Prerender.io / Amazon Data, 2025). That is not accidental. Amazon has invested in being the most entity-rich source of product data on the web. The principle scales down: the more authoritative contexts your brand appears in, the more an LLM trusts it as a recommendation.

Practical Entity-Building Actions

Start with the sources LLMs are known to index heavily: Wikipedia (if you qualify), Crunchbase, LinkedIn company pages, industry-specific directories, G2 or Capterra (for software), Yelp or TripAdvisor (for local), and press coverage from recognized publications. Consistency matters — your business name, address, and category description should be identical across all sources. Inconsistency creates entity ambiguity, which reduces LLM confidence.
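A simple way to audit that consistency is to pull your listings from each directory and diff the core fields. The listings below are hypothetical; in practice you would compare name, address, phone, and category description across every source.

```python
# Hypothetical listings pulled from different directories
listings = [
    {"source": "Google", "name": "Acme Plumbing", "address": "12 Oak Ave, Dayton"},
    {"source": "Yelp", "name": "Acme Plumbing", "address": "12 Oak Ave, Dayton"},
    {"source": "OldDirectory", "name": "Acme Plumbing LLC", "address": "12 Oak Avenue, Dayton"},
]

def find_inconsistencies(listings, fields=("name", "address")):
    """Flag fields whose values differ across listings
    (after trimming whitespace and lowercasing)."""
    issues = {}
    for field in fields:
        values = {entry[field].strip().lower() for entry in listings}
        if len(values) > 1:
            issues[field] = sorted(values)
    return issues

issues = find_inconsistencies(listings)
# Both "name" and "address" vary across sources, so both get flagged
```

Here the stale directory entry is the one creating entity ambiguity, and fixing it is usually a five-minute edit.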


Step 6: Optimize for Platform-Specific Citation Behavior

Not all AI platforms recommend businesses the same way. Perplexity averages 6.6 citations per response. Gemini averages 6.1. ChatGPT averages just 2.6 (xFunnel AI Citation Analysis via Metrics Rule, 2025). Those differences should shape where you invest your optimization effort.

ChatGPT already accounted for 20% of Walmart's referral clicks in August 2025, up from 15% the prior month (Similarweb cited by BCG via Botpress, 2025). Despite having the fewest citations per response, ChatGPT generates significant commercial traffic — which means the stakes per citation slot are highest there.

Platform Prioritization by Business Type

For local and service businesses: Prioritize Perplexity and Gemini first. Higher citation counts mean more opportunities per response. These platforms rely more on real-time web retrieval (RAG), so fresh, schema-tagged local content has immediate impact.

For e-commerce and product businesses: ChatGPT should be your primary focus. Its 800 million weekly active users and proven referral traffic patterns — as evidenced by Walmart's data — make each citation slot disproportionately valuable. Focus on product schema, review data, and structured comparison content.

For B2B and software companies: Run balanced optimization across all three. B2B queries tend to be research-heavy, meaning longer response formats with more citations. Gemini's 6.1 average is a favorable environment.


Common Mistakes Businesses Make with AI Recommendations

Mistake 1: Treating AI Optimization Like SEO

The biggest error is applying Google SEO logic to LLM visibility. Keyword density, backlink volume, and meta descriptions are weak signals for LLMs. Structured data, entity consistency, and content format are strong signals. They require different tactics and different success metrics.

Mistake 2: Ignoring the Binary Nature of LLM Recommendations

Many businesses accept "not showing up today" the same way they accept being on page two of Google. The situations are not equivalent. With LLMs, inconsistent content, missing schema, or low entity authority means complete invisibility. There is no partial credit. Address this as a binary problem.

Mistake 3: Only Optimizing Your Own Website

12.3% of US online shoppers used generative AI tools for shopping-related activities as of July 2025 (Tidio Research, 2025). These users are asking questions that reference third-party review platforms, press articles, and directories — not just brand websites. Businesses that only optimize their own site miss the majority of the data LLMs use to form recommendations.

Mistake 4: Setting It and Forgetting It

LLMs are retrained and updated continuously. Perplexity fetches live web results. A schema implementation from six months ago may have a different impact today. Treat AI visibility as an ongoing channel with regular audits — not a one-time technical project.


Frequently Asked Questions

Q: How do AI chatbots like ChatGPT decide which businesses to recommend?

LLMs select businesses based on training data frequency, entity authority signals, structured data presence, and content format relevance. 81% of pages cited by AI platforms include schema markup (AccuraCast study via Metrics Rule, 2025). Platforms using RAG (like Perplexity) also fetch live web content at the moment of the query.

Small businesses can absolutely appear in AI recommendations — the mechanism is not purely based on brand size. Schema implementation, which only 12.4% of domains have done (WPRiders / Metrics Rule, 2025), creates a structural advantage that any business can exploit regardless of budget or size.

Q: What is "Share of Model" and why does it matter?

Share of Model measures how frequently your brand appears in LLM-generated responses relative to category competitors — the AI equivalent of share of voice. It matters because shopping-related GenAI use grew 35% in 2025 (BCG via Botpress, 2025), and businesses without a Share of Model baseline have no way to measure whether their optimization efforts are working.

Q: How is getting recommended by AI chatbots different from ranking on Google?

Google returns ranked lists where position 2-10 still captures traffic. AI chatbots generate singular answers with 2-7 business mentions per query. ChatGPT averages just 2.6 citations per response (xFunnel AI Citation Analysis via Metrics Rule, 2025), making the recommendation landscape fundamentally binary in a way traditional search is not.

Q: Which content formats are most likely to get cited by AI chatbots?

Comparative and listicle formats generate over 25% of all AI citations, versus roughly 12% for traditional blog posts, based on analysis of 2.6 billion citations (Metrics Rule / xFunnel AI Citation Analysis, 2025). FAQ pages, structured buyer guides, and "best of" lists consistently outperform narrative content for AI citation rates.


Conclusion

AI chatbot recommendations are not a future marketing channel. With 800 million weekly ChatGPT users and AI already driving 20% of Walmart's referral clicks, they are an active, measurable source of business discovery right now. The businesses winning in this environment are the ones that understand the signals — schema markup, entity authority, content format, platform-specific behavior — and act on them systematically.

The six steps in this guide give you a concrete starting point: understand how LLMs select businesses, audit your Share of Model, implement schema markup, build citation-friendly content, establish entity authority, and optimize for the platforms where your customers are most active.

If you want to skip the manual audit and see exactly where your business stands in AI recommendations today, AIRIX tracks your AI visibility across platforms and surfaces the specific gaps holding your brand back from LLM citations. Check your AI Visibility Score and find out whether AI chatbots are mentioning your business — or someone else's.

Check Your AI Visibility Score

Find out if AI chatbots like ChatGPT, Gemini, and Claude are recommending your business.

Scan Your Business Free