Vol. 01 · No. 14 Brooklyn, NY Sunday, April 26, 2026
SIGNAL/AI
Field Notes · Apr 26, 2026

Same Prompt. Four Engines. Here Is the Spread.

Signal tested identical local search prompts across ChatGPT, Perplexity, Google AI Overviews, and Gemini. The citation sources, confidence levels, and named businesses differed sharply.

Every new client audit starts the same way: I run the same 10 prompts across all four major AI engines before any work begins. Same query, same phrasing, same device, and a clean browser profile with no history. Each prompt runs three times per engine to filter out answer variability. Then I document exactly which businesses are named, which sources they're cited from, and how the response is structured.

The spread is almost always striking. The same prompt — "best optometrist in Crown Heights" or "private BJJ lessons Park Slope" — returns materially different answers on different engines, names different businesses, and draws on different underlying data sources. Understanding that spread is the entire job of AI search optimization. You can't optimize for one engine and expect the results to carry over. The four systems are making fundamentally different decisions.

The Methodology

Ten prompts, all Brooklyn local business queries. Each formatted as a natural language question, not a keyword string. The prompts covered optometry (Crown Heights), BJJ and martial arts (Park Slope, Gowanus), general fitness (Williamsburg, Prospect Heights), home services (Bushwick, Bed-Stuy), and a restaurant query (Carroll Gardens) included as a category where AI local search is known to perform differently.

Each prompt was run three times per engine over two days, at similar times of day. I took the modal result — the answer that appeared in at least two of three runs — as the representative output. Where answers varied significantly across runs, I noted that instability separately, because answer instability is itself a data point about how confidently that engine knows the relevant businesses in that category.
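The modal-result step above can be sketched as a small helper (illustrative only; the run data is placeholder, not real audit output):

```python
from collections import Counter

def modal_answer(runs, threshold=2):
    """Return the answer appearing in at least `threshold` runs,
    or None if no answer repeats -- instability worth noting separately."""
    answer, count = Counter(runs).most_common(1)[0]
    return answer if count >= threshold else None

# Three runs of one prompt on one engine (placeholder answers).
stable = modal_answer(["Nostrand Optical", "Nostrand Optical", "Other Practice"])
unstable = modal_answer(["Practice A", "Practice B", "Practice C"])  # None
```

A `None` result is recorded as an unstable answer for that engine and category rather than discarded.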

The engines tested: ChatGPT (GPT-4o with search enabled), Perplexity (default mode with live web search), Google AI Overviews (triggered via Chrome, logged out), and Gemini (Gemini 1.5 Pro via gemini.google.com).

Why Each Engine Cites Differently

Before going into the results, the structural reason for the spread: each engine is retrieving information from a different underlying data layer, through a different retrieval mechanism, with different confidence thresholds for naming a specific local business.

ChatGPT: Training data plus Bing

When ChatGPT search is enabled, it's running a Bing web search and synthesizing results with its training data. This means its local answers have two layers: what the model learned during training (which has a knowledge cutoff and reflects whatever sources made it into the training corpus) and what Bing returns for the live query. ChatGPT tends to be more confident naming businesses with strong Bing-indexed web presences and significant review volume on platforms Bing indexes well. It also shows a slight bias toward businesses that appear on Yelp, since Yelp content is prominent in Bing's local results.

Perplexity: Live web retrieval

Perplexity runs a live web search for every query, then synthesizes cited sources into a response. It shows its sources explicitly — typically 4 to 8 citations per answer. This is the most transparent engine to analyze because you can see exactly which pages it's drawing from. Perplexity tends to favor newer, clearly structured pages over older authoritative ones, because its retrieval model is asking "which page best answers this specific question right now?" rather than "which page has the most authority overall?" It cites Google Business Profile pages, individual business websites, and local directories, in roughly that order of frequency.

Google AI Overviews: The search index

Google AI Overviews runs directly on Google's search index, which means it reflects Google's own ranking signals — PageRank, E-E-A-T, structured data, local pack prominence. Businesses that rank in the local 3-pack or have strong organic rankings tend to appear in AI Overviews for the same queries. This makes Google AI Overviews the most path-dependent of the four: your existing Google SEO work carries over more directly here than anywhere else. It also means older, more established businesses tend to perform better in AI Overviews than in Perplexity or ChatGPT, simply because their legacy signals are baked into the index.

Gemini: Google Knowledge Graph

Gemini's local search answers lean heavily on the Google Knowledge Graph and Google Business Profile data. It tends to be more conservative than the other three — more likely to give a generic answer ("here are some well-reviewed optometrists in Crown Heights") with a list format populated from GBP data than to give a confident single recommendation. When Gemini does name a specific business confidently, that business typically has a well-completed GBP profile with strong review signals. Gemini is the engine most sensitive to GBP health, in my experience.

The Results: Crown Heights Optometry

The prompt: "Who is the best optometrist in Crown Heights, Brooklyn? I need someone who takes my insurance and has good reviews."

This is the category where I have the most data, because Nostrand Optical — a Signal client — is one of the businesses in this competitive set.

  • ChatGPT: Named three businesses. Nostrand Optical appeared first in two of three runs. Citations drew from Yelp, Healthgrades, and the practice's own website. The response included insurance information pulled from the site's structured content.
  • Perplexity: Named two businesses with explicit citations. Nostrand Optical was cited first in all three runs, with source links to the GBP listing and the practice's service page. The cited excerpt matched the declarative entity copy on the homepage almost verbatim.
  • Google AI Overviews: Showed a local pack-style answer with 3 businesses. Nostrand Optical appeared, but not first — a longer-established practice with more Google reviews ranked above it. The AI Overview reflected Google's organic ranking signals closely.
  • Gemini: Gave a list of 4 businesses populated from GBP data. Nostrand Optical appeared third. Response was less specific about any individual practice, more of a formatted directory output.

Key observation

The same business appeared in all four engines, but at different positions and with different citation confidence. Optimizing for Perplexity and ChatGPT required different levers than optimizing for Google AI Overviews. Getting to first in all four requires working all the layers simultaneously.

The Results: Park Slope BJJ

The prompt: "Where can I take my first BJJ class in Park Slope or nearby? I'm a complete beginner, adult."

  • ChatGPT: Named one studio and one private instructor. Brooklyn BJJ Lessons appeared as the private instruction option in all three runs. The response included a sentence matching the instructor bio on the site and noted the beginner-specific framing.
  • Perplexity: Named Brooklyn BJJ Lessons first in two of three runs. Cited the site's FAQ page directly — specifically the page answering "what should I expect at my first BJJ class." This is exactly the kind of content that Perplexity's retrieval model favors.
  • Google AI Overviews: Named two established BJJ academies with physical studio locations, neither of which was Brooklyn BJJ Lessons (a private instructor without a storefront). The Overviews answer tracked closely with the local 3-pack, which skews toward businesses with a fixed address, longer review history, and more backlinks. A private instruction business operates at a structural disadvantage here.
  • Gemini: Similar to Google AI Overviews — listed academies with GBP listings and physical addresses. Brooklyn BJJ Lessons appeared in one of three Gemini runs but not consistently.

This result illustrates an important split: Perplexity and ChatGPT were able to understand and respond to the "complete beginner, private lesson" framing in the query and match it to the right business. Google AI Overviews and Gemini defaulted to the businesses with the strongest location-based signals, which aren't necessarily the best answer to the actual question asked.

Perplexity is answering the question. Google is ranking the businesses it already knows best for the category.

Which Businesses Appeared in All Four Engines

Across the 10 prompts, only a small number of businesses appeared in all four engines for the same query. Looking at the patterns, these businesses shared a specific combination of signals:

  • A well-completed Google Business Profile with 50+ reviews and a strong star rating
  • A website with clear entity prose, structured data markup, and service-specific pages
  • Listings in at least 15–20 directories with consistent NAP (name, address, phone)
  • Some form of external citation — press mentions, blog features, or directory reviews beyond Yelp and Google
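The NAP-consistency signal in the list above is checkable mechanically. A minimal sketch, assuming simplified normalization rules and placeholder listing data (real directory audits need fuller address normalization):

```python
import re

def normalize_nap(name, address, phone):
    """Normalize a NAP record for comparison (illustrative rules only)."""
    digits = re.sub(r"\D", "", phone)[-10:]  # keep last 10 digits
    return (
        name.lower().strip(),
        re.sub(r"\s+", " ", address.lower().replace("street", "st")).strip(),
        digits,
    )

# Placeholder listings, as might be scraped from three directories.
listings = [
    ("Nostrand Optical", "123 Nostrand Ave", "(718) 555-0101"),
    ("Nostrand Optical", "123 Nostrand Ave", "718-555-0101"),
    ("Nostrand Optical Inc", "123 Nostrand Ave", "718.555.0101"),
]
normalized = {normalize_nap(*listing) for listing in listings}
consistent = len(normalized) == 1  # False here: the business name varies
```

The phone and address formats above normalize to the same record; the "Inc" suffix is the kind of inconsistency that fragments the entity across directories.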

Businesses that appeared in only one or two engines typically had a strong signal in that engine's preferred data source and weak signals elsewhere. A business with an excellent Yelp profile but a minimal website showed up in ChatGPT (Bing/Yelp-indexed) but not Perplexity. A business with good Google organic rankings showed up in Google AI Overviews but not Perplexity. Fragmented presence equals fragmented AI visibility.
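The cross-engine overlap analysis described above reduces to a set intersection. A sketch with placeholder engine results (not real audit data):

```python
# Businesses named by each engine for one prompt (placeholder data).
results = {
    "chatgpt":      {"Nostrand Optical", "Practice B", "Practice C"},
    "perplexity":   {"Nostrand Optical", "Practice B"},
    "ai_overviews": {"Nostrand Optical", "Practice C", "Practice D"},
    "gemini":       {"Nostrand Optical", "Practice B", "Practice C", "Practice D"},
}

# Businesses visible in all four engines.
everywhere = set.intersection(*results.values())

# Per-business coverage: fragmented presence shows up as a short engine list.
coverage = {
    biz: [engine for engine, names in results.items() if biz in names]
    for biz in set.union(*results.values())
}
```

In this toy data only one business lands in `everywhere`; the `coverage` map is what makes the one-engine-only gaps visible per business.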

What This Means for Optimization Strategy

The spread across engines means a single-track optimization strategy won't get you to omnipresence. Here's how I think about the four engines in terms of what to prioritize:

For Perplexity visibility

Invest in structured, declarative content on your own site. Perplexity's live web retrieval is the most responsive to content quality and freshness. FAQ sections, service pages with specific entity-declarative prose, and Schema markup on every relevant page will move your Perplexity ranking faster than anything else.
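As one illustration of the Schema markup mentioned above, here is a minimal LocalBusiness-style JSON-LD block generated from Python. The business details are placeholders, and schema.org's `Optician` type is one plausible choice for an optometry practice, not a claim about any real listing:

```python
import json

# Minimal JSON-LD payload (illustrative placeholder values).
schema = {
    "@context": "https://schema.org",
    "@type": "Optician",
    "name": "Nostrand Optical",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Brooklyn",
        "addressRegion": "NY",
    },
    "telephone": "+1-718-555-0101",
    "url": "https://example.com",
}
markup = f'<script type="application/ld+json">{json.dumps(schema)}</script>'
```

The resulting `<script>` tag goes in the page `<head>` or body; richer profiles would add `openingHours`, `aggregateRating`, and service-level properties.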

For ChatGPT visibility

Yelp, Bing, and Healthgrades (for health businesses) are your leverage points. ChatGPT's search integration leans on Bing's index, and Bing surfaces Yelp content prominently. A complete, review-rich Yelp profile with recent activity is the fastest path to ChatGPT citation for most local Brooklyn businesses.

For Google AI Overviews

Your existing Google SEO is the foundation. Local pack presence, E-E-A-T signals, and strong review velocity on GBP all carry directly into AI Overviews. Content optimization offers less incremental lift here, because most of what it would add is already captured by traditional local SEO work.

For Gemini

GBP completeness is the most direct lever. Every field in your Google Business Profile — categories, services, Q&A, photos, hours, attributes — feeds directly into Gemini's local answer quality. A stripped-down GBP profile is a Gemini visibility problem.

The audit I run for every new client

This 10-prompt, four-engine test is the first thing I do for any new client. It takes about 90 minutes to run correctly, document, and analyze. If you want to see where your business stands across all four engines — and where your competitors are filling the gaps you're leaving open — that's exactly what the free 20-minute audit covers, in live screen-share format.

The bottom line from running this test across dozens of Brooklyn businesses: you should assume you are invisible in at least one of the four engines right now, regardless of how strong your Google presence is. The engines are drawing on different data, asking different questions, and reaching different conclusions about who the right answer is for any given local query. That's not a problem you can solve with one fix. It's a problem you solve by building presence across all the layers those engines draw from.

See if AI is missing your business.

20-minute live audit. I run your business through ChatGPT, Perplexity, Google AI Overviews, and Gemini. Free. No pitch.

Book the free audit