The Industry Told You How to Get Cited by AI. We Tested It. They Were Wrong.

AI Index Lab publishes independent, controlled research on what actually predicts AI citation. Our first study analysed 892 URLs across 50 queries and 10 verticals — comparing pages AI chose to cite against pages it ignored for the same queries. The results challenge everything the Answer Engine Optimization industry assumes.

Common AEO advice says to add more images, more external links, more structured data, and author bylines. Our controlled study found none of these signals predict AI citation once you control for topic and discoverability. The only statistically significant signals were negative — pages with author tags and publish dates were less likely to be cited.

Why This Research Matters

Billions of dollars are being spent on AI search optimization based on assumptions, not evidence. SEO agencies are selling AEO services built on intuition borrowed from traditional search. AI Index Lab exists to replace that guesswork with controlled data. We publish weekly intelligence briefs (free) and monthly research reports so marketing teams can make evidence-based decisions about AI visibility.

Research Report #001: What Actually Predicts AI Citation?

Our founding study used a two-phase methodology. Phase 1 collected 646 AI-cited URLs from Perplexity and Claude across 50 informational queries. Phase 2 introduced controls — for each query, we collected top search results and compared cited pages against uncited pages that ranked for the same topic.

Key Finding: On-Page Signals Collapsed With Controls

Signal               Phase 1 (Uncontrolled)   Phase 2 (Controlled)   p-value
Images               +433%                    +5%                    0.685
External Links       +278%                    +2%                    0.922
Word Count           +83%                     -13%                   0.264
Statistical Claims   +33%                     +38%                   0.152

Phase 1 differences were measuring characteristics of high-quality web pages in general — not predictors of AI citation specifically.

Statistically Significant Findings (All Negative)

Signal               Cohen's d   p-value   Direction
Author Attribution   -0.31       0.0002    Cited pages have LESS
Publish Date         -0.23       0.004     Cited pages have LESS
Schema Type Count    -0.17       0.032     Cited pages have LESS
Open Graph Tags      -0.18       0.030     Cited pages have LESS

Every statistically significant signal was negative. AI-cited pages have less metadata markup, not more. This directly contradicts standard AEO advice.

Our Methodology

AI Index Lab uses controlled comparison methodology, the same case-control logic used in epidemiology and peer-reviewed research. We don't just study AI-cited pages in isolation. We compare them against pages that answer the same question but weren't cited. This controls for topic, discoverability, and baseline content quality.
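The pairing logic behind this design can be illustrated in a few lines. This is a toy sketch, not the study's actual pipeline: the record fields (`query`, `cited`, `images`) are hypothetical stand-ins for whatever signals are being measured.

```python
# Hypothetical records: each page is tagged with the query it ranked for
# and whether the AI engine cited it for that query.
pages = [
    {"query": "q1", "cited": True,  "images": 12},
    {"query": "q1", "cited": False, "images": 3},
    {"query": "q2", "cited": True,  "images": 2},
    {"query": "q2", "cited": False, "images": 8},
]

def within_query_win_rate(pages, signal):
    """Fraction of queries where cited pages average higher on `signal`
    than uncited pages ranking for the same query."""
    wins, total = 0, 0
    for q in {p["query"] for p in pages}:
        cited   = [p[signal] for p in pages if p["query"] == q and p["cited"]]
        uncited = [p[signal] for p in pages if p["query"] == q and not p["cited"]]
        if cited and uncited:
            total += 1
            if sum(cited) / len(cited) > sum(uncited) / len(uncited):
                wins += 1
    return wins / total

print(within_query_win_rate(pages, "images"))  # 0.5 on this toy data
```

Because every comparison happens inside a single query's competitive set, topic and discoverability are held constant by construction.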

Statistical Rigour

  • Welch's t-test for unequal variance group comparison
  • Cohen's d for standardised effect sizes
  • Per-query win rate analysis (within-query paired comparisons)
  • Bonferroni correction for multiple comparisons (24 simultaneous tests)
  • Full limitations disclosure including sample size power analysis
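The core statistics above can be sketched with the Python standard library alone. This is an illustrative reimplementation with toy numbers, assuming the conventional textbook formulas; it is not AI Index Lab's analysis code.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic: compares group means without assuming equal variance."""
    return (mean(a) - mean(b)) / math.sqrt(variance(a) / len(a) + variance(b) / len(b))

def cohens_d(a, b):
    """Standardised effect size using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def bonferroni_alpha(alpha=0.05, n_tests=24):
    """Per-test significance threshold when running n_tests simultaneous tests."""
    return alpha / n_tests

# Toy data: one signal measured on cited vs uncited pages.
cited   = [1.0, 2.0, 3.0]
uncited = [4.0, 5.0, 6.0]
print(round(cohens_d(cited, uncited), 2))   # -3.0 on this toy data
print(round(bonferroni_alpha(), 4))         # 0.0021
```

With 24 simultaneous tests, the Bonferroni-corrected threshold is roughly p < 0.0021, which is why a raw p-value of, say, 0.03 does not survive correction.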

We publish our limitations alongside our findings. We list alternative explanations for every significant result. We report null results explicitly — because knowing what doesn't work is as valuable as knowing what does.

Weekly Intelligence & Research Subscriptions

Free — Weekly Email Digest

Key findings, industry news, and highlights from our latest research. Delivered every week to keep you informed about what's changing in AI search. No spam, no sales pitches — just data.

Practitioner — $49/month

Full monthly research reports with complete methodology, data tables, statistical analysis, and practitioner implications. Understand exactly what the data says and what it means for your strategy.

Agency — $149/month

Everything in Practitioner plus client-ready report templates, early access to studies before public release, quarterly deep-dive reports, and agency briefing calls. Position your agency as evidence-driven.

Enterprise — $499/month

Everything in Agency plus raw datasets for your own analysis, custom research requests, API access to the AI Citation Index (ACI), and dedicated research support. For teams building products and strategies on AI citation data.

Frequently Asked Questions

What is AI Index Lab?

AI Index Lab is an independent research organisation that publishes controlled studies on what actually predicts AI citation. We study how AI systems like ChatGPT, Perplexity, and Google AI Overviews choose which sources to cite, using rigorous statistical methodology rather than assumptions.

What did your first study find?

Our controlled study of 892 URLs found that common AEO recommendations may be misguided. Signals like images (+433% in uncontrolled analysis) collapsed to +5% (p=0.685) when compared within the same competitive set. The only statistically significant signals were negative — author tags and publish dates were both lower on AI-cited pages. This suggests AI citation operates at the domain level, not the page level.

How is this different from SEO tools?

SEO tools measure traditional search signals like backlinks, keyword rankings, and domain authority. AI Index Lab specifically researches what predicts AI citation — a fundamentally different question. Our controlled studies have already shown that many assumptions borrowed from SEO don't hold up when tested against AI citation data.

How often do you publish research?

Weekly intelligence briefs (free for all subscribers), monthly research reports with full methodology, and quarterly deep-dive studies. Our research cadence is designed to keep pace with the rapidly evolving AI search landscape.

Can I use your research with my clients?

Free subscribers can cite our published findings with attribution. Agency and Enterprise subscribers get client-ready report templates, white-label options, and early access to share with clients before public release.

Why Independent Research Matters

The AI search optimisation industry is making recommendations based on correlation, intuition, and analogy with traditional SEO. Nobody was running controlled studies. AI Index Lab was founded to change that.

Our first study demonstrated the problem: signals that looked hugely important in uncontrolled analysis (images +433%, external links +278%) collapsed to statistical noise when we introduced proper controls. Without controlled methodology, the industry is optimising for signals that don't actually predict citation.

We publish our methodology in full. We disclose our limitations. We list alternative explanations. We report null results. This is what separates research from marketing — and it's why agencies and enterprises trust our data to inform their strategies.

Upcoming Research

Report #002: Domain-Level Analysis (May 2026)

Our page-level study found that on-page signals explain very little once discoverability is controlled. The natural next question: does domain authority, backlink profile, or brand entity recognition predict AI citation when page-level signals don't? Report #002 will investigate domain-level factors across the same query set.

Report #003: AI Engine Comparison (June 2026)

Do ChatGPT, Perplexity, Claude, and Google AI Overviews cite the same sources? Or do they have distinct source preferences? We'll compare citation patterns across AI engines to understand whether optimisation should be engine-specific or universal.