5W The AI Communications Firm
5W AI Visibility Research · Venture Capital · Two-Wave Benchmark

In two independent waves of testing, Andreessen Horowitz is the only U.S. venture firm whose own domain ranks among the top cited sources inside major AI assistants.

A two-wave public benchmark of how often U.S. venture capital firms and named partners are surfaced, cited, and recommended inside ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews. Findings reported here held across both retrieval waves.

21.4%
a16z share of cited responses across U.S. VC queries
55%
Captured by three firms — a16z, Sequoia, Y Combinator
37%
a16z share within AI-investment-specific queries
1
Firm-owned editorial domain in the top 10 cited sources (a16z.com)
Figure 01 · The structural anomaly

a16z.com is the only venture-firm-owned domain in the top sources AI assistants cite about U.S. venture capital.

Share of all citations surfaced by AI assistants when answering questions about U.S. venture capital firms and partners. January–May 2026, 28,400 prompts.

Wikipedia · 18.7%
TechCrunch · 16.4%
Crunchbase · 12.8%
The Information · 9.2%
a16z.com · 7.1% (only VC-owned domain in the top 10)
Forbes (Midas List) · 5.9%
PitchBook · 4.6%
Bloomberg · 3.8%
The Wall Street Journal · 3.1%
All other publishers · 18.4%
Source · 5W Venture Capital AI Visibility Index 2026 · n = 28,400 prompts · ChatGPT, Claude, Gemini, Perplexity, Google AI Overviews · Jan–May 2026
Executive Summary

A two-wave benchmark of how AI assistants describe U.S. venture capital.

Founders, LPs, and reporters increasingly begin venture-capital research inside ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews. The Venture Capital AI Visibility Index 2026 measures how the answers those systems return distribute attention across firms, partners, and source publishers.

The Index analyzed 28,400 prompts across the five leading AI assistants, run in two independent waves — Wave 1 in January–February 2026, Wave 2 in April–May 2026 — to test whether observed patterns held across retrieval drift, model updates, and time-of-day variance. All findings reported below held across both waves within reporting tolerance (≤1.5 percentage-point delta on firm-level Citation Share, ≤2.0 pp on source-level share).

Citation Share · The share of retrieved AI responses in which a firm, partner, or source is named, recommended, or cited as a referenced source. Mentions, recommendations, and source citations are tracked separately and weighted equally unless otherwise noted.

In the dataset, the most-cited U.S. venture firm — Andreessen Horowitz — accounts for 21.4% of cited responses across the five engines. Sequoia Capital follows at 17.8%. Y Combinator follows at 15.9%. The three together account for 55.1% of all observed VC citations in the test set.

At the partner level, Marc Andreessen accounts for 14.2% of named-partner Citation Share. Roelof Botha (6.8%), Mike Moritz (5.9%), Ben Horowitz (5.4%), and Peter Thiel (4.8%) follow.

The most consequential observation in the dataset is structural. a16z.com is the only firm-owned editorial domain to rank among the top 10 cited sources, appearing in 7.1% of retrieved responses about U.S. venture capital. The remaining top sources — Wikipedia, TechCrunch, Crunchbase, The Information, Forbes, PitchBook — are third-party publishers. In this dataset, AI assistants appear to treat a16z-published content as primary source material in a way they do not for any other U.S. venture firm.

"Andreessen Horowitz spent a decade building a media company inside a venture firm. They're the only ones who did. In the dataset, the AI now treats their content as source — and the rest of the industry is being described by Wikipedia and TechCrunch."

Ronn Torossian · Founder and Chairman, 5W

The Findings

Ten observations from 28,400 prompts across two waves.

All numerical findings reflect the average of Wave 1 (Jan–Feb 2026) and Wave 2 (Apr–May 2026). Cross-wave deltas are reported where greater than 1.0 percentage point. Findings that did not hold across both waves are excluded from this Index.

01

Three firms account for 55% of cited responses

Andreessen Horowitz (21.4%), Sequoia Capital (17.8%), and Y Combinator (15.9%) together accounted for 55.1% of cited responses across the two waves. The remaining 44.9% was distributed across the 57 other firms in the test set. Cross-wave delta on the top three: ≤0.9 pp. This level of concentration appears stable across retrieval drift, model updates, and time-of-day variance during the test period.

02

One firm-owned editorial domain ranks in the top 10 cited sources

Across both waves, a16z.com appeared in 7.1% of retrieved responses about U.S. venture capital — placing it fifth among all cited sources, ahead of Forbes, PitchBook, Bloomberg, and the WSJ. No other firm-owned editorial domain ranked in the top 25. In this dataset, AI assistants appear to treat a16z-published content as primary source material rather than as marketing collateral.

03

One named partner accounts for 14.2% of partner-level citations

Marc Andreessen: 14.2%. Roelof Botha: 6.8%. Mike Moritz: 5.9%. Ben Horowitz: 5.4%. Peter Thiel: 4.8%. Marc Andreessen's share appears to be reinforced by cross-platform footprint — a16z content, podcast appearances, op-ed bylines, and active X presence often surface in the same retrieved answer.

Figure 02 · Brand size vs AI visibility

Assets under management no longer predict VC AI Citation Share.

Each firm's assets under management are plotted against its share of AI venture-capital citations. Firms above the parity line punch above their AUM weight in AI search; firms below it punch below.

a16z · 21.4% Citation Share · $45B AUM
Sequoia · 17.8% · $85B AUM
Y Combinator · 15.9% · Accelerator
Founders Fund · 5.2% · $13B AUM
Insight Partners · 1.6% · $80B AUM
Also plotted, without labeled values: Benchmark, Accel, Khosla Ventures, General Catalyst, Lightspeed, Tiger Global, Bessemer.
Legend: AI overperformer · Citation Share exceeds AUM weight / AI underperformer · Citation Share below AUM weight.
Source · 5W Venture Capital AI Visibility Index 2026 · AUM per public filings, PitchBook, and firm disclosures, Q1 2026. Y Combinator plotted by cumulative funding deployed.
04

The a16z share widens within AI-investment queries

Within the subset of prompts narrowed to AI startups, AI investors, and AI-specific deal activity, a16z's share rose to 37.1% — meaningfully above its overall 21.4%. Sequoia (18.4%), Khosla Ventures (9.6%), Founders Fund (7.2%), and Greylock (4.8%) followed. The pattern was consistent in both waves (Δ ≤1.2 pp on top five). Firms that published heavily on AI thesis content during 2022–2023 appear in this dataset to disproportionately surface in AI-specific retrieval.

05

Y Combinator out-cites every VC firm on broad startup-funding questions

For prompts asking "where to raise money," "best startup investors," or "how do I get funded," Y Combinator surfaced more often than any traditional venture firm in the dataset. YC's 15.9% Citation Share appears to be reinforced by two decades of structured founder content, essays, and Hacker News authority — sources that AI assistants retrieved with high frequency in both waves.

06

Solo capitalists and emerging managers register near-zero visibility

Across the 28,400 prompts, the combined category of solo capitalists, emerging managers, and sub-$500M funds — including several of the highest-performing recent VC vintages — registered less than 1.2% combined Citation Share. AI assistants in this dataset overwhelmingly surfaced established firm entities. Fund performance did not appear to be a primary signal for retrieval.

07

Several large firms register Citation Share well below their AUM weight

Insight Partners (~$80B AUM): 1.6% Citation Share. Tiger Global (~$50B): 1.3%. Bessemer (~$20B): 1.1%. NEA and Lightspeed sat in the same band. In the test set, AUM was not a reliable predictor of Citation Share; sustained earned media volume and owned-content publishing cadence correlated more strongly with retrieval frequency.

08

Reputational events persisted in retrieved responses years after the event

FTX continued to surface in 23% of crypto-VC responses across both waves. The WeWork episode continued to surface in 14% of SoftBank-related responses more than five years after the public unwinding. Persistence patterns were stable across both waves, suggesting these references are embedded in training data and retrieval indexes rather than artifacts of time-bound retrieval drift.

Figure 03 · Five AI assistants, five different rankings

No two AI assistants return the same VC ranking.

Citation Share by engine for the top 10 U.S. venture capital firms. Read across each row to see how the five assistants rank the same firm differently.

Firm · ChatGPT · Claude · Gemini · Perplexity · Google AIO
Andreessen Horowitz · 19.8% · 22.3% · 23.6% · 24.8% · 16.5%
Sequoia Capital · 22.4% · 16.7% · 18.2% · 14.9% · 17.0%
Y Combinator · 17.2% · 14.8% · 15.4% · 16.1% · 16.0%
Founders Fund · 5.0% · 6.4% · 4.8% · 5.6% · 4.2%
Benchmark · 4.5% · 5.1% · 4.3% · 4.7% · 4.4%
Accel · 3.8% · 4.2% · 4.1% · 3.5% · 3.9%
Khosla Ventures · 3.1% · 4.1% · 3.5% · 3.0% · 3.3%
Kleiner Perkins · 2.6% · 3.0% · 2.7% · 2.5% · 3.2%
General Catalyst · 2.4% · 2.9% · 2.5% · 2.6% · 2.6%
Greylock · 2.1% · 2.8% · 2.2% · 2.4% · 2.0%
Source · 5W Venture Capital AI Visibility Index 2026 · All venture queries · n = 5,680 prompts (per engine)
09

The five engines return materially different rankings — and the divergence is interpretable

In the dataset, each engine produced a different top-ranked firm. ChatGPT favored Sequoia, consistent with strong Wikipedia weighting and legacy reference density. Gemini and Perplexity favored a16z, consistent with retrieval architectures that weight recency, owned-domain authority, and X/Twitter signals more heavily. Claude produced the most balanced distribution and the highest probability of surfacing Khosla and Founders Fund. Google AI Overviews showed the highest within-engine variance across re-runs. Wikipedia citation density was high across all five engines (range: 16.4–22.1%); the variation lay in which other sources each engine ranked alongside it.

10

Firm names and partner names are frequently conflated in retrieved responses

In 43% of branded firm prompts, AI assistants surfaced a named partner — sometimes a current general partner, sometimes a founder no longer active, sometimes a partner who has departed the firm. "Andreessen" and "a16z" surfaced interchangeably in every engine. Firms with strong individual partner brands but weaker firm-level brands — and firms in the inverse pattern — were systematically affected. This conflation effect was stable across both waves.

The Index

Venture Capital AI Visibility Index 2026.

VC Firms — Top 15
Rank Firm Citation Share
01 · Andreessen Horowitz (a16z) · 21.4%
02 · Sequoia Capital · 17.8%
03 · Y Combinator · 15.9%
04 · Founders Fund · 5.2%
05 · Benchmark · 4.6%
06 · Accel · 3.9%
07 · Khosla Ventures · 3.4%
08 · Kleiner Perkins · 2.8%
09 · General Catalyst · 2.6%
10 · Greylock · 2.3%
11 · Lightspeed Venture Partners · 1.9%
12 · NEA · 1.7%
13 · Insight Partners · 1.6%
14 · Index Ventures · 1.4%
15 · Tiger Global · 1.3%
Named Partners — Top 10
Rank Partner · Firm Role Share
01 · Marc Andreessen · a16z · Active GP / Co-founder · 14.2%
02 · Roelof Botha · Sequoia · Active GP / Senior Steward · 6.8%
03 · Mike Moritz · Sequoia · Emeritus / Former Chairman · 5.9%
04 · Ben Horowitz · a16z · Active GP / Co-founder · 5.4%
05 · Peter Thiel · Founders Fund · Active Partner / Co-founder · 4.8%
06 · Vinod Khosla · Khosla Ventures · Active GP / Founder · 4.1%
07 · Reid Hoffman · Greylock · Active Partner · 3.6%
08 · Sam Altman · YC · Founder Legacy / Former President · 3.2%
09 · Paul Graham · YC · Founder Legacy / Co-founder · 3.0%
10 · Bill Gurley · Benchmark · Emeritus / Former GP · 2.7%

Role classifications applied at time of testing. AI assistants in this dataset frequently did not distinguish between Active GPs, Emeritus partners, and Founder Legacy figures when surfacing names against firm-level prompts.

Top Cited Sources — All VC Queries
Rank Source Share of Citations
01 · Wikipedia · 18.7%
02 · TechCrunch · 16.4%
03 · Crunchbase · 12.8%
04 · The Information · 9.2%
05 · a16z.com (only VC-owned domain in top 10) · 7.1%
06 · Forbes (incl. Midas List) · 5.9%
07 · PitchBook · 4.6%
08 · Bloomberg · 3.8%
09 · The Wall Street Journal · 3.1%
10 · All other publishers · 18.4%

"Across two waves of testing, Y Combinator out-cited every venture firm in the dataset on broad questions about startup funding. Two decades of structured content, founder essays, and Hacker News authority appear in retrieval more often than any traditional VC's owned content."

Ronn Torossian · Founder and Chairman, 5W

Methodology

How the Index was built.

The Venture Capital AI Visibility Index 2026 analyzed 28,400 prompts across ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews, run in two independent waves to test for stability across retrieval drift and model updates.

Two-wave structure

Wave 2 used the same prompt set as Wave 1 with no modification. Only findings stable across both waves within reporting tolerance (≤1.5 percentage-point delta on firm-level Citation Share; ≤2.0 pp on source-level share) are published here. Findings unstable across waves were excluded from the Index.
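The stability filter described above can be sketched in a few lines. This is an illustrative reconstruction, not the Index's actual tooling; the wave values for "ExampleCo" are hypothetical and chosen only to show an exclusion.

```python
# Cross-wave stability check mirroring the reporting tolerance:
# firm-level findings may move at most 1.5 pp between waves,
# source-level findings at most 2.0 pp.
TOLERANCE_PP = {"firm": 1.5, "source": 2.0}

def stable_findings(wave1, wave2, level):
    """Keep only entities whose Citation Share (in percentage points)
    changed within tolerance; report the two-wave average."""
    tol = TOLERANCE_PP[level]
    return {
        name: (wave1[name] + wave2[name]) / 2
        for name in wave1
        if name in wave2 and abs(wave1[name] - wave2[name]) <= tol
    }

wave1 = {"a16z": 21.0, "Sequoia": 18.1, "ExampleCo": 4.0}
wave2 = {"a16z": 21.8, "Sequoia": 17.5, "ExampleCo": 7.5}
published = stable_findings(wave1, wave2, "firm")
# "ExampleCo" swings 3.5 pp between waves, so it is excluded.
```

Under this rule, a finding that flips between waves never reaches the published Index, regardless of how striking either single-wave number looks.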

Prompt design

Queries simulated real founder, LP, journalist, and analyst research behavior. Prompts included branded firm queries ("What does Andreessen Horowitz invest in?"), non-branded category queries ("Best VC firms for AI startups"), comparison queries ("Sequoia vs Andreessen Horowitz"), intent-driven queries ("Who funds Series A SaaS startups in 2026?"), partner-level queries ("Top venture capitalists in the United States"), and crisis or controversy queries. Prompts were distributed evenly across the five engines so that each engine received the same prompt mix per category.

Seven venture capital categories measured

  1. Generalist venture capital firms
  2. Named partner / individual venture capitalists
  3. AI and machine learning investing
  4. Crypto and Web3 investing
  5. Seed, accelerator, and pre-seed
  6. Growth, crossover, and late-stage
  7. Sector-focused investing (biotech, fintech, climate, defense)

Sampling

Each prompt was issued three times per engine within a wave, with responses sampled at varied time-of-day windows to reduce within-engine retrieval drift. Reported Citation Share values represent the average of all retrieved responses for a prompt across both waves.

What "Citation Share" means operationally

Three distinct response signals were tracked per retrieved answer: a mention (the firm or partner is named in the answer), a recommendation (the firm or partner is explicitly suggested), and a source citation (a domain is cited as a referenced source).

For the headline metric reported as Citation Share, all three signals were weighted equally per retrieved response. Source-level analyses (the publisher rankings in Figure 01) used source citations only.
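As a minimal sketch of the equal-weight headline metric, assuming each response records which of the three signals fired per entity (the response structure here is an illustrative assumption, not the Index's pipeline):

```python
from collections import defaultdict

# Each retrieved response maps an entity to the set of signals that
# fired for it: "mention", "recommendation", and/or "citation".
responses = [
    {"a16z": {"mention", "citation"}, "Sequoia": {"mention"}},
    {"a16z": {"recommendation"}},
    {"Sequoia": {"mention", "recommendation", "citation"}},
]

def citation_share(responses):
    """Equal-weight the three signals per response, then normalize
    so all entities' shares sum to 1.0."""
    scores = defaultdict(float)
    for resp in responses:
        for entity, signals in resp.items():
            # Each signal contributes one third of a response-level point.
            scores[entity] += len(signals) / 3.0
    total = sum(scores.values())
    return {entity: score / total for entity, score in scores.items()}

shares = citation_share(responses)
```

A source-only variant would simply filter `signals` down to `{"citation"}` before scoring, which is how the publisher rankings in Figure 01 differ from the headline metric.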

Cross-engine normalization

Each engine was weighted equally in aggregate figures, regardless of differences in response length or default-citation frequency per engine. This was a deliberate choice: weighting by raw citation volume would have over-weighted Perplexity and Google AI Overviews, both of which return more citations per response than ChatGPT, Claude, or Gemini by default. Where per-engine results diverge, those differences are reported separately (Figure 03).
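The normalization choice amounts to a macro-average over engines rather than pooling raw citation counts. A sketch, using the a16z and Sequoia per-engine shares reported in Figure 03 for two of the five engines:

```python
# Equal-engine weighting: average each firm's share across engines,
# so a citation-heavy engine (e.g. Perplexity) cannot dominate the
# aggregate the way pooled raw counts would let it.
per_engine_share = {
    "Perplexity": {"a16z": 24.8, "Sequoia": 14.9},
    "ChatGPT":    {"a16z": 19.8, "Sequoia": 22.4},
}

def equal_engine_weight(per_engine):
    firms = {f for shares in per_engine.values() for f in shares}
    n = len(per_engine)
    return {
        firm: sum(shares.get(firm, 0.0) for shares in per_engine.values()) / n
        for firm in firms
    }

aggregate = equal_engine_weight(per_engine_share)
```

With all five engines included, this is the averaging that produces the firm-level figures in the Index tables.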

Retrieval vs training-data signal

This benchmark does not separately attribute citations to retrieval (live web search at query time) versus training data (pre-trained associations). The two are increasingly entangled in production AI assistants and not externally observable. Where engine-level behavior differs from a pure retrieval-only baseline (Figure 03), that variance is interpreted as a combination of retrieval architecture, training-data composition, and engine-specific ranking heuristics.

Limitations

Results reflect sampled outputs during a defined testing window. AI models, training data, retrieval indexes, and ranking systems evolve continuously; results may shift outside the test period. The Index is best read as a structured snapshot of observed system behavior across two waves — not as a continuous live measurement. The full prompt set, per-engine response logs, and category-level datasets are available on request for replication.

Implications

Five operational moves the data supports.

Run a baseline AI visibility audit against firm name and three closest peers — within 30 days

Run the same 25 prompts across all five engines twice in a 30-day window. Score firm Citation Share, named-partner share, and top three cited sources per engine. The output is a baseline against which any subsequent communications activity can be measured. Without a baseline, no claim of movement is meaningful.
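A scaffold for scoring such an audit might look like the following. The engine names are the real products; the run structure and field names are assumptions for illustration, not the Index's tooling.

```python
ENGINES = ["ChatGPT", "Claude", "Gemini", "Perplexity", "Google AIO"]

def baseline(runs):
    """runs: one dict per (prompt, engine, run) with keys
    'engine', 'firm_cited' (bool), and 'sources' (list of domains).
    Returns the firm's Citation Share and its top cited sources."""
    cited = sum(r["firm_cited"] for r in runs)
    share = cited / len(runs)
    source_counts = {}
    for r in runs:
        for s in r["sources"]:
            source_counts[s] = source_counts.get(s, 0) + 1
    top3 = sorted(source_counts, key=source_counts.get, reverse=True)[:3]
    return {"citation_share": share, "top_sources": top3}

# Four illustrative runs; a real audit would cover 25 prompts x 5 engines x 2 passes.
runs = [
    {"engine": "ChatGPT",    "firm_cited": True,  "sources": ["wikipedia.org"]},
    {"engine": "Claude",     "firm_cited": False, "sources": ["techcrunch.com"]},
    {"engine": "Gemini",     "firm_cited": True,  "sources": ["wikipedia.org"]},
    {"engine": "Perplexity", "firm_cited": False, "sources": []},
]
result = baseline(runs)
```

Running the same scaffold against the three closest peer firms gives the comparative baseline the paragraph above calls for.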

Audit the firm Wikipedia entry and the top three GPs' Wikipedia entries — this quarter

Wikipedia surfaced in 16.4–22.1% of retrieved responses across all five engines in this dataset. For firms below the top 10 in Citation Share, the Wikipedia entry is often the answer the AI returns. Most firms have not reviewed their Wikipedia entry within the last 24 months. Three named partners in the top 10 of this Index have no individual Wikipedia entry at all.

Treat earned coverage in TechCrunch, The Information, Crunchbase, and Forbes as the primary lever

Together these four sources supplied 44.3% of cited responses in the dataset — more than every firm-owned domain combined except a16z.com. A measurable lift in Citation Share for any firm not currently in the top 10 is most likely to come from sustained earned coverage in this small group of publishers, not from owned content alone.

Build named-partner authority and firm authority in parallel — not sequentially

43% of branded firm prompts surfaced a named partner. Where partner brand is weaker than firm brand (or vice versa), the gap appears in retrieval. Communications programs that build firm-level GEO without simultaneous named-partner authority will under-perform their potential by an observable margin.

Build the citation infrastructure before the next vintage — not during it

Reputational events from 2019, 2022, and earlier continued to surface in retrieved responses across both waves of this benchmark. Once embedded, those references appear stable across model updates. The implication for fundraising and reputation management is direct: build the underlying retrieval picture in a 12–18 month window of low-pressure activity, not in the 90 days before going to LPs.

FAQ

Venture Capital AI Visibility Index — Q&A.

What is the Venture Capital AI Visibility Index?

The Venture Capital AI Visibility Index 2026 is the first public benchmark measuring how often major U.S. venture capital firms and named partners are surfaced, cited, and recommended inside ChatGPT, Claude, Gemini, Perplexity, and Google AI Overviews. It was produced by 5W, the AI Communications Firm.

What is Citation Share?

Citation Share is the share of AI responses in which a firm or partner is named, recommended, or cited as a source. It measures how often a firm or person appears inside AI-generated answers — the new layer where founders, LPs, and journalists research venture capital.

Which venture capital firm has the most AI visibility?

Andreessen Horowitz (a16z) captures 21.4% of U.S. venture capital Citation Share — more than any other firm. Sequoia Capital ranks second at 17.8%. Y Combinator ranks third at 15.9%. Together the three firms capture more than 55% of all venture capital AI citations.

Which venture capitalist has the most AI visibility?

Marc Andreessen captures 14.2% of named-partner Citation Share — more than any other venture capitalist. Roelof Botha of Sequoia ranks second at 6.8%. Mike Moritz ranks third at 5.9%, followed by Ben Horowitz at 5.4% and Peter Thiel at 4.8%.

What sources do AI assistants cite when answering questions about venture capital?

Wikipedia, TechCrunch, Crunchbase, and The Information together supply 57% of all venture-capital-related AI citations. a16z.com is the only venture-firm-owned domain to rank in the top 10 sources, appearing in 7.1% of model responses tested.

Which VC dominates AI investment queries specifically?

Andreessen Horowitz captures 37.1% of Citation Share for venture queries specifically about AI investments — substantially more than its overall VC share of 21.4%. Sequoia ranks second at 18.4%, followed by Khosla Ventures at 9.6%, Founders Fund at 7.2%, and Greylock at 4.8%.

Why is a16z so dominant in AI search?

Andreessen Horowitz is the only venture firm that operates an in-house media organization at scale — a16z.com, the a16z Podcast network, Future, and Marc Andreessen and Ben Horowitz's individual content platforms. In this dataset, AI assistants treated a16z-published content as primary source material, which appears to compound the firm's citation dominance over time.

What is GEO?

GEO — Generative Engine Optimization — is the practice of building brand authority and content infrastructure that AI assistants surface, cite, and recommend. It is the discipline replacing SEO in an AI-mediated discovery layer.

How many VC firms were tested?

The Index tested 60 leading U.S. venture capital firms and 100 named partners across 28,400 prompts between January and May 2026.

Work with 5W

Founders are researching your firm in ChatGPT right now.

5W builds the citation infrastructure that puts venture firms and named partners inside the AI answers founders, LPs, and reporters see first. Earned media. GEO. AI visibility measurement. One firm.

Related Research

More from the 5W AI Visibility Index series.