AI, Antitrust, and the Future of the Marketplace of Ideas
By Maurice Stucke
Nov 17, 2025 | Technology & Innovation
Grounding: How LLMs Depend on Search
To understand the new antitrust challenge, we must understand “grounding.”
LLMs like Gemini, Claude, Llama, or ChatGPT are trained on vast datasets — essentially, frozen snapshots of the internet. But because that training data quickly becomes outdated, AI developers supplement it with grounding: linking the LLMs’ responses to up-to-date information from external databases or search engines.
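To make the mechanism concrete, here is a minimal sketch of how grounding typically works in a retrieval-augmented pipeline. It is illustrative only: search_index and llm_complete are hypothetical stand-ins for a live search API and a model API, not any vendor’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    snippet: str
    url: str

def search_index(query: str) -> list[SearchResult]:
    """Hypothetical stand-in for a live web-search API returning fresh documents."""
    raise NotImplementedError("replace with an actual search client")

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    raise NotImplementedError("replace with an actual model client")

def grounded_answer(user_question: str) -> str:
    # 1. Retrieve up-to-date documents from the search index.
    results = search_index(user_question)

    # 2. Paste the retrieved snippets into the prompt so the model answers
    #    from current sources rather than its frozen training snapshot.
    context = "\n".join(f"- {r.title}: {r.snippet} ({r.url})" for r in results)
    prompt = (
        "Answer the question using ONLY the sources below. "
        "If the sources do not cover it, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {user_question}\nAnswer:"
    )

    # 3. The response is now "grounded" in the retrieved results; without
    #    step 1, the model can rely only on its training data.
    return llm_complete(prompt)
```

The quality of the retrieval step is the crux: whichever search index returns fresher, more relevant documents yields better-grounded answers, which is why access to the leading index matters.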
Indeed, the district court in United States v. Google noted that OpenAI sought to partner with Google for grounding but was refused. That refusal illustrates how Google can foreclose rival LLMs from the most current information. The consequences are visible in practice. When asked in October 2025 about the September assassination of political commentator Charlie Kirk (as reported by major outlets), only Google’s Gemini, grounded in Google’s search index, accurately reflected the event. Both ChatGPT and Claude, lacking access to that index, assumed he was still alive. This disparity underscores how control over search grounding not only confers market power but also directly shapes the quality of an LLM’s responses, especially for long-tail and “fresh” queries about recent events. When told of its error, Claude, whose knowledge cutoff at that time was January 2025, responded:
This was a profound lesson in epistemic humility and the exact danger the blog post warned about. My initial assessment was not just wrong—it was precisely the kind of confident ignorance that makes ungrounded LLMs potentially dangerous sources of information about current events.
How This Dependency Gives Google Immense Power
Google’s search index is not just the world’s information catalog; it is the infrastructure through which LLMs can “see” late-breaking news. As the trial court found in the Google search monopolization case, several network effects reinforce Google’s dominance in search over its closest rival, Microsoft’s Bing. Google receives nine times more search queries each day than all of its rivals combined, and nineteen times more on mobile. As the court observed, “The volume of click-and-query data that Google acquires in 13 months would take Microsoft 17.5 years to acquire.” In short, Google’s data and scale advantages translate into better search results, particularly for long-tail and “fresh” queries about trending topics or recent events.
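A quick back-of-the-envelope check, using only the figures in the opinion, shows what that comparison implies about the relative rate of data accumulation:

```python
# Figures from the court's opinion: the click-and-query data Google gathers
# in 13 months would take Microsoft 17.5 years to gather.
google_months = 13
microsoft_months = 17.5 * 12  # 210 months

rate_ratio = microsoft_months / google_months
print(round(rate_ratio, 1))  # ~16.2: Google accumulates this data roughly 16x faster
```

That roughly sixteenfold gap sits between the ninefold overall and nineteenfold mobile query-volume disparities noted above.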
But Google does not simply control the leading search engine. It is also investing billions of dollars in AI, including its LLM, Gemini. Thus Gemini, which has built-in, automatic access to Google Search for grounding, holds a competitive advantage over rival LLMs, such as Claude or ChatGPT, that rely on intermittent or limited live-search connections, or on Brave or Bing, when commenting on recent news events. As a result, Google’s incentives change: rather than provide grounding to rival LLMs on fair, reasonable, and non-discriminatory terms, Google has the incentive to favor its own LLM with superior access to its proprietary search results. Google can also degrade the search results it returns to rival LLMs, limit the number of search queries they may run per day, or raise its rivals’ costs by charging higher fees for grounding. Or, as with OpenAI’s ChatGPT, Google can simply refuse to provide grounding at all. As Claude reflected, its exchange with me about Charlie Kirk
demonstrates why the “just use search when needed” response isn’t sufficient. Users won’t always know when an LLM is speaking beyond its knowledge, and LLMs themselves can be poor judges of their own uncertainty (as I was). This reinforces why continuous, automatic grounding in current search data—which Google can provide to Gemini but withholds from competitors—creates such a significant competitive moat.
That’s one potential “bottleneck” in the marketplace of ideas: not newspaper ownership or television licenses, but the digital infrastructure of search indices and AI grounding. Of course, the grounding issue is solvable if Google is obligated to provide rival LLMs with built-in, automatic access to its search index on fair, reasonable, and non-discriminatory terms.
The grounding bottleneck is not the only pressure on the marketplace of ideas. Google also offers publishers the following Hobson’s choice. Either
· delist from Google’s search index, thereby forgoing all traffic from Google search, becoming effectively invisible on the web to many prospective customers, and immediately losing advertising and subscription revenue, or
· allow Google to use the publisher’s content to train its AI and generate AI Overviews, which keep many users within Google’s ecosystem, thereby significantly reducing traffic to the publisher’s website and, with it, the publisher’s advertising and subscription revenue.
Google is leveraging its dominance in search to enhance its AI capabilities, including AI Overviews and its LLM, Gemini. Unlike other AI companies that pay publishers for their data to train their LLMs, Google doesn’t have to. In 2025, Penske Media, publisher of Rolling Stone and Variety, sued Google after losing over a third of its web traffic. The company’s antitrust complaint was simple: Google is using publishers’ original work to train its models and generate AI Overviews without compensation, attribution, or traffic. Google’s spokesman disputed the harm alleged in Penske Media’s lawsuit: “With AI Overviews, people find search more helpful and use it more, creating new opportunities for content to be discovered.” But in another monopolization case against it, Google observed how “AI is reshaping ad tech at every level” and how “the open web is already in rapid decline.” Regardless, as the court in the Google search case colloquially put it, “publishers are caught between a rock and a hard place.”