Some Thoughts on Antitrust, Neutrality, and Universal Search


Cross-posted from the Antitrust and Competition Policy Blog

Update: I’ve written an extended version of this post for The Society for Computers and Law.

The heart of the gathering antitrust case against Google appears to be that it sometimes “manipulates” the order in which it presents search results, in order to promote its own services or to demote competitors. The argument has intuitive appeal in light of Google’s many representations that its rankings are calculated “automatically” and “objectively,” rather than reflecting “the beliefs and preferences of those who work at Google.” But as a footing for legal intervention, manipulation is shaky ground. The problem is that one cannot define “manipulation” without some principled conception of the baseline from which it is a deviation. To punish Google for being non-neutral, one must first define “neutral,” and this is a surprisingly difficult task.

In the first place, search engines exist to make distinctions among websites, so equality of outcome is the wrong goal. Nor is it possible to say, except in extremely rare cases (such as, perhaps, “4263 feet in meters”), what the objectively correct best search results are. The entire basis of search is that different users have different goals, and the entire basis of competition in search is that different search engines have different ways of identifying relevant content. Courts and regulators who attempt to substitute their own judgments of quality for a search engine’s are likely to do worse by its users.
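(For what it’s worth, that query really is one of the rare ones with a single right answer: a foot is defined as exactly 0.3048 meters, so 4263 feet × 0.3048 = 1299.3624 meters. Almost nothing people actually search for works this way.)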

Neutrality, then, must be a process value: even-handed treatment of all websites, whether they be the search engine’s friends or foes. Call this idea “impartiality.” (Tarleton Gillespie suggested the term to me in conversation.) The challenge for impartiality is that search engines are constantly revising how they make distinctions among websites (Google alone makes hundreds of changes a year).

A strong version of impartiality would be akin to Rawls’s veil of ignorance: algorithmic changes must be made without knowledge of which websites they will help and hurt. This is probably a bad idea. Consider the DecorMyEyes scam: an unethical glasses merchant deliberately sought out scathing reviews from furious former customers, because the attention qua attention boosted his search rank. Google responded with an algorithmic tweak specifically targeted at websites like his. Strong impartiality would break the feedback loops that let search engines find and fix their mistakes.
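To make that concrete, here is a minimal sketch of the shape such a targeted fix might take. It is purely illustrative: the signals, threshold, and weighting are my inventions, not Google’s. The point is that writing the fix at all requires knowing which kind of website it will demote, which is exactly the knowledge strong impartiality would withhold.

```python
# Hypothetical sketch of a DecorMyEyes-style fix: stop counting attention
# as endorsement when the attention is overwhelmingly negative.
# All signals, weights, and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    base_score: float        # relevance score from the general ranking function
    inbound_links: int       # total inbound links ("attention")
    negative_mentions: int   # inbound links from complaints, scathing reviews, etc.

def adjusted_score(page: Page) -> float:
    """Demote pages whose inbound attention is mostly negative."""
    if page.inbound_links == 0:
        return page.base_score
    negativity = page.negative_mentions / page.inbound_links
    if negativity > 0.5:
        # Mostly-negative attention scales the score down proportionally.
        return page.base_score * (1.0 - negativity)
    return page.base_score

honest = Page("goodglasses.example", base_score=0.8, inbound_links=200, negative_mentions=10)
scammer = Page("decormyeyes.example", base_score=0.9, inbound_links=500, negative_mentions=450)
print(round(adjusted_score(honest), 3))   # 0.8  (unchanged)
print(round(adjusted_score(scammer), 3))  # 0.09 (penalized)
```

Notice that this function cannot be written behind a veil of ignorance: its whole purpose is encoded in knowledge of whom it will hurt.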

Instead, then, the anti-manipulation case hinges on a weaker form of impartiality, one that prohibits only those algorithmic changes that favor Google at the expense of its competitors. Here, however, it confronts one of the most difficult problems of high-technology antitrust: weighing pro-competitive justifications and anti-competitive harms in the design of complicated and rapidly changing products. Many self-serving innovations in search also have obvious user benefits.

One example is Google’s treatment of product-search sites like Foundem and Ciao. Google has admitted that it applies algorithmic penalties to price-comparison sites. This may sound like naked retaliation against competitors, but the sad truth is that most of these “competitors” are threats only to Google’s users, not to Google itself. There are some high-quality product-search sites, but also hundreds of me-too sites with interchangeable functionality and questionable graphic design. When users search for a product by its name, these me-too sites are trying to reintermediate a transaction that has very little need of them. Ranking penalties directed at this category share some of the pro-consumer justification of Google’s recent moves against webspam.

A slightly different practice is Google’s increasing use of what it calls Universal Search, in which it offers news, image, video, local, and other specialized search results on the main results page, intermingled with the classic “ten blue links.” Since Google has competition in all of these specialized areas, Universal Search favors Google’s own services over competitors’. Universal Search is an obvious departure from neutrality, whatever your baseline—but is it bad for consumers? The inclusion of maps and local results is an overwhelming positive: it saves users a click and helps them get the address they’re looking for more directly. Other integrations, such as Google’s attempts to promote its Google+ social network by integrating social results, are more ambiguous. Some integration rather than none is almost certainly the best overall design, and any attempt to draw a line defining which integration is permissible will raise sharp questions about regulatory competence.

Some observers have suggested not that Google be prohibited from offering Universal Search, but that it be required to modularize the components, so that users could choose which source of news results, map results, and so on would be included. This idea is structurally elegant, but in-house integration also has important pragmatic benefits. Google and Bing don’t just decide which map results to show; they also decide when to show map results at all, and how the likely quality of any given map result compares with other possible results. These comparative quality assessments don’t work with third-party plugin services.
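A sketch may help show why. Assume, purely for illustration, that every candidate result carries a calibrated score on one shared scale, meaning roughly “how likely this result is to satisfy the query.” Blending then looks something like this (the names and thresholds are hypothetical):

```python
# Illustrative sketch of Universal Search blending; not any engine's real design.
# The crucial assumption: 'score' is comparable across web and vertical results.

from typing import NamedTuple

class Result(NamedTuple):
    source: str    # "web", "maps", "news", ...
    title: str
    score: float   # confidence this result satisfies the query, on one shared scale

def blend(web_results: list[Result],
          vertical_results: dict[str, list[Result]],
          threshold: float = 0.6,
          page_size: int = 10) -> list[Result]:
    """Interleave vertical results with the classic web links by score.
    A vertical appears only when its best result clears the threshold;
    this is how the engine decides *when* to show maps, news, and so on."""
    pool = list(web_results)
    for results in vertical_results.values():
        if results and max(r.score for r in results) >= threshold:
            pool.extend(results)
    return sorted(pool, key=lambda r: r.score, reverse=True)[:page_size]
```

The catch is the shared scale. An in-house maps backend can be calibrated so that its scores are comparable to the web results’; a third-party plugin’s scores arrive on a scale of its own, so both the threshold test and the final sort lose their meaning. That is the pragmatic obstacle to modularization.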

It makes sense for general-purpose search engines to turn their expertise to specialized search as well. Once they do, it makes sense for them to return their own specialized results alongside their general-purpose results. And once they do that, it also makes sense for them to invite users to click through to their specialized subsites to explore the specialized results in more depth. All of these moves are so immediately beneficial to users that regulators concerned about Universal Search should tread with great caution.

For more on these issues, see my papers Some Skepticism About Search Neutrality, The Google Dilemma, and The Structure of Search Engine Law.