Some Skepticism About Search Neutrality

I’m happy to announce that my latest piece, Some Skepticism About Search Neutrality, has just been published in The Next Digital Decade, a new volume of essays about Internet policy. As the title suggests, it’s a contribution to the discussion over “search neutrality,” the idea that search engines should be legally required to exercise some form of even-handed treatment of the websites they rank. It’s been a major topic in the news recently, particularly with the EU antitrust investigation. Websites like Foundem and SourceTool have been joined by independent critics like Consumer Watchdog and a growing number of academics. They share a sense that dominant search engines, especially Google, have too much power to be allowed to be anything but neutral.

Having now read and thought through the academic and popular arguments for search neutrality, I’m skeptical. The problem is that no one has offered a good definition of what it would mean for a search engine to be truly “neutral.” In working through the search neutrality literature, I came across eight different possible meanings. Not one of them works. Some, like equality, the idea that search engines shouldn’t differentiate between websites, are simply incoherent: people use search engines because they make useful distinctions between websites. Others, like relevance, the idea that search engines should try to maximize user satisfaction, are too vague to be meaningfully enforceable by regulators. Worst of all are the proposals, like objectivity, the idea that search engines should return only “good” search results, that would dictate what search users are and aren’t allowed to see, rather than letting them choose for themselves.

Search neutrality may have noble goals, but it could do a great deal of harm to the Internet. Spammers and black-hat search-engine-optimizers would love it if Google were required to use a uniform, fully transparent algorithm. Low-quality websites would love to cry “search neutrality” any time they lose in the rankings to better websites that users like more. In both cases, search engine users would be the real losers.

This isn’t an across-the-board defense of search engines. They raise other, legitimate issues: antitrust, copyright, and stealth marketing, to name just a few. But I’m unconvinced that search neutrality is one of them. It takes attention away from the real issues at stake; it substitutes unhelpful and confused tests for careful analysis under better-established bodies of law.

The book is available either as a free download or in hard copy. I’ve put my chapter online as a PDF with my usual Creative Commons license. I’ve also prepared an HTML version with a slightly updated bibliography. I hope you’ll read one of them and join the conversation. (At the very least, find out why I start by quoting Sergey Brin, Jonathan Edwards, and Voltaire.) As always, I value your thoughts and comments.

I’m relatively new to the search neutrality debate, but the little I’ve been exposed to is pushing me toward the belief that it might be better in this case to adapt the user, not the tool. I’m inclined to say that it is the search engine’s prerogative to choose the formula it uses to produce search results. One would hope that the search engine companies would apply what sometimes appears as common-sense ethics to their policies, but it’s going to be a long, hard road to 1) decide what, exactly, ethical search neutrality is, and 2) decide how it should be implemented (to say nothing of whether it is even ethical to mandate). In light of this, I think the solution lies in the education of the search engine user. I remember being taught, briefly, in grade school how to phrase my search terms to produce the best finds. Can this lesson not be expanded upon, both in content and in students reached? Can’t people be educated in how the results of a search are chosen, so they can use the tool better? Of course, this requires transparency from the search engine companies, which is another long, hard fight. And implementation of this sort of education would be difficult, and would probably only result in more/better information on the web, which might not solve any problems (you can lead a horse to water…). Am I completely off-base? Is this something that’s already been explored?

Carmen, you’re describing a problem of media literacy. As people shift from reading newspapers and watching TV to using search engines, they also need to learn how to interpret these different media. Where does this information come from, who shapes it, with what agenda? That’s a very important conversation and an important, ongoing project in education.

Looks good - had a skim, will try and take the whole thing in later.

One thing I immediately wondered was whether transparency couldn’t be addressed by requiring search engines to disclose their algorithms after a suitable period of time? If the search engine optimisation arms race is half as frenetic as everyone says it is, then older data about the algorithm shouldn’t be of much use to spammers.

You could probably combine this with the regulatory opacity proposal: an ever-larger body of auditors is allowed access to the algorithm as time passes, eventually culminating in full public disclosure.

“And while power itself may not be an evil, abuse of power is.”— Page 436

You might remember.

Are search engine algorithms patented? Or are they kept as a sort of commercial trade secret?

Search engines use both patent and trade secret for their algorithms. See pages 48-50 of my article The Structure of Search Engine Law.


Secrecy at Google, in particular, is almost a way of life.

In the lead-up to the Great Exhibition of 1851, major changes to patent laws were pushed through in the UK. The intention of these changes was to protect middle-class sole-trader inventors from commercial exploitation by bigger commercial groups. The reason for this public protection was so that these inventors would make their inventions publicly available for copying, in the name of social improvement: education.

Publicly available patents (and copyright??) do not look like a vital core business model for Google; secrecy is a way of life.

An ironic side to a free-for-all, without public protection of the rights of individual creatives, is that it might end in more privately controlled archives of darkness… though how would you know?

“Without search neutrality rules to constrain Google’s competitive advantage, we may be heading toward a bleakly uniform world of Google everything - Google Travel, Google Finance, Google Insurance, Google Property, Google Telecoms and, of course, Google Books.”— British MP Graham Jones as quoted by The Register, MP: Googlepoly hurts British business

Some press:

Specifically, it chose to highlight the arguments of James Grimmelmann, a professor at New York Law School, who wrote an essay on search neutrality that concluded “A good search engine is more exquisitely sensitive to a user’s interests than any other communications technology.” (emphasis author’s)
