From the Mailbag: The Google Dilemma and China

Long-time Laboratorium reader Ken Liu writes in with the following:

I read your piece on The Google Dilemma, and I wanted to raise several points that you might or might not have considered. The intent isn’t to argue, but in general, I’ve found the discussion of censorship in China in America to be lacking, and I hope you could enrich the discussion.

First, the term “censorship” is loaded. Businesses follow censorship laws (or yield to pressure to censor that isn’t strictly legally required) in every country, including the United States (see the recent agreements by major ISPs to remove child pornography newsgroups). This point is often abused by those who defend China’s practice — but they do have a point. The point isn’t so much whether human rights are being respected; rather, it’s about the right balance between insisting on your principles — that you, as a Western company, would like to display search results and images that you see as legitimate responses to a query, subject to your own biases and tastes — versus the principles of the host country — which may be determined through undemocratic processes (as they are in China), or by non-liberal cultures (as they are in Saudi Arabia and other places), or through democratic processes in liberal cultures with certain blind spots.

Often, people speak of “censorship” without acknowledging that there is a background of this kind of contest going on. Fundamentally, the assumption behind the intuition that “censorship” is bad is that other people ought to see things just the way we see them. And that may be a laudable goal or a bad goal. But it’s not self-evident. To say that someone else is censoring is to say that they are not seeing things the way we want them to see them, and whether the views of those others are “authentic” (in the sense of being liberal or democratic) may be relevant or not to the determination, but again it’s not self-evident.

Second, there’s the question of whether it’s a net good thing for Google/Yahoo/etc. to do business in China while following Chinese rules about censorship, as I’ve defined it above. Supposing that we find it a laudable goal to impose our views on the Chinese — a question that we will set aside for now — it’s hard to see why Google’s method is bad. Google’s principle fundamentally is that more information is better than less information, so it’s better for Google to be present in China than for it not to be in China. I find this argument very convincing. The presence of a Western company spreading Western values in China, no matter how censored, is a net win for the West. In so far as our goal is to topple the Chinese government and to introduce instability into China, it’s better to have Google in there spreading information and alerting people to the fact that things are being censored, than otherwise. And as I have elaborated elsewhere, Google’s censorship is quite different from what the Chinese companies and users themselves engage in.

Third, there’s the question of the specific example of Tiananmen. You view the Chinese search results as “censored,” but actually, every time I do a search on Tiananmen on Google I feel I’m watching the result of a Google bomb. To me, the iconic image of Tiananmen is the one where Chairman Mao stood up and declared the founding of the People’s Republic. That moment is the dividing line between pre-modern China and modern China, and for many Chinese it is an image that is much more potent than the images of the protests. (Alternatively, other Chinese might think of Tiananmen as a complex symbol of some six hundred years of late Imperial Chinese history; as a stand-in for the debates over Chinese ethnicity and identity (Manchu, Mongol, Tibetan, Hui, or Han?); as a public forum in which the greatest events of China in the last sixty years have played out; as a place in which the conflicting impulses of Chinese nativist pride and confusion and shock at the importation of the foreign are still being played out; etc.) Taken in context, the protests of 1989 — which dominate Western consciousness purely by accident of timing — are simply one more entry in a long history of Chinese student movements against governments, movements which produced few immediate concrete results but which, cumulatively, will lead to significant systemic changes. It is neither the most important student movement in China nor even the most significant event in Tiananmen Square itself. For most Chinese, Tiananmen is a central backdrop to the birth of modern China, the scene of countless official processions — the good and the bad — and a symbol with a complex history entwined with the Chinese identity; somewhere in that long line, the protests of 1989 are but a blip.
When a search engine defines this symbol for Western audiences as simply a photograph of an unknown man standing in front of a column of tanks, then there is more than a bit of cultural domination and essentialism going on. The “uncensored” Google search results encapsulate, in a very simple image, the lack of understanding most Westerners have of China, of Tiananmen, and of the protests of 1989.

Now, a simple answer might be: “That might all be true, but who cares? The job of a search engine is to reflect the consensus judgment of the Web as a whole on a matter. If most of the Web views Tiananmen as simply a symbol of democratic protests against authoritarian China, then the fact that the Chinese themselves see Tiananmen differently, as a symbol of more than 400 years of Chinese history, is irrelevant.” But that’s just ducking the problem. The fact that most Chinese do not write in English and are not free to write what they think, so that they cannot counterbalance the biases of the West against their country and culture, means that the Web is itself necessarily biased and dominated by Western ideas and beliefs. Before we simply say that the problem is Chinese censorship, we should acknowledge that the “uncensored” view Google gives us in the West is itself quite biased and “wrong” in its own way.

By way of reply, let me just say that if I had an extra hour in every day, I’d write an essay called “There’s No Such Thing As an Objective Search Result, and It’s a Good Thing, Too.” Every search ranking is “biased” in the Eric Goldman sense: The search engine’s programmers necessarily make editorial judgments about what their users most want to see. (Even an alphabetical list of every page on the Web favors some sites and disfavors others.) The real question is what kinds of search engines we would want to have for a liberal society in which people hold diverse and opposing views about “good” and “bad” content—and for a liberal international order in which different societies have diverse and opposing views on these questions.

One answer is that diversity of individual judgments should be mirrored by diversity of search algorithms. We stand the best chance of helping every individual find the content she wants and needs if we give her a rich opportunity to pick and choose among competing search engines. The more we force search engines to hew to “objective” societal judgments about good and bad content, the more likely we are to impose majority views on minority searchers. So we should be as agnostic as we reasonably can about what is “right” and “wrong” in search results, as a way of respecting the judgments of others with whom we disagree.

Another answer is that the inevitability (indeed desirability) of bias in the Goldman sense need not mean we throw up our hands about search engines that are biased in the Friedman-Nissenbaum sense: ones that “systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others.” So far as possible, we ought not to let search engines be the instruments of unfair discrimination against the disempowered. I see at least two cases in which the toleration principle from above poses no obstacle to this antidiscrimination principle:

First, when there isn’t a diversity of search engine options, the toleration principle loses force. If everyone uses a single dominant search engine, the choice is between letting the search engine impose its values on users and imposing societal values on users. Of course, the first-best response would be to restore diversity to the search ecology, but a second-best (and ideally temporary) move would be to prevent those instances of discrimination that society as a whole considers unfair.

Second, when a search engine is dishonest with its users about its biases, we have grounds to object. Such an engine isn’t properly other-regarding and deprives its users of autonomy. A search engine that claims its rankings are entirely “objective” and then filters out results by hand is telling us that it doesn’t actually believe its own rhetoric about the rightness of its ranking principles. While the line is hard to draw, there’s also a point at which a complete lack of transparency becomes a form of dishonesty. A search engine that genuinely thinks certain forms of content are “bad” and unwanted by its users should be willing to admit in public that it holds these views.

I don’t mean these principles to be a direct response to Ken’s email. You’ll note, for example, that I raised the issue of liberal internationalism (how different states with different values are to coexist) and then said nothing at all about it. Think of my thoughts here, instead, as being where I’d start from in responding, if I had that extra hour.

Could you clarify what you mean by “a second-best (and ideally temporary) move would be to prevent those instances of discrimination that society as a whole considers unfair”? Are you suggesting that Google’s ubiquity brings with it a status like a public utility — i.e., that it would be subject to entity-specific regulation in the public interest? I’d have problems with this view, as there’s nothing preventing anyone from building a better search engine, unlike (for example) the need to use eminent domain for physical projects that amount to monopolies.

Here’s a thought experiment. Delete Microsoft, Yahoo, Ask, and AOL. Now suppose that Google starts deleting Jews from its search results. That’s unacceptable, and regulation would be necessary. The relevant questions are what degree of market share and what level of discrimination should trigger that kind of treatment.

As for barriers to entry, I agree that no one is preventing competitors from opening up, but the economics of search are getting increasingly unfavorable to new entrants. Just crawling a hundred billion web pages takes some serious server-juice, and doing sophisticated indexing on them takes a lot more. All of those are fixed costs, since you incur them before you answer a single search query. Is there hope from disruptive search technologies? Yes. Should we count on them? No.

James, I agree with a lot of what you write. Two observations:

  • Would it be valuable to think about competition/diversity among search processes generally, as opposed to just among “search engines”? Even if Google consolidates the search engine space, I would argue that it still faces meaningful competition from the many other ways that people can “search” for information.

  • I wonder to what extent experiments like Google’s “roll-your-own” algorithm or personalized search algorithms affect the analysis. They don’t affect certain types of biases Google might surreptitiously deploy (like systematically deleting or refusing to index content on a certain topic) but they seem to cure others.

With respect to your thought experiment, what if you deleted Google and left only Yahoo (or rather, an early version of Yahoo) behind? Does your gut reaction (on acceptability of the site’s decision to limit search results) vary depending on the degree to which the site relies on human-powered compilation of URLs into a directory, rather than using super-powerful ranking algorithms? Is it more acceptable to regulate the site’s speech just because they’ve become incredibly efficient at what they do?

James, isn’t Google’s own history a rebuttal to the notion that there are nigh-insurmountable barriers to entry in the search field? I seem to remember that fairly decent search engines (like Altavista) already existed when Google appeared. Granted, the amount of web content has exploded since then, but I doubt the field is less intimidating now to a sophisticated player than it was pre-Google to two Stanford grad students.

Really nice points from Ken, especially his observation about “Tiananmen” being Google bombed by the West. Imagine how we would feel, for instance, if “White House” always linked to Nixon’s or Clinton’s indiscretions!

Eric: Yes, and personalized search increases some measures of search diversity. The transparency and privacy problems it raises are significant, but solvable.

Tim: One of the central truths of Internet law is to be extremely careful before proposing anything that would prevent the use of automated algorithms or require human review. Many of the potential cases I’m most concerned with involve human intervention already (either in altering search results or in choosing the algorithm).

Steven: I used to think that. While the search-process market, as Eric would put it, is still open to disruptive innovation (e.g. from social recommendation), it’s a lot harder to be Google today than it was then. PageRank, while brilliant, is computationally expensive enough to be a real bar-raiser. As for a “sophisticated player,” that’s a reason to worry about things like a Microsoft-Yahoo merger. The search market today has fewer players and much deeper pockets than it did in the AltaVista days. I still have hope for and confidence in the search-process market, but we very much need a contingency plan.