Google’s Algorithmic Ideology


This is a scholarly outtake: a passage that no longer fits with the article I wrote it for.

Google was founded by a pair of highly intelligent, Montessori-educated, profoundly optimistic graduate students in computer science who happened also to be extraordinarily lucky in having the right idea at the right time while surrounded by the right people. The company they built bears these personality traits deep in its distributed organizational brain; they are reflected in its hiring decisions, corporate slogans, management processes, technical designs, and even its aesthetics.

Algorithms are the essence of the Google weltanschauung. The company’s success is traceable directly to PageRank, one of the most elegant algorithmic discoveries of the last generation. Five algorithmic values pervade almost everything the company does: automation, scalability, data, information maximalism, and arrogance. They constitute Google’s ideology: the system of ideas that animate its decisions, large and small.
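The core idea behind PageRank can be sketched in a few lines. This is a minimal illustration, not Google's actual implementation: a page's score is the probability that a "random surfer" lands on it, following outbound links with probability `damping` and jumping to a random page otherwise. The four-page link graph below is invented for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration sketch of PageRank.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank, summing to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a baseline share from random jumps.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page passes its rank evenly to the pages it links to.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page with no outbound links: spread its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical link graph: C collects the most inbound links.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
ranks = pagerank(links)
```

The elegance the passage refers to is visible even here: importance is defined recursively (a link from an important page counts for more), yet the fixed point can be computed by simple repeated redistribution.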

Automation: An algorithm is perfectly consistent, free of human biases and mistakes. Google has long disdained manual curation and relied instead on algorithms to identify interesting, important, or dangerous items. The company offers support forums for users to discuss issues with each other, but a regular theme in websites’ Google horror stories is the difficulty of bringing an issue to Google’s attention and getting a meaningful response. Even its hiring decisions have depended on “a series of formulas created by Google’s mathematicians that calculate a score — from zero to 100 — meant to predict how well a person will fit into its chaotic and competitive culture.” Driverless cars are an extreme example of the trend.

Scalability: Unsurprisingly for a company whose core product requires making a complete copy of the entire web on a frequent basis, Google is obsessed with scalability. The company does nothing unless it works not just for a few hundred data points but for millions, billions, trillions, or more. It has pioneered new techniques for building massive high-reliability data centers that use hundreds of thousands of computers. It disdains incremental fixes, preferring to solve the general case whenever possible. This preference for scalability is another reason it is so notoriously difficult to reach actual Google employees for customer service issues—humans don’t scale.

Data: Google is profoundly data-driven. Its access to mountains of data about webpages and searches has given it a strategic edge in understanding the problems of search. It has repeatedly demonstrated the “unreasonable effectiveness of data”: the reason Google’s translation tools are so good is not because the company is dramatically better than generations of researchers at modeling natural language, but rather because it has access to the world’s largest corpus of actual language usage data. Proposed changes to search algorithms are heavily informed by empirical data. At times, this preference for the quantifiable tilts over into absurdity: it picked a toolbar color by testing 41 different shades of blue and seeing which one drew the most user clicks.

Information Maximalism: Google believes that the world is better off when people have access to more information. This is not a belief in overloading people with information—quite the opposite. Google Search and other products help users isolate the information they want and need. But this approach works best when they have as much information as possible to select from. Google frequently promotes the conversion of information to accessible digital forms—books, art, map data, legal documents, and so on—as a way of making search more useful. The obvious policy consequence of this vacuum-cleaner attitude towards information is that Google tends to fall on the “information wants to be free” side of debates over Internet filtering, copyright policy, and similar issues—and also why it has had such persistent privacy problems.

Arrogance: Larry and Sergey are very smart computer scientists who became ludicrously wealthy by hiring other very smart computer scientists. That initial experience was imprinted on the company they founded: it has the confidence to believe that every problem “from book digitization to freedom of expression” can be solved by “talented computer scientists and engineers” bearing “scientific, heavily quantitative methods.” It has rooms full of smart people who are used to thinking of themselves as the smartest people in rooms full of smart people. The company’s two strategic quagmires—Android and Google+—both reflect the belief that its raw brainpower and allegedly unique corporate culture amount to a Green Lantern Corps power ring, capable of entering any field and dominating any market. But while an algorithm can be proven correct, a company cannot.


What was the article? It sounds very interesting.

Note, I’d be careful about believing the propaganda. Sentences like “An algorithm is perfectly consistent, free of human biases and mistakes” are fraught with politics as to who is claiming that, and why. That is, who really believes it, and who is putting it forth as a PR pitch. Google would hardly be the first company to have an official technological-determinism reply to complicated political issues.