Deeply Out of Date


As part of my research for one tiny corner of my work in progress on search engines, I’ve been reading deep linking cases and commentary this afternoon. Per Wikipedia, deep linking is “the act of placing on a Web page a hyperlink that points to a specific page or image within another website, as opposed to that website’s main or home page.” Search engines care about deep linking because if content providers have a right to prohibit it, they can use that right, effectively, to prevent search engines from listing their content. Site owners would like the control over users and branding that a deep linking right would give them; open access advocates claim that such a right goes against the open linking culture of the Internet and would stifle competition and innovation. It all seems like significant stuff.

I’m finding, interestingly, that the American legal world has more or less forgotten about deep linking. After a few high-profile cases in the late 1990s and early 2000s, and a veritable eruption of scholarship, deep linking issues have basically dropped off the radar screen. There have been some student notes on the issue in the last two or three years, but otherwise, few people are making a stink one way or the other.

This is interesting, first of all, in that the issue is very much not dead elsewhere in the world. Deep linking cases have been actively litigated in Denmark and India within the past year, with the respective courts reaching opposite results. Given that the questions involved have never really been definitively resolved in this country, the relative American silence is not for lack of legal opportunity.

I also find it interesting in that the relative age of the scholarship here is telling when you look around at what people are actually doing about deep linking. On the one hand, there are entire business models being built around web applications for which deep linking would be devastating. To take one example, chosen more or less at random, consider SendSpace, which allows you to upload files and send the links to friends. It makes its money showing advertising, and that means walking users through a full page, not a page artfully stripped down to just the link to the file. Four fifths of the existing deep linking scholarship thinks in terms of content providers protecting their content from direct competitors, or their brand from dilution, not quite in terms of providers making sure that user eyeballs are properly monetized.

The range of anti-deep linking technologies available today is also considerably more extensive than it used to be. After an age of simplification, during which designers abandoned frames and opted for simpler designs with more standardized layouts, we’re going back to page designs that have non-trivial state and that can’t easily be wrapped up in a single URL. Dynamic, AJAXy pages are by nature resistant to deep linking: you can’t easily create a deep link to anything that doesn’t have a permanent link in the first place.
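To make the point concrete, here’s a minimal sketch of the pattern (the endpoint path and element id are invented for the example): the content arrives by script, nothing is ever written to the address bar, and so there is no URL for anyone else to link to.

```typescript
// Browser-side sketch: the article body lives only in page memory.
// The /api/articles endpoint and #content element are hypothetical.
async function showArticle(id: string): Promise<void> {
  const resp = await fetch(`/api/articles/${id}`);
  const container = document.querySelector("#content");
  if (container) {
    container.innerHTML = await resp.text();
  }
  // No history.pushState, no location.hash update: the state the user
  // is looking at is never reflected in the URL, so there is nothing
  // to deep link to.
}
```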

Moreover, the techies have plenty of fairly simple ways to nullify deep links. A basic paywall—only registered users beyond this point—forces users to go through your processes. Even if the crawlers can index your content, when everyone has to register to see what you have, you control the horizontal and the vertical. (Well, modulo BugMeNot, that is, though I wonder how long it will last.)
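In server terms, that gate is a few lines of middleware. Here’s a hedged sketch using Express; the paths and the cookie name are assumptions, not any real site’s setup. Anyone without a session cookie gets bounced to registration, with the page they asked for preserved so they still arrive where they wanted, but on your terms.

```typescript
import express from "express";

const app = express();

// Registration wall, sketched: the /files prefix and the "session"
// cookie name are invented for the example.
app.use("/files", (req, res, next) => {
  const cookies = req.headers.cookie ?? "";
  if (!cookies.includes("session=")) {
    // Deep link or not, unregistered visitors land on the signup
    // page; the original destination rides along for afterward.
    return res.redirect("/register?next=" + encodeURIComponent(req.originalUrl));
  }
  next();
});

app.listen(8080);
```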

Or, in one of my favorite tricks, site owners can simply check referrers. Pages and images should only be loaded from particular other pages on your own site. You can block requests that don’t go through proper channels, display a snide message instead, or show something that the user really didn’t want to see. This guy did it with style. When Fuddruckers deep linked to a Burger Time clone Flash game he’d made, he redirected the traffic to a page with pictures of a slaughterhouse. (For fun, ask yourself about possible theories of liability here, being sure not to overlook the relevance of the probably unauthorized clone of Burger Time.)
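The trick itself is just a header check. Another hedged sketch in the same vein (the host and paths are made up): compare the Referer header against your own site and decide what hotlinkers get to see.

```typescript
import express from "express";

const app = express();

// Referrer check, sketched: example.com and the paths are placeholders.
app.use("/media", (req, res, next) => {
  // Express's req.get() does a case-insensitive header lookup.
  const referer = req.get("referer") ?? "";
  if (!referer.startsWith("https://example.com/")) {
    // Block, be snide, or pull the slaughterhouse move. Note that a
    // missing Referer also fails this check; a gentler version would
    // let empty referrers through.
    return res.redirect("/hotlinking-is-rude.html");
  }
  next();
});

app.listen(8080);
```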

The glib answer, then, would be that site owners don’t need deep linking liability because they have other effective routes to the same end (both legal and technical) and that scholars have sensibly ignored a now-irrelevant legal issue. That may or may not be right, but I think the availability of those alternatives has, at the least, taken some urgency out of the deep linking wars.

This is all by way of an aside. I don’t at the moment have a good hook on which to hang these thoughts in my search engines piece. I just thought I’d get out in the open something that I found a bit unusual in looking through the case law.


Patricia Bellia makes the point about checking referrers in a 2004 article about cyberproperty and trespass. It’s not really a deep linking article, but she takes a brief and sharp digression into the topic when discussing the Ticketmaster case, which mixed aspects of cyberproperty and deep linking.