The Laboratorium
April 2013
From British poet, historian, and Laboratorium commenter Gillian Spraggs comes this detailed discussion of the implementation of France’s new orphan books legislation. It is the best English-language source I am aware of for a ground-level view of the French legislation in action. She wrote extraordinarily helpful analyses of the Google Books settlement for U.K. authors, and she finds many of the same troubling features in the new French scheme.
In particular, it appears that the metadata and the search interface of the database of putative orphans are both atrocious. Spraggs’s post details more than a dozen books by foreign authors that are not plausibly orphan works, but were or are in the database nonetheless. This, it will be recalled, was a significant problem with HathiTrust’s abortive Orphan Works Project. Every time an orphan works trial flunks its basic due diligence, it undercuts the case for orphan works reform, just as the criminal antics of Righthaven and Prenda Law undercut the case for copyright enforcement against individual downloaders.
Catherine Rampell, Who Says New York Is Not Affordable?, N.Y. Times, Apr. 28, 2013, at MM22:
Of course, not everything that wealthy New Yorkers spend money on is cheaper here. Housing, after all, is absurdly expensive, even for the rich. Complex zoning regulations and limited land make it all but impossible for supply to grow alongside demand. … What’s happening in New York is just part of a national shift. Highly paid, college-educated people are increasingly clustering in the college-graduate-dense, high-amenity cities where they get good deals on the stuff they like, while low-skilled people are increasingly flowing out to cheaper places with a worse quality of life.
Why, it’s almost as though cities are engaged in … exclusionary zoning.
Nah, couldn’t be. Everyone knows exclusionary zoning is a suburban phenomenon.
The problem has long entertained lawyers, particularly those in whom a speculative turn of mind is allied with some proficiency in mathematics. Several exceedingly complex all-purpose theoretical solutions have been proposed. These have been ignored by the courts. A judge who finds himself face to face with a circular priority system typically reacts in the manner of a bull who has been goaded by the picadors: he paws the ground and roars with rage. The spectator can only sympathize with judge and bull.
Grant Gilmore, Circular Priority Systems, 71 Yale L.J. 53 (1961)
I’ve posted a draft of my latest article, Speech Engines, forthcoming in the Minnesota Law Review. I started thinking hard about search engines a decade ago, when I blogged about the Search King lawsuit instead of studying for my first-semester law school exams. It was apparent to me then that Google’s power to promote and demote sites in its search results was both immensely valuable and immensely dangerous, but I wasn’t sure how the legal system should respond. Since then, I’ve written six papers on search engines. The first five were either failed attempts at a general theory or, if you want to be more charitable, preliminary assays at thinking through the issues.
But now I think I’ve got it: a theory of how we should think about search engines, and how the legal system should treat them. The goal of search engines, and of search engine law, is to help users find what they’re looking for. Search engines are advisors; law should ensure that users have access to search engines and that those search engines are loyal to users. Search results are opinions about what users will find relevant. Those search results can be actionable when they are given in bad faith, that is, when they don’t reflect the search engine’s actual opinions about relevance. The Federal Trade Commission was probably right to drop its search-bias charges against Google, but it should have insisted on greater transparency going forward.
This is, I daresay, a radically moderate take on Google. I reject Google’s story of search, on which search results are purely subjective and not susceptible to legal oversight, and on which the possibility of competition from other search engines suffices to keep Google in check. But I also reject the story told by Google’s numerous enemies, on which the government can and should hold search results to an objective standard of fairness and neutrality. There is a middle ground between the two; indeed, it follows naturally from putting search users at the center of the story.
Here’s the abstract:
Academic and regulatory debates about Google are dominated by two opposing theories of what search engines are and how law should treat them. Some describe search engines as passive, neutral conduits for websites’ speech; others describe them as active, opinionated editors: speakers in their own right. The conduit and editor theories give dramatically different policy prescriptions in areas ranging from antitrust to copyright. But they both systematically discount search users’ agency, regarding users merely as passive audiences.
A better theory is that search engines are not primarily conduits or editors, but advisors. They help users achieve their diverse and individualized information goals by sorting through the unimaginable scale and chaos of the Internet. Search users are active listeners, affirmatively seeking out the speech they wish to receive. Search engine law can help them by ensuring two things: access to high-quality search engines, and loyalty from those search engines.
The advisor theory yields fresh insights into long-running disputes about Google. It suggests, for example, a new approach to deciding when Google should be liable for giving a website the “wrong” ranking. Users’ goals are too subjective for there to be an absolute standard of correct and incorrect rankings; different search engines necessarily assess relevance differently. But users are also entitled to complain when a search engine deliberately misleads them about its own relevance assessments. The result is a sensible, workable compromise between the conduit and editor theories.
This is a draft. The article itself won’t be published until next year, which means I have plenty of time to revise and refine the arguments. I would greatly appreciate any comments or suggestions you might have.
ReDigi, Digital First Sale … and Star Trek
My latest column for Publishers Weekly is up. In it, I look at the ReDigi decision holding that an online marketplace for used iTunes music files violates copyright law. The judge dropped a Star Trek reference in a footnote, which I use as the starting point for my riff on first sale.
The Copyright Act was drafted with two offline scenarios in mind. One, which we could call the “transporter” or the “post office,” takes a book that’s here and moves it over there, out of one person’s possession and into another’s. Copyright calls this a “distribution,” and the first sale defense applies to it. The total number of copies is unchanged: there was one before, and there’s one after.
The other offline scenario, which we can call the “replicator” or the “printing press,” takes an old copy of a book and makes a new copy in the same place. Copyright calls this a “reproduction,” and it’s not subject to first sale. The total number of copies increases: there was one before, and there are two after.
But online, a download is a bizarre hybrid of the two. There’s an old copy here on my computer, and once I send you the bits, there’s also a new copy there on your computer. The Internet therefore is something of a “transporticator” that creates a perfect replica of Kirk down on the planet, while also leaving the original Kirk free to roam the Enterprise.
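The copy arithmetic can be made concrete in code. What follows is a minimal sketch of my own, not anything from the column or the court’s opinion, and the file and directory names are all hypothetical. A move leaves the total count of copies unchanged, while a download-style copy leaves the original in place and adds a second.

    # Illustrative sketch only: count copies of a "book" before and after
    # a transporter-style move versus a transporticator-style copy.
    import shutil
    from pathlib import Path

    here, there = Path("sender"), Path("receiver")
    for d in (here, there):
        d.mkdir(exist_ok=True)
    (here / "book.txt").write_text("the text of a book")

    def total_copies():
        # Count every existing copy of the book on both "machines".
        return sum(1 for d in (here, there) for _ in d.glob("book.txt"))

    # "Transporter" / post office: a move, i.e. a distribution.
    shutil.move(str(here / "book.txt"), str(there / "book.txt"))
    print(total_copies())  # 1 (one copy before, one copy after)

    shutil.move(str(there / "book.txt"), str(here / "book.txt"))  # reset

    # "Transporticator": a download. The bits are copied and the original
    # stays put, so this is a reproduction in copyright's terms.
    shutil.copy(str(here / "book.txt"), str(there / "book.txt"))
    print(total_copies())  # 2 (one copy before, two copies after)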
It gets even weirder from there, including guest appearances by Derek Parfit and Evil Spock.