Over in the visual artists’ case, the various parties have asked for a third extension of time for Google to file its reply. This one pushes the due date back to August 9. Denny Chin, United States Circuit Judge, signed and approved it. In other words, the status quo continues.
According to the Wall Street Journal, the Internet Archive and three partner libraries are launching new programs to enable “lending” of e-books. Find a book listed on Open Library and download it to your computer; the downloads come with DRM that expires after two weeks. The net result is a two-week “loan” of the digital copy.
Some of this is old(ish) hat. Open Library has already been helping readers find physical copies at libraries near them, via redirection to WorldCat’s directories. Overdrive already helps local libraries acquire the licenses and technology to “lend” e-books with the permission of the copyright holders. And the Internet Archive itself has been providing free downloads of scanned public-domain books for years.
The new twist is that, for the first time, the Internet Archive will apply the “lending” treatment to at least some books that are still in copyright. Here’s the WSJ’s summary:
And in a first, participants including the Boston Public Library and the Marine Biological Laboratory will also contribute scans of a few hundred older books that are still in copyright, but no longer sold commercially. …
The Internet Archive’s scanning effort hopes to extend digital libraries far beyond the sorts of contemporary e-books sold by Overdrive. The San Francisco-based library has been digitizing older books using 20 scanning centers around the world. Until now, those scans were mostly used to extend access to public domain works, or to give digital access to in-copyright books to the visually impaired.
“We’re trying to build an integrated digital lending library of anything that is available anywhere, where you can go and find not just information about books, but also find the books themselves and borrow them,” said Brewster Kahle, the founder and digital librarian of the Internet Archive.
With its latest project, the organization is making inroads into the idea of loaning in-copyright books to the masses. Only one person at a time will be allowed to check out a digital copy of an in-copyright book for two weeks. While on loan, the physical copy of the book won’t be loaned, due to copyright restrictions. …
Mr. Kahle said, “We’re just trying to do what libraries have always done.”
Having to receive prior permission from a copyright owner in order to scan a book is onerous, said Mr. Blake, of the Boston Library. “If you own a physical copy of something, you should be able to loan it out. We don’t think we’re going to be disturbing the market value of these items.”
This is an interesting development. The Internet Archive has been close-lipped about the precise legal basis for what they’re doing. At least some of these in-copyright books are actually being loaned with permission, like Stewart Brand’s The Media Lab. Perhaps that’s the case with all 187, although the comments in the WSJ article would seem to suggest not. The Archive could be making a pragmatic risk analysis on a book-by-book basis, including only books for which it believes the chance of ever actually being sued is negligible, and thinks it can keep the number of mistakes small enough to avoid serious financial danger.
Or, most intriguingly, perhaps the Archive believes it could win a copyright lawsuit. First sale probably doesn’t work on existing precedents, since each electronic copy on a user’s computer counts as a new “copy” for copyright purposes. Neither do the library exceptions, which are narrow and quite technical. This new lending program is certainly consistent with the spirit of both provisions, and there’s a powerful argument that in a digital age, they ought to be amended to explicitly allow this kind of lending. But as written, they probably don’t authorize digital lending.
This leads me to think that the most natural argument would be fair use. The argument here would likely center on the Archive’s nonprofit purpose, the negligible harm to the market for some long-out-of-print books (quite possibly including some orphan works), and the nearby public policies of first sale and library exceptions. The natural counter-argument, however, is that distributing complete copies of books for readers to consume is so close to the core of copyright’s rights and goals that fair use simply cannot stretch that far. These are non-transformative, substitutive, complete copies of expressive works—so while the Archive would have an argument, the fair use factors arguably tip 4-0 against it. Should it win, it would be a revolution in fair use caselaw. A good revolution, for some, but a revolution nonetheless.
I added a “GBS” tag to the title because the Archive’s actions have implications, both intellectual and practical, for the pending Google Books settlement. For one, should the Archive prove able (legally or in practice) to lend out these books, that would be a significant step in the orphan works debate—a demonstration that there’s more wiggle room under the Copyright Act than many have thought. For another, this confirms the Archive’s role as a kind of Google competitor. A non-profit one, to be sure—something that could place them on different litigation footing in a variety of ways—but still, the new lending program means that there are now two entities trying to make some kind of a play in the digital distribution of in-copyright books without individual permission from the copyright owner.
Ironically, the Archive’s gambit could help Google gain settlement approval. It was the Archive’s lawyer who made the strongest argument at the fairness hearing that the settlement’s core problem is that it works on an opt-out basis. I wouldn’t be shocked if Google brought up the Archive’s book-lending program at some point as a way of trying to discredit that argument. Also, by scanning books and distributing complete copies of them to the public, the Archive makes more credible the plaintiffs’ arguments that the Authors Guild case has always been about the complete books, not just indexing and snippets—which could undercut the objection that the settlement authorizes conduct not at issue in the underlying lawsuit.
As always, Gary Price at Resource Shelf has plenty of links with further details.
As reported round the blogosphere, ASCAP (which collects public performance royalties on behalf of songwriters) sent around a fundraising email for its lobbying arm:
Creative Commons, Public Knowledge, Electronic Frontier Foundation and technology companies with deep pockets are mobilizing to promote “Copyleft” in order to undermine our “Copyright.” They say they are advocates of consumer rights, but the truth is these groups simply do not want to pay for the use of our music. Their mission is to spread the word that our music should be free.
The mentions of PK and the EFF come as no surprise. These groups have been arguing that copyright has become unbalanced and should be more limited (whether legislatively or judicially). “Music should be free” is an unfair distortion of their position, but it’s easy to see why some songwriters and their professional associations might feel threatened by any suggestion that copyright goes too far, rather than not far enough. But Creative Commons? An organization that promotes voluntary licensing within the existing framework of copyright? What’s it doing on ASCAP’s hit list?
Part of the answer, I suggested last year in The Ethical Visions of Copyright Law, is that Creative Commons actually occupies a rhetorically ambiguous place in the debates over copyright law and policy. Here’s the key section, at pages 2034-35:
The heart of the issue, then, is that we can read “sharing” either as being allied with the default ethical vision or as allied with the free-as-in-freedom critique of that vision. The default ethical vision seizes on sharing’s generosity, its praise of voluntary engagement, and its refusal to condemn. The critique, on the other hand, points to sharing’s nonmonetary nature and its implicit rebuke of nonsharers. Both readings represent plausible, consistent extensions of sharing’s logic.
This fact has important consequences. It explains some of the (otherwise surprising) unease around the Creative Commons project and why people have criticized it from both sides. We saw above how some critics believe it lacks an agenda and needs one, but there are also people who see in it a hidden agenda for abolishing copyright. These two critiques can’t both be right. They can, however, both sound plausible—because Creative Commons’ “sharing” rhetoric is so ambiguous.
This ambiguity also provides an explanation for some (otherwise puzzling) critiques of Creative Commons that seem to veer over the line into saying that authors who use Creative Commons licenses are doing something wrong. A Billboard article from 2005 quotes an AIDS-stricken musician as saying he wouldn’t have been able to afford his medication if he had used a Creative Commons license: “No one should let artists give up their rights.” This particular critique was factually misinformed, but there is an important intuition underlying it.
If Creative Commons is part of a broader critique of the default ethical vision, then it makes a set of ethical claims that authors who write for money and sell their works are behaving unethically. For people who are part of that system—who see themselves as acting ethically when they sell their works—this critique is either incomprehensible, crazy, or profoundly dangerous. Just as the RIAA warns kids that “free” music must be illegal and unethical, there’s a hint of an idea here that authors who choose Creative Commons are betraying other authors and their audiences—they aren’t showing the audience enough respect to give them something worth paying for.
To summarize, there’s a significant ambiguity in Creative Commons’ response to the copyright system. It could be saying (or could be seen to say) that the system is out of balance because authors have exclusive rights they don’t need and don’t want to use. It could also be saying (or could be seen to say) that the system is out of balance because authors have exclusive rights they shouldn’t have and shouldn’t be allowed to use. In either frame, its licensing strategy is a natural response designed to encourage a healthier balance. But the latter frame, let us be clear, is a challenge to the default ethical vision of copyright itself, not merely a critique of authorial behavior made from within that vision.
From Robert P. Schuwerk, Future Class Actions, 39 Baylor L. Rev. 63, 207 n.747 (1987):
747 See cases cited supra note 747.
Richard Marcus, Reassessing the Magnetic Pull of Megacases on Procedure, 51 DePaul L. Rev. 457, 458–64 (2001), identifies three main types of “megalitigation”: large-scale commercial cases, public-law cases, and mass tort cases. The Google Books case is arguably all three.
If, like me, you are spending some of your summer reading law review articles, you may find the following translations helpful.
For “it is likely that,” read “I would like if it were true that …”
For “has the greatest institutional competence,” read “is currently staffed by people who agree with me.”
For “excluding transaction costs,” read “excluding reality.”
For “including transaction costs,” read “including a fudge factor.”
For “we,” read “I.”
Actual headline from BNA’s United States Law Week:
Contractor Tied to Exploding FEMA Trailer Must Stay in State Court to Face Tort Suit
Privacy As Product Safety—which I previously blogged about here—has been published in the Widener Law Journal. The changes since the version I posted in February are substantively minor, but include a great many cosmetic, stylistic, citational, and other editorial fixes. As usual, it’s available for reuse under a Creative Commons license.
I was on Friday’s This Week in Law with Denise Howell, Evan Brown, and Doug Isenberg. We talked about the Supreme Court’s decision in Quon, Google Books, and the copyright implications of hard drives in copy machines. I really must learn how to keep my eyes focused on the camera when I do videocasts.
Protection against fraud and what some call paternalism are inseparable in practice.
That’s from Rebecca Tushnet, It Depends on What the Meaning of “False” Is: Falsity and Misleadingness in Commercial Speech Doctrine, 41 Loyola L.A. L. Rev. 227 (2007).
Einer Elhauge’s Why the Google Books Settlement Is Procompetitive, previously mentioned on this site, has been published in the Journal of Legal Analysis. The JLA is a peer-reviewed open-access journal with high editorial standards; it’s a good home for this sort of work. Particularly after the JLA’s editing, Elhauge’s paper remains the definitive pro-settlement antitrust analysis, better and more detailed than the parties’ own submissions to the court. Here is Elhauge’s final abstract:
Although the Google Books Settlement has been criticized as anticompetitive, I conclude that this critique is mistaken. For out-of-copyright books, the settlement procompetitively expands output by clarifying which books are in the public domain and making them digitally available for free. For claimed in-copyright books, the settlement procompetitively expands output by clarifying who holds their rights, making them digitally searchable, allowing individual digital display and sales at competitive prices each rightsholder can set, and creating a new subscription product that provides digital access to a near-universal library at free or competitive rates. For unclaimed in-copyright books, the settlement procompetitively expands output by helping to identify rightsholders and making their books saleable at competitive rates when they cannot be found. The settlement does not raise rival barriers to offering any of these books, but to the contrary lowers them. The output expansion is particularly dramatic for commercially unavailable books, which by definition would otherwise have no new output.
Also in the Elhaugian tradition is Yuan Ji’s Why the Google Book Settlement Should Be Approved: A Response to Antitrust Concerns and Suggestions for Regulation. Ji’s paper is notable for its Part III, which compares the settlement with other routes towards similar goals, such as compulsory licensing of orphan books to other competitors. Ji concludes that the settlement is superior to the status quo and to its major proposed alternatives, but could potentially be improved by adding an ASCAP/BMI-style consent decree. Judge Chin might also consider conditioning his approval on an independent validation of Google’s pricing algorithm by an outside entity. Here is Ji’s abstract:
This Article advocates for the approval of the pending Google Book Search settlement by responding to the antitrust concerns arising from the Amended Settlement Agreement. It contributes to existing commentaries on the settlement by pointing out that the proper antitrust analysis must take into account Google’s role as a two-sided platform, which serves two interdependent sets of customers. The settlement, if approved, will not grant exclusive orphan book access to Google or anticompetitive pricing power to the Rightsholders. Post-settlement regulatory alternatives are explored and the compulsory licensing of orphan books is rejected. Instead, this Article advocates for the explicit grant of licensing power to the Unclaimed Works Fiduciary and the Registry if the settlement’s legal ability to do so is in dispute. Given GBS’s natural monopoly characteristics, another regulatory option is the imposition of a consent decree similar to those that ASCAP and BMI operate under.
Also of note: Ji is currently a law student. Like Eric Fraser and Chris Suarez,
she’s (my apologies for the error!) made a meaningful contribution to the public debate. While it can be a tremendous schlep to get up to speed on all of the legal details, the Google Books settlement remains a great subject for student writing. There’s simply so much to think about in it that it’s easy to find unturned stones. I would encourage any law students out there who are looking for note topics to consider writing on the settlement, and would be happy to talk about possible angles.
Google has been hosting Google Books at books.google.com. But, since February of 2004 (well before Google announced its foray into scanning books), googlebooks.com has been registered to someone else. Who precisely isn’t clear, since the domain was registered through a proxy service. The site currently hosts an ad for a product called “Google Nemesis” — which appears to be a mash-up between a standard Make Money Fast multi-level marketing scheme and an SEO keyword optimizer. Click through any of the links, though, and the site reports that “DJK Nemesis” is sold out. (How, one might ask, can a product that consists entirely of software, information, and access to a web site sell out? Good question.)
Google filed for a UDRP arbitration last month to wrest control of the googlebooks.com domain from its current owner. That current owner never responded to the complaint. On June 10, the arbitrator issued a ruling accepting Google’s allegations that the domain infringed on its trademarks and was being used in bad faith. As a result, he ordered that it be transferred to Google.
I’ve uploaded a draft version of my latest paper, The Elephantine Google Books Settlement. It’s forthcoming in the Buffalo Intellectual Property Law Journal as part of a symposium issue on Google. The paper is the written-out version of talks I’ve been giving for the last few months on the theme of how best to think about the settlement. While it raises class action, copyright, and antitrust issues, you have to smoosh the three bodies of law together to really wrap your mind around it. Here’s the abstract:
The genius—some would say the evil genius—of the proposed Google Books settlement is the way it fuses legal categories. The settlement raises important class action, copyright, and antitrust issues, among others. But just as an elephant is not merely a trunk plus legs plus a tail, the settlement is more than the sum of the individual issues it raises. These “issues” are really just different ways of describing a single, overriding issue of law and policy—a new way to concentrate an intellectual property industry.
In this essay, I will argue for the critical importance of seeing the settlement all at once, rather than as a list of independent legal issues. After a brief overview of the settlement and its history (Part I), I will describe some of the more significant issues raised by objectors to the settlement, focusing on the trio of class action, copyright, and antitrust law (Part II). The settlement’s proponents have responded with colorable defenses to every one of these objections. My point in this Part is not to enter these important debates on one side or the other, but rather to show that the hunt to characterize the settlement has ranged far and wide across the legal landscape.
Truly pinning down the settlement, however, will require tracing the connections between these different legal areas. I will argue (Part III) that the central truth of the settlement is that it uses an opt-out class action to bind copyright owners (including the owners of orphan works) to future uses of their books by a single defendant. This statement fuses class action, copyright, and antitrust concerns, as well as a few others. It shows that the settlement is, at heart, a vast concentration of power in Google’s hands, for good or for ill. The settlement is a classcopytrustliphant, and we must strive to see it all at once, in its entirety, in all its majestic and terrifying glory.
Check back soon for the finalized version, which I’ll post as soon as available.
Every gamer and lawyer should play at least one of the Phoenix Wright games. I can’t imagine why anyone would want to play more than one.
My most recent paper, The Internet Is a Semicommons, has just been published in the Fordham Law Review. It’s a mixture of property theory and Internet history; I argue that the conventional split between “private property” and “free for common use” on the Internet is overblown; both private and common need each other online. It turns out that it’s the dynamic interplay between the two that really enables worldwide collaboration while avoiding overuse.
This one has an interesting (i.e. long and involved) history. I had the basic ideas about four years ago. I was trying to think through, in a plausibly rigorous fashion, the question of why some online communities succeed and others fail. I was particularly interested in how both Metafilter and Slashdot had managed to build sustainable models of community discussion sites with very different approaches to the same basic problem. I started brainstorming and taxonomizing the moderation patterns that various sites use.
This is not that paper. What happened is that in order to put the moderation patterns on a firm theoretical footing, I needed to clear up some ambiguities in what legal academics meant when they talked about a “commons” online. The Internet isn’t really one—although it has significant common aspects—and I needed to explain why in order to make it clear precisely what problems moderation patterns solve. I presented the whole ball of wax—moderation patterns plus semicommons framework—a number of times in 2007 as “The Virtues of Moderation.” The feedback I got convinced me that while the ideas were interesting and worth pursuing, the agglomerated paper didn’t work. The two halves didn’t quite hang together, and although the paper was already sprawlingly large, both parts felt rushed. I started to think about splitting it into two halves, and then shelved it for a while as my search engine and social network scholarship occupied my attention.
Then, last spring, Abner Greene at Fordham invited me to be part of a symposium on David Post’s In Search of Jefferson’s Moose and Jonathan Zittrain’s The Future of the Internet. Both books are about how and why the Internet works, both on the level of individual communities and as a whole. It struck me that the semicommons framework I’d worked out to support my analysis of moderation patterns connected very naturally to their books. The result was that I reworked the semicommons half of my paper into a discussion of Post’s and Zittrain’s books that brought out the property theory behind their visions of the Internet. “Reworked” may not quite give the flavor of it; I took the basic insight and more or less started writing from scratch. (And a good thing, too: Looking back recently over what I thought at the time was a really excellent draft from early 2007, I winced at the clunky writing and the rambling exposition.)
So this paper may have had a tortuous history, but I’m quite happy with it. Henry Smith’s original semicommons paper is brilliant, and I’m surprised that more law professors haven’t built on his ideas. My most extended case study in the paper is Usenet, which also makes fewer appearances in the law review literature than one would expect. I’m glad to be able to help fill these gaps. (To toot the horns of some friends, some of what has been written on both topics is quite good.) The editors at Fordham also did a great job; they pressed me to address the weaknesses of my argument while preserving the informal, focused tone of the piece. As with The Ethical Visions of Copyright Law, also published with Fordham last year, we really sweated the small details. I hope you approve of the results.
Here’s the abstract:
The Internet is a semicommons. Private property in servers and network links coexists with a shared communications platform. This distinctive combination both explains the Internet’s enormous success and illustrates some of its recurring problems.
Building on Henry Smith’s theory of the semicommons in the medieval open-field system, this essay explains how the dynamic interplay between private and common uses on the Internet enables it to facilitate worldwide sharing and collaboration without collapsing under the strain of misuse. It shows that key technical features of the Internet, such as its layering of protocols and the Web’s division into distinct “sites,” respond to the characteristic threats of strategic behavior in a semicommons. An extended case study of the Usenet distributed messaging system shows that not all semicommons on the Internet succeed; the continued success of the Internet depends on our ability to create strong online communities that can manage and defend the infrastructure on which they rely. Private and common both have essential roles to play in that task, a lesson recognized in David Post’s and Jonathan Zittrain’s recent books on the Internet.