The Laboratorium
May 2012
Today, Judge Chin handed the Authors Guild a big procedural win. He issued an opinion that allowed the Guild to represent its members in the lawsuit, and then went on to certify a class consisting not just of the members but of all authors whose books Google scanned. He also allowed the American Society of Media Photographers to represent its members in the parallel visual artists’ lawsuit, along with the other artists’ groups who’ve joined together in that suit. This doesn’t resolve the merits of the lawsuit itself, but it does doom Google’s hopes of keeping the lawsuit from ever getting to the merits.
As usual, Chin’s opinion is brisk and readable. It is also eminently pragmatic. Chin recognized that to pass on the legality of Google’s scanning programs requires some kind of collective process. While he rejected the settlement last year, this time around he concluded that aggregate litigation is suitable for resolving the fair use questions at the heart of the dispute.
The opinion starts with associational standing. Ordinarily, only I can sue to vindicate my legal rights. Even if you think some grievous wrong has been done to me, it’s my lawsuit to bring (or not), not yours. You don’t have “standing.” But there’s an exception for associations, which can sue to vindicate the rights of their members, under limited circumstances. The opinion explores those circumstances, with application to the Authors Guild and other associations.
The key question at issue here was whether “neither the claim asserted nor the relief requested requires the participation of individual members in the lawsuit,” which focuses on “matters of administrative convenience and efficiency.” The associations simplified things by asking only for injunctive relief against future copying, rather than damages. Google simplified things by scanning lots of books without permission. (Here, as in many other places, Judge Chin characterizes Google’s conduct in ways that have to have its lawyers worrying: he emphasizes the lack of permission and the mass nature of its scanning and displays.)
Copyright ownership, Judge Chin concludes, will not require significant individualized proof. Google objected that the actual details will be highly complicated, given the diversity of contracts in the industry. But Judge Chin has a good comeback. Copyright registration records provide prima facie proof of ownership. In a footnote, he turns Google’s argument neatly back on Google: “To the extent Google wishes to rebut such evidence, it may seek to do so on a case-by-case basis.” Ouch.
There follows a beautifully pragmatic point. Yes, some authors will have assigned away their complete copyright interests, retaining no royalty rights, and therefore will not be “beneficial owners” with standing to sue. But it will be much easier to ask authors to produce their contracts to show that their books are included in the class than to force them to sue Google individually. This portion of the opinion offers Google its best news of the day, I think: the company could throw some serious sand into the class action gears by making thousands or millions of authors pull their contracts out of the closet.
Google also tried to argue that fair use is inherently an individualized, case-by-case determination. Judge Chin wasn’t buying. Again, his opinion is straightforward and pragmatic:
While different classes of works may require different treatment for the purposes of “fair use,” the fair-use analysis does not require individual participation of association members. The differences that Google highlights may be accommodated by grouping association members and their respective works into subgroups. For example, in the Authors Guild action, the Court could create subgroups for fiction, non-fiction, poetry, and cookbooks. In the ASMP action, it could separate photographs from illustrations. The Court could effectively assess the merits of the fair-use defense with respect to each of these categories without conducting an evaluation of each individual work. In light of the commonalities among large groups of works, individualized analysis would be unnecessarily burdensome and duplicative.
Makes sense to me. All of us who have been opining about the scanning project for years have reasons for our beliefs, even though we haven’t examined each book in the project individually. Judge Chin will now do the same, more officially. In this, he joins Judge Evans, of the Georgia State case, whose opinion also is willing to generalize across classes of books in the interest of producing workable rules.
As a coda to the associational standing section, Judge Chin offers a passage that really, really cannot be welcome news in Mountain View:
Furthermore, given the sweeping and undiscriminating nature of Google’s unauthorized copying, it would be unjust to require that each affected association member litigate his claim individually. When Google copied works, it did not conduct an inquiry into the copyright ownership of each work; nor did it conduct an individualized evaluation as to whether posting “snippets” of a particular work would constitute “fair use.” It copied and made search results available en masse. Google cannot now turn the tables and ask the Court to require each copyright holder to come forward individually and assert rights in a separate action. Because Google treated the copyright holders as a group, the copyright holders should be able to litigate on a group basis.
Google, which is about to file its motion for summary judgment on fair use, may not entirely mind having its project judged on a group basis. It can say, with a perfectly straight face, that it believed fair use was inherently book-by-book, but since the judge disagreed, it is willing to assert its fair use case on a blanket basis for the whole of the project. Indeed, by resolving the standing motion now, Judge Chin in a sense frees Google to make a stronger argument on the merits.
But, that said, Google cannot be happy with phrases like “sweeping and undiscriminating” or “unauthorized.” This paragraph, along with certain passages in the opinion rejecting the settlement last year, suggests that Judge Chin is casting a very skeptical eye on Google’s justifications for the scanning program. I have to wonder whether the settlement dance ended up hurting Google by making Judge Chin’s first substantive experience with the case one that emphasized the blanket nature and huge ambitions of Google’s scanning.
After this discussion, it’s readily apparent that Judge Chin is also going to grant class certification, as indeed he does. Google only seriously disputed two issues: whether the named plaintiffs are adequate representatives for the class, and whether common issues predominate over individual ones. Everything Chin wrote about individual issues in the standing context carries over: the facts of Google’s scanning and its fair use arguments can be evaluated across subclasses of books. When it comes time for damages or an injunction, authors may need to present proof of ownership. But the fact that some authors are sheep and some are goats doesn’t prevent the court from deciding what relief if any the sheep are entitled to—and only then separating them from the goats.
As for adequacy of representation, Google brought in a survey purporting to show that many authors perceive Google’s scanning programs as a benefit. “[W]ithout merit,” says Judge Chin. Class members’ interests may vary, but this is not a case in which some of them have legal claims that cannot be vindicated except by undermining others’ legal claims. As for the class members who don’t want their books taken out of Google Books, they can opt out. Indeed, even some authors who are happy that Google Books exists might still want to join in the class action.
This is not at all a decision on the merits. But it is still a very big deal, because it means that there will be a decision on the merits. The case is now definitively headed towards the gigantic fair use showdown everyone expected when it was filed in 2005. Google remains confident of its fair use case, I am sure, as the Authors Guild remains confident of its no-fair-use case. In the next few months, we will see the details.
Point to the plaintiffs.
Some Thoughts on Antitrust, Neutrality, and Universal Search
Cross-posted from the Antitrust and Competition Policy Blog
Update: I’ve written an extended version of this post for The Society for Computers and Law.
The heart of the gathering antitrust case against Google appears to be that it sometimes “manipulates” the order in which it presents search results, in order to promote its own services or to demote competitors. The argument has intuitive appeal in light of Google’s many representations that its rankings are calculated “automatically” and “objectively,” rather than reflecting “the beliefs and preferences of those who work at Google.” But as a footing for legal intervention, manipulation is shaky ground. The problem is that one cannot define “manipulation” without some principled conception of the baseline from which it is a deviation. To punish Google for being non-neutral, one must first define “neutral,” and this is a surprisingly difficult task.
In the first place, search engines exist to make distinctions among websites, so equality of outcome is the wrong goal. Nor is it possible to say, except in extremely rare cases (such as, perhaps, “4263 feet in meters”) what the objectively correct best search results are. The entire basis of search is that different users have different goals, and the entire basis of competition in search is that different search engines have different ways of identifying relevant content. Courts and regulators who attempt to substitute their own judgments of quality for a search engine’s are likely to do worse by its users.
Neutrality, then, must be a process value: even-handed treatment of all websites, whether they be the search engine’s friends or foes. Call this idea “impartiality.” (Tarleton Gillespie suggested the term to me in conversation.) The challenge for impartiality is that search engines are in the business of making distinctions among websites (Google alone makes hundreds of changes a year).
A strong version of impartiality would be akin to Rawls’s veil of ignorance: algorithmic changes must be made without knowledge of which websites they will help and hurt. This is probably a bad idea. Consider the DecorMyEyes scam: an unethical glasses merchant deliberately sought out scathing reviews from furious former customers, because the attention qua attention boosted his search rank. Google responded with an algorithmic tweak specifically targeted at websites like his. Strong impartiality would break the feedback loops that let search engines find and fix their mistakes.
Instead, then, the anti-manipulation case hinges on a weaker form of impartiality, one that prohibits only those algorithmic changes that favor Google at the expense of its competitors. Here, however, it confronts one of the most difficult problems of high-technology antitrust: weighing pro-competitive justifications and anti-competitive harms in the design of complicated and rapidly changing products. Many self-serving innovations in search also have obvious user benefits.
One example is Google’s treatment of product-search sites like Foundem and Ciao. Google has admitted that it applies algorithmic penalties to price-comparison sites. This may sound like naked retaliation against competitors, but the sad truth is that most of these “competitors” are threats only to Google’s users, not to Google itself. There are some high-quality product-search sites, but also hundreds of me-too sites with interchangeable functionality and questionable graphic design. When users search for a product by its name, these me-too sites are trying to reintermediate a transaction that has very little need of them. Ranking penalties directed at this category share some of the pro-consumer justification of Google’s recent moves against webspam.
A slightly different practice is Google’s increasing use of what it calls Universal Search, in which it offers news, image, video, local, and other specialized search results on the main results page, intermingled with the classic “ten blue links.” Since Google has competition in all of these specialized areas, Universal Search favors Google’s own services over competitors’. Universal Search is an obvious departure from neutrality, whatever your baseline—but is it bad for consumers? The inclusion of maps and local results is an overwhelming positive: it saves users a click and helps them get the address they’re looking for more directly. Other integrations, such as Google’s attempts to promote its Google+ social network by integrating social results, are more ambiguous. Some integration rather than none is almost certainly the best overall design, and any attempt to draw a line defining which integration is permissible will raise sharp questions about regulatory competence.
Some observers have suggested not that Google be prohibited from offering Universal Search, but that it be required to modularize the components, so that users could choose which source of news results, map results, and so on would be included. This idea is structurally elegant, but in-house integration also has important pragmatic benefits. Google and Bing don’t just decide which map results to show, they also decide when to show map results, and what the likely quality of any given map result is compared with other possible results. These comparative quality assessments don’t work with third-party plugin services.
It makes sense for general-purpose search engines to turn their expertise as well to specialized search. Once they do, it makes sense for them to return their own specialized results alongside their general-purpose results. And once they do that, it also makes sense for them to invite users to click through to their specialized subsites to explore the specialized results in more depth. All of these moves are so immediately beneficial to users that regulators concerned about Universal Search should tread with great caution.
For more on these issues, see my papers Some Skepticism About Search Neutrality, The Google Dilemma, and The Structure of Search Engine Law.
This is a brief observation about the central role that error costs must play in any discussion of orphan works policy. I made it at the Berkeley orphan works conference last month. By popular request (okay, by one person’s request), I’m putting it online here.
The first, and most obvious, errors are those made by copyright owners whose works become orphaned. Works don’t end up orphaned unless there’s been a mistake by the copyright owner. They make mistakes about whether they’re copyright owners, about whether they’re findable, and about whether there’s a potential audience interested in their works.
But these errors interact with errors made by potential users of the works. If users knew with certainty whether copyright owners would emerge and object to possible uses, there’d be no orphan works problem, because every search would lead either to genuine negotiations or to use without fear of suit. False negatives expose users to the risk of being sued and copyright owners to mistaken uses; false positives chill use without benefiting copyright owners.
And finally, there are error costs in the judicial system, which magnify the effects of errors at the previous two stages. Courts award remedies more than sufficient to compensate copyright owners, or they fail to award sufficient remedies. And the same problems face any system for dealing with orphan works: it could mistakenly declare that works are orphan when they’re not, or vice versa, and that searches were diligent when they weren’t, or vice versa.
The point is that the ubiquity of errors isn’t just an incidental feature of the orphan works debate: it’s the defining reality that causes there to be an orphan works problem at all, and with which any response to the problem must grapple.
GBS: Oral Argument Report in HathiTrust
Yesterday, Judge Baer held an hour-long hearing in the HathiTrust case. Although most of the time was spent on procedural matters, the Authors Guild’s lead attorney, Edward Rosenthal, did a very effective job of leveraging them into substantive points.
The first problem for the court was a discovery dispute. Three of the plaintiffs live far from New York and have objected to having their depositions taken there, and a fourth, J.R. Salamanca, is in ill health and bedridden. After some discussion not worth recounting, the defendants’ attorneys agreed to take the deposition of Salamanca’s literary agent instead, and the two sides agreed on logistical arrangements for the others.
The most significant consequence of the deposition skirmish is that the close of discovery has been effectively pushed back. It had been scheduled to be finished by May 20, which is self-evidently impractical now that some depositions won’t even happen until next week. Instead, it now appears that discovery will last until June 8. This fact puts pressure on the schedule for summary judgment. Judge Baer had asked for the motions for summary judgment to be fully briefed by July 20. But allowing the necessary time for each side to respond to the other’s papers means that the actual motions would need to be filed in mid-June, i.e. uncomfortably close to the end of discovery. Judge Baer at one point asked the parties if they could finish their briefing by the start of July so he could “put it under his pillow” when he goes away for the month. They agreed to go off and discuss the schedule, but I’d be quite surprised if the summary judgment deadlines were moved up.
And this scheduling tempest will spill over beyond its teapot: it seems likely to shape how the case will be argued. Joseph Petersen, appearing for the HathiTrust, tried to suggest that a quick ruling from Judge Baer on the motions for partial judgment on the pleadings (HathiTrust’s on associational standing and the Authors Guild’s on the applicability of copyright defenses) would help winnow the issues in the case, making for more narrowly focused summary judgment motions. Judge Baer wasn’t buying. He said, gruffly, that he was inclined to hold over these issues and decide them together with the summary judgment motions. This isn’t good news for HathiTrust, for reasons shortly to become apparent.
The first phase of the substantive oral argument dealt with HathiTrust’s motion to have the Authors Guild and other associations removed from the case for lack of standing (leaving only the individual plaintiffs). W. Andrew Pequignot delivered the argument in a style familiar to anyone who’s watched a moot court. He gave a clear, but completely wooden, summary of HathiTrust’s argument against the associations, focusing on the argument that each copyright plaintiff must prove individual ownership of the works on which it sues, so that an association would need to present individual facts for every one of its members. The judge tried to ask him what practical difference associational versus individual standing would make if HathiTrust ended up losing on the merits, a question which raises subtle questions about the scope of a possible injunction, but Pequignot didn’t engage with the question.
Ed Rosenthal then gave the Authors Guild’s reply, and showed why he’s the chair of his firm’s IP and litigation groups. The defendants copied ten million books, he said, in an act of “preemptive mass digitization,” and now they want to look at individual books in evaluating standing. The response to Rosenthal’s point, if there is one, is that the Copyright Act really does require proof of individual ownership, a requirement that has nothing to do with whether the infringer is accused of copying one book or a million. Rosenthal could have replied by saying that this would leave copyright owners without a way to challenge mass infringement, and the defendants’ natural surreply would have been that individual lawsuits would be more appropriate. But that last point was precisely the question Pequignot ducked—thereby ceding not only much of the standing issue but also the rhetorically intuitive high ground.
That mattered, because Rosenthal used the standing issue as a pivot to his argument that HathiTrust’s copying was substantively impermissible under the Copyright Act. Having set up the issue as a mass challenge to mass digitization, he was ready to roll with his argument that Section 108 provides the only relevant permission for copying here, permission that HathiTrust has far exceeded in copying books wholesale rather than retail. Thus, he claimed, the associations were the perfect plaintiffs to mount a program-wide challenge.
HathiTrust’s next moot court argument came from Allison Roach, who argued that no one had standing to challenge the Orphan Works Program since no identifiable books with copyrights owned by any of the plaintiffs had been made available or were in imminent likelihood of being made available through the OWP. Judge Baer was skeptical, saying that he was bothered that the libraries did “all of this” before there was an opportunity for plaintiffs to complain. Roach’s answer was that no books had been made available, only a list of candidates, and that the plaintiffs were asking for an injunction against the entire Orphan Works Program without concrete facts about specific books it would infringe.
Rosenthal’s response here was a little less vivid. He emphasized that the University of Michigan had set up a mechanism for its orphan works. Some plaintiffs found their books on the list; the University suspended the program. If, he argued, this meant there was no right to object because there was currently no program, then there would never be a circumstance in which the program’s legality could be addressed. Any copyright owner who tried to object would be defined out of the class of copyright owners with standing to object, and this couldn’t be. (His point illustrates why the standing argument may be too clever by half when it comes to the Orphan Works Program, and why suspending the program might end up being ineffective in insulating it from judicial review.)
This brought the court to the plaintiffs’ motion for judgment on the pleadings that the libraries couldn’t raise fair use, Section 108, or other Copyright Act defenses. Here, Rosenthal led off by arguing that Congress passed a specific statute with directions for libraries, which the defendants disregarded. He then acted annoyed that the defendants, in their responses (see the bottom of our page on the case) characterized this as a broader attempt to stop libraries from claiming fair use, ever. No, Rosenthal said, the plaintiffs don’t argue against other library uses, just that they can’t digitize every book. They chose to scan in a large project, and the burden should be on them to justify that project. Once again, it was an oral advocacy gem.
Joseph Petersen then gave a rebuttal that ran through HathiTrust’s brief. The plaintiffs, he said, tried to argue that libraries have no fair use rights, but only the specific rights granted in Section 108. When shown how absurd it would be to claim that libraries alone in society have no fair use rights, the plaintiffs changed course and argued that the case isn’t about library copying in general, but only about this program. And this, he said, showed why this issue wasn’t appropriate for the “rule 12” context (i.e. a motion for judgment on the pleadings): it obviously depends on specific facts about the libraries and what they’re doing. He then recounted, quickly, some of the libraries’ arguments about the symbiosis between Section 108 and fair use, about the noncommerciality of the project, and about the text of Section 108.
He was followed by Daniel Goldstein, on behalf of the National Federation of the Blind. He ran through some of the history of the accessibility of books to the blind, and emphasized that digitizing books raises the number of accessible titles from tens of thousands to tens of millions. Now, blind and visually disabled students can access HathiTrust’s digital database (when they provide appropriate certification of their disability). They’re the only group that has access to the database, but it gives them the same access to the books themselves that sighted students have. He used this to argue that the plaintiffs’ assertions about categorical exclusions from fair use and other copyright defenses would tell libraries that they can’t make the copies for the print-disabled that they need to make to comply with the Americans with Disabilities Act and the Rehabilitation Act.
All in all, yesterday’s skirmish was a minor one in the arc of the case. The discovery disputes were sorted out, and the schedule will be. Because Judge Baer strongly signaled that he’ll put the immediate motions off until he considers the summary judgment motions, that just puts the interesting and important issues off until the even more interesting and important summary judgment ruling.
Still, the skirmish was a clear win for the Authors Guild and its co-plaintiffs. Rosenthal made common-sense arguments about standing that—from the audience at least—seemed like they were persuasive to Judge Baer. He leveraged his responses to the defendants’ motions on standing to bolster his own argument on the applicability of fair use. And because Judge Baer is likely to hold the present motions over, he put the defendants in the difficult position of arguing that they are entitled to a blanket fair use defense at the same time as they argue that fair use is a fact-specific inquiry requiring individual participation.
The defendants’ decision to press the standing issues, at least in the way they did, now appears like a mistake. Both at the hearing and in the case overall, the plaintiffs have been able to use their responses to the standing motion to wrong-foot HathiTrust and take control of the case’s timing. As readers of this blog know, I don’t think much of the plaintiffs’ own judgment on the pleadings motion, but I have to give them and their lawyers credit for using it at the hearing to define the narrative of the case on their terms. They chose their counsel well.
The other matter on display yesterday is how different Judge Baer is from Judge Chin. Where Chin’s attitude is generally thoughtful and gentle, Baer tends more towards the gruff and the impatient. (It may not have helped that the hearing was sandwiched between three criminal matters and an afternoon of conferences, and that Judge Baer’s schedule, as he announced, had no room for lunch.) His fast-track schedule for the case is an indication of where Baer’s priorities lie, and my sense is that he saw the hearing more as a way to keep the case moving properly than as an occasion for deep reflection on the issues.
Assuming no curveballs, the next major dates in these cases will be in mid-June, when a variety of major motions will fall due. Motions for summary judgment will be due June 14 in Authors Guild, the visual artists’ motion for class certification will be due June 13, and summary judgment motions in the HathiTrust case will arrive somewhere around then, too, depending on what the parties agree to.
Spam Alert: The Institute for Cultural Diplomacy
In the past few years, I’ve received numerous emails from the Institute for Cultural Diplomacy announcing upcoming conferences and educational programs. The messages say, “If you do not wish to receive emails from the ICD in the future, please send us an email to info@culturaldiplomacy.org indicating this.” I have, six times, spread out over half a year. It didn’t work. Twice, I cc:ed Mark Donfried, the ICD’s “Director and Founder,” over whose name the emails are written. I never received a response from him, just more spam. I called the ICD’s office, in Germany, and asked to be removed from their list. The woman who answered the phone promised I would be. She lied: the email continued.
This is unethical behavior, inconsistent with the values the ICD supposedly represents. It’s disrespectful, dishonest, and disreputable. I doubt that any of my readers run in ICD circles, but if you do, please think hard about what it says about the ICD as an organization.
Inside the Georgia State Opinion
On Friday, the long-awaited decision in the Georgia State e-reserves case (a.k.a. Cambridge University Press v. Becker) dropped. By way of context, the case is a challenge by three academic publishers (Oxford University Press, Cambridge University Press, and Sage Publications) against Georgia State University’s e-reserves policy. The publishers sued in April 2008, in a lawsuit funded by the Association of American Publishers and the Copyright Clearance Center, claiming that the e-reserves policy went far beyond the bounds of fair use. Georgia State, as a state university, invoked the doctrine of sovereign immunity, the practical implication of which is that the publishers can only obtain injunctions against future infringements, not damages for past infringements. Since it tightened up its e-reserves policy in December 2008, it also successfully argued to the court that only the uses made under the new policy should be relevant to any potential injunction.
There was a trial a year ago, and then long silence from the court. Now we know why it was taking so long: the opinion is 350 pages. That number is a little misleading, in that over two-thirds of the opinion is dedicated to a highly methodical copyright ownership, infringement, and fair use analysis of seventy-four separate claims of infringement, using standard templates and highly repetitive language. Having now dug through the details, I’d like to offer a few observations.
First, over a third of the claims didn’t even make it to the fair use stage at the heart of the case. In many cases, the publishers were unable to prove to the court’s satisfaction that they owned the copyright in the portions of the books that were copied and uploaded. Sometimes they couldn’t produce a timely registration certificate and there were proof problems with originality; sometimes they couldn’t find a work-made-for-hire agreement or copyright assignment from the authors of individual chapters in edited volumes. The court was unsympathetic: no documented chain of title, no lawsuit. There’s a looming e-rights mess, loosely akin to the robosigning mess around ownership of securitized mortgages: in both cases, the putative owners don’t have all their papers in order. This opinion either recognizes or contributes to the mess, depending on your point of view.
Other claims dropped out before the fair use stage because the excerpts were uploaded to the e-reserves system but never downloaded by students. The court dismisses these from the lawsuit as de minimis, explaining that these uses by the University, while technically implicating the copyright owners’ exclusive rights, don’t affect the incentives for authors to create. This puts more teeth in the de minimis doctrine in copyright: it goes beyond the view that de minimis means “not substantially similar.” It also strengthens the argument that “internal use” copies never used to reach an audience that reads them for their content don’t infringe. Think, for example, of the HathiTrust’s archive of scans from Google Books.
(As an aside, the e-reserve logfiles played a key evidentiary role in the case. Specific users were never identified, but if a file had a total hit count of two, it’s unlikely that students actually read it. This stands in contrast to other cases, like American Geophysical, which was tried by sampling: the parties selected a single scientist at random, examined his files looking for photocopies, and treated him as representative of a cohort of 500. Here, the logs permitted an analysis of the copying done for numerous faculty members—presumably all those who assigned any excerpts from any of the plaintiffs’ books.)
When the court did reach fair use, it held across the board that two of the four factors favored Georgia State. The purpose of the use, while not transformative, was nonetheless for highly favored educational purposes by a nonprofit institution. And the nature of the works was consistently informational.
On the third factor, the amount copied, the court repudiated the Classroom Guidelines, calling them “not compatible with the language and intent of § 107.” It noted that the numerical limits in the Guidelines are so stringent that not one of the excerpts at issue in the case would fit within them. It was particularly uninterested in the Guidelines’ position that copying not “be repeated with respect to the same item by the same teacher from term to term,” which the court described as “an impractical, unnecessary limitation.”
Instead, the court fashioned its own quantitative test. For books of nine or fewer chapters, the court set a threshold of 10% of the total page count; for books of ten chapters or more, the threshold was a single complete chapter. (The chapter-based rule creates an odd incentive for publishers to create books with a surfeit of tiny chapters.) Copying of any amount under this threshold, the court held, would be treated as “decidedly small.” In practical terms, this ended up being a one-sided bright-line rule: copying of less than 10% or one chapter always ended in a fair use win for Georgia State.
Finally, the fourth factor, the effect on the market, favored the publishers whenever CCC was offering a digital license for copying the book in question, and favored Georgia State whenever there was “no evidence in the record to show that digital excerpts from this book were available for licensing” as of the date of infringement. In practice, this was another one-sided bright-line rule: no digital license meant an instant win for Georgia State. The court repeatedly emphasized that students would not have bought the assigned books as a substitute for the excerpts posted on the e-reserve system.
This treatment of licensing is likely to have significant implications. On the one hand, it suggests that libraries may have a freer hand to make expanded uses of orphan works, since by definition, no one will be licensing them. And on the other, the court didn’t consider photocopying licenses to be a suitable substitute for digital licenses. This will put significant pressure on publishers to turn on digital licensing.
Only in seven instances did Georgia State use more than 10% or one chapter of a book that was available for digital licensing. When this happened, the court took a more detailed look at the specifics of the book’s licensing market and the portion copied. Generally, this turned on whether the book made significant revenues via licensing: if so, the use was unfair. (In one instance, the court did a “heart of the work” analysis under factor three to find no fair use because the professor had assigned chapters that “essentially sum up the ideas in the book.”)
Thus, the operational bottom line for universities is that it’s likely to be fair use to assign less than 10% of a book, to assign larger portions of a book that is not available for digital licensing, or to assign larger portions of a book that is available for digital licensing but doesn’t make significant revenues through licensing. This third prong is almost never going to be something that professors or librarians can evaluate, so in practice, I expect to see fair-use e-reserves codes that treat under 10% as presumptively okay, and amounts over 10% but less than some ill-defined maximum as presumptively okay if it has been confirmed that a license to make digital copies of excerpts from the book is not available.
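Because the court’s prongs are so amenable to bright-line treatment, the resulting analysis can be summarized almost mechanically. Here is a minimal sketch of that three-prong logic in Python; the function, its parameter names, and the strict-inequality reading of “decidedly small” are my own illustration of how an e-reserves checklist might encode the opinion, not anything that appears in it:

```python
def likely_fair_use(pages_copied, total_pages, chapters_in_book,
                    whole_chapters_copied, digital_license_available,
                    significant_license_revenue):
    """Illustrative sketch of the Georgia State e-reserves analysis."""
    # Factor three: the court's "decidedly small" threshold.
    if chapters_in_book >= 10:
        decidedly_small = whole_chapters_copied <= 1
    else:
        decidedly_small = pages_copied < 0.10 * total_pages

    # Under the threshold, Georgia State won every time.
    if decidedly_small:
        return True

    # Factor four: no digital license on offer meant a win for Georgia State.
    if not digital_license_available:
        return True

    # Over the threshold with a license available: the outcome generally
    # turned on whether the book made significant licensing revenues.
    return not significant_license_revenue
```

For example, one chapter of a twelve-chapter book comes back as likely fair use regardless of licensing, while three chapters of the same book flip on whether a digital excerpt license was available and earning. The real opinion, of course, layered case-by-case judgment on top of these rules in the seven close cases.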
The most interesting issue open in the case is the scope of any possible injunction. Given that Georgia State won on sixty-nine out of seventy-four litigated claims, while the publishers won on only five, I expect that any injunction will need to be rather narrow. But given how amenable the court’s proposed limits are to bright-line treatment, it is likely that the publishers will push to write them into the injunction.
My bottom line on the case is that it’s mostly a win for Georgia State and mostly a loss for the publishers. The big winner is CCC. It gains leverage against universities for coursepack and e-reserve copying with a bright-line rule, and it gains leverage against publishers who will be under much more pressure to participate in its full panoply of licenses.
Google’s Wardriving: A Retrospective
We now know much more about the Google Street View WiFi story, thanks to Google’s decision to release an unredacted version of the FCC report, to the New York Times’s identification of the Google employee involved as Marius Milner, and to further reporting from Ars Technica. The picture it paints is in some respects more flattering to Google, and in some respects worse.
Milner is the creator of NetStumbler, a tool for detecting and analyzing WiFi access points. It makes sense in hindsight that he ended up using his 20% time for the part of the Street View project that aimed to build a database of WiFi networks. And it turns out that he thought about the ethics and legality of recording payload data. He appears to have read some law-review scholarship on wardriving. He considered potential privacy issues, and concluded that the mobility of the Street View cars would minimize the risk of extensive data-gathering from any one user. Further, he emphasized that none of the data would be shared with Google users.
This is, I have to say, above the baseline of ethical cognition for programmers. Looking to legal scholarship at all is quite unusual. In fact, Milner’s thoughtfulness strikes me as roughly par for the course for front-line Google technologists. It’s a company that hires reasonably thoughtful people and encourages them to think about the implications of what they do for society, both good and bad.
But if Google is a company of smart, reflective, and well-intentioned individuals, collectively they make bad choices. Milner put his privacy concerns and the details of the WiFi payload recording in a design document. The document included a “to do”: “[D]iscuss privacy considerations with Product Counsel.” He talked to a member of the search quality team about the idea; he circulated the design document together with his code to Street View’s project leaders, who forwarded it to the entire Street View team. And he exchanged emails with other Street View programmers and managers that made clear Google was collecting payload data. But nothing happened. For fifteen months, Google Street View cars sucked up and recorded WiFi payload data.
As I said in an earlier post:
When it comes to privacy, this is a company out of control. Google’s management is literally not in control of the company.
Google’s Street View managers failed badly at their jobs. One of them “pre-approved” the design document before it was written, demonstrating complete failure to understand the purpose of managerial review. No one followed up to make sure the discussion with Product Counsel actually happened. Other engineers read the design document and Milner’s code, but either missed the fact that it was collecting payload data or didn’t realize that this could be a potential issue. Again, this is a failure of management: it’s an important part of their job to make programmers aware of the possible legal trouble zones in the areas they’re working on.
Milner has invoked the Fifth and isn’t talking to reporters. He made a mistake, but he’s not a legal expert and it’s a bit unfair to expect him to be. No, his managers let him—and the rest of us—down.
Today, Judge Chin heard oral argument in the Google Books case. I couldn’t attend, due to a prior commitment, but three of my students were there. David Berson NYLS ‘12, Kristoff Grospe NYLS ‘12, and Raphael Majma NYLS ‘11 took detailed notes, and this post is based on their careful reporting.
There were two issues at the hearing today:
- Whether the Authors Guild and visual artists’ groups have “associational standing” to sue Google on their members’ behalf.
- Whether to certify a class of authors so that the Authors Guild’s lawsuit can proceed as a class action.
(One of the things Judge Chin established at the hearing was that if he grants class certification, then the associational standing of the Authors Guild becomes unimportant to the litigation. The reverse isn’t true: if the 8000-member Authors Guild is a proper associational plaintiff, certification of a much larger class is still a live issue.)
Argument started with the associational standing motion. Google’s lawyer, Daralyn Durie, argued that each author will present sufficiently different issues that their individual participation will be required. Those individual issues come in two flavors: it might be hard to prove that particular authors are “legal or beneficial owners” who are entitled to bring suit in the first place, and Google’s fair use defense might apply differently to different authors. Judge Chin pressed Durie on both theories.
As to the ownership theory, Google claims that there’s enormous diversity in publishing contracts in terms of the language they use. Since many authors don’t receive royalties directly attributable to the display of short excerpts by publishers, they’re not “beneficial” owners of the short-excerpt-display right. The Authors Guild’s lawyer, Joanne Zack, on the other hand, argued that royalties are a typical feature of publishing contracts, making the authors beneficial owners entitled to sue. She had a good backup argument: the Copyright Act makes copyright registrations prima facie evidence of ownership, so that any burden of proof would be on Google to rebut the presumption that a particular author retains her standing to sue.
Judge Chin, in his questions to Durie, cut straight through to another possibility: that proof of ownership could be deferred to the remedy stage of the lawsuit. That is, the Authors Guild could litigate on behalf of whichever of its members are copyright owners with standing to sue, whoever they are. If it wins, at that point the individual contracts could come into evidence in deciding which of them are entitled to damages or an injunction. It was a pragmatic argument, and Durie’s reply was more formalist: the Copyright Act requires that the plaintiffs be copyright owners in order for the court to decide the issue of infringement at all.
Chin also asked Durie whether Google really wanted to be litigating millions of ownership questions individually. Durie offered a reply that lawyers are likely to find elegant and non-lawyers frustrating. Federal civil procedure already has a good device for handling multiple cases with similar legal issues: collateral estoppel. If Google loses against the three named plaintiffs on a genuinely shared issue, then other authors will be able to come into court and take advantage of the ruling. Google will be “estopped” from raising the issue again in those “collateral” lawsuits brought by other authors.
As for the diversity of possible fair use evaluations, the parties dueled over evidence. Google has a survey showing that some authors perceive a benefit from being included in Google Book Search (thereby showing divisions among authors). Durie needled the Authors Guild for not having its own survey, indeed for not even canvassing its own members. The Authors Guild, on the other hand, has a pair of expert reports that it claims help establish common economic harm to authors; Durie hinted at a few reasons that the reports should be excluded from consideration until later in the case, when Google will have had more of a chance to depose the experts and prepare a reply. (Personally, I think the reports are so tangential to the issues of common harm that it makes little difference; they’ll be relevant only if and when the case reaches the fair use merits.) Judge Chin, with what I can only imagine was a poker face, said he would look at both the survey and the expert reports and ask for additional submissions if he had doubts or questions.
Chin also pressed a theme that the Authors Guild has emphasized in its discussions of the case: Google scanned books en masse, so why should it suddenly insist that individualized treatment is necessary? Durie emphasized that Google doesn’t treat all books identically: dictionaries, for example, are excluded from snippet display. But, Judge Chin asked, surely there are only a finite number of different classes: poetry, cookbooks, fiction, how many could there be? And here Durie conceded that yes, Google makes its decisions on a “categorical” basis by type of work.
James McGuire, on behalf of the visual artists, added a few points specifically on their behalf. The cleverest was that it wasn’t just twenty million books that Google has scanned, but twenty million covers. There, since the fair use arguments don’t revolve around snippets, but rather entire covers, presumably there will be much less individual variation. On the whole, though, he was content to rest on the artists’ brief, which, let it be said, is both well-drafted and in Garamond.
Turning to class certification, the parties had relatively less to say, in substantial part because so many of their points had already been aired. Judge Chin rushed Zack through her argument: his only real question was to prompt her for a response to Google’s argument that fair use determinations for snippet display are inherently individualized. Her response was that Google, in its actual fair use defense on the merits, won’t actually be raising individualized issues.
Zack used some of Google’s answers to interrogatories (formal questions directed to the other side in a lawsuit that require it to state clearly the legal theories it will be using and the factual bases behind them) to claim that none of its actual fair use defense will be genuinely individualized. Unfortunately, since those documents aren’t yet part of the public court record, I can’t share them here or comment on what they say. Still, I previously called Google’s arguments here “small beer,” and it seems like Google hasn’t really found a way to distinguish some authors from others in ways that really require individualized fair use assessments.
Durie dwelt less on the commonality of Google’s conduct than on the differences in authors’ circumstances. Some authors will benefit from being in Google Book Search; others won’t. Judge Chin again raised the subsets-of-authors argument: can’t we just divide authors up into, say, eight or nine groups (in-print fiction, out-of-print academic monographs, etc.) and deal with those groups separately? Durie, pointing to the survey, argued that analyzing the effect of Google Books on book sales really does require examining the effect on each author individually because their circumstances vary widely.
There was an argument notable by its absence. The academic authors sharply objected to the Authors Guild as a representative of their interests. But Durie didn’t pick up on their argument at all. She cited the academics for the benefits they receive from Google Book Search (viz. dissemination of their ideas and assistance in research), but not at all for their complaints about the non-academic bent of the Authors Guild and the named plaintiffs. Similarly, while Google could in theory have raised some of C.E. Petit’s points (esp. 3, 4, and 6) against certification, it didn’t.
Two further points got some airtime; it’s unclear how receptive Judge Chin was to either of them. Zack tried to argue that the objection to standing was untimely, coming as it does six years into the lawsuit. (Of course, the parties were rather busy being bestest friends for the majority of that time …) And Durie signaled that Google will be trying to object to the Authors Guild’s theory that it “distributed” the scans to libraries, because it hasn’t distributed any material objects to them. (Caselaw is generally against this argument, but not so squarely that it’s clearly foreclosed.)
At the end of the hearing, Judge Chin surprised no one by reserving decision. The parties will go ahead with their summary judgment motions, with oral argument coming in September. (Reminder: the Public Index now has a timeline of upcoming dates in the cases.)
A few general observations. First, Judge Chin’s questions were thoughtful. He wasn’t trying to press the parties on their weak spots; his questions were clearly directed to clarifying where the key areas of dispute were. Second, at least from the perspective of someone who wasn’t in the courtroom, the case was well-argued on both sides. Zack and McGuire seem to have a slightly easier case on these motions, and they extracted some concessions from Durie with Judge Chin’s help. But Durie, for her own part, made some good points: the subtle but well-argued kind that one would expect from a real pro.
Third, the parties danced a bit around one of the key questions: what, precisely, the allegedly infringing conduct is for which the Authors Guild seeks to hold Google liable. Durie suggested at one point that the “right” at issue is the right to display a small excerpt of a book. Zack didn’t reply directly, but in other briefs and arguments, the Authors Guild has framed the case as being about the mass scanning, the distribution of copies to libraries, and the security risks of holding a complete corpus. This is presumably going to be sorted out sooner or later, quite possibly by Judge Chin himself.
It’s hard to predict what will happen next. My uninformed read is that today was a tactical victory for the plaintiffs: Google didn’t offer a compelling argument for why the case can’t proceed as a collective lawsuit. But that may not be strategically significant: the case is clearly heading towards the real battle over fair use, and I didn’t get the sense that the Authors Guild significantly improved its position in terms of selling Judge Chin on its claim that scanning and indexing is unfair. That may just indicate that Judge Chin, quite properly, is focused on the procedural motions currently in front of him. Or it could be a sign that the Authors Guild doesn’t have enough arrows in its quiver to hit the no-fair-use target.
Stay tuned …