The Laboratorium
March 2012

This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at http://laboratorium.net

And We Are Live at Ars Technica


I said there was more Sealand news coming, and here it is! Ars Technica has published a shorter spinoff of my Sealand article, under the title Death of a Data Haven. This version focuses on the rise and fall of HavenCo, with brief excursions into Sealand’s history and my explanation of why its failure was overdetermined. It was also my chance to bring back a few outtakes from the longer article, such as the Toxic Barge Project. Ars did its usual crackerjack editing and illustration job. I’m really happy with the end result—think of it as the short article I initially set out to write, before realizing that I’d need to research and write the long version first.

Here’s the introduction:

A few weeks ago, Fox News breathlessly reported that the embattled WikiLeaks operation was looking to start a new life on the sea. WikiLeaks, the article speculated, might try to escape its legal troubles by putting its servers on Sealand, a World War II anti-aircraft platform seven miles off the English coast in the North Sea, a place that calls itself an independent nation. It sounds perfect for WikiLeaks: a friendly, legally unassailable host with an anything-goes attitude.

But readers with a memory of the early 2000s might be wondering, “Didn’t someone already try this? How did that work out?” Good questions. From 2000 to 2008, a company called HavenCo did indeed offer no-questions-asked colocation on Sealand—and it didn’t end well.

HavenCo’s failure—and make no mistake about it, HavenCo did fail—shows how hard it is to get out from under government’s thumb. HavenCo built it, but no one came. For a host of reasons, ranging from its physical vulnerability to the fact that The Man doesn’t care where you store your data if he can get his hands on you, Sealand was never able to offer the kind of immunity from law that digital rebels sought. And, paradoxically, by seeking to avoid government, HavenCo made itself exquisitely vulnerable to one government in particular: Sealand’s. It found that out the hard way in 2003 when Sealand “nationalized” the company.

And We Are Live at the University of Illinois Law Review


The final version of my article Sealand, HavenCo, and the Rule of Law is now live at the University of Illinois Law Review’s site. Since roughly this time last year, two things have happened to it. First, it’s received a truly careful editing job from the Law Review’s staff. Max DeLeon (the lead article editor on the piece) and Brittany Viola (the editor-in-chief) went above and beyond the call of duty, checking and rechecking the details, calling me out on anything that wasn’t properly supported or didn’t read well. Sometimes I carp about the law-review system, but there are times that it works well, and when it works well, it works like this.

Second, I’ve been able to examine several thousand pages of Sealand-related files from the United Kingdom’s National Archives. The government’s files from the most important periods in Sealand’s history have all been opened for public access, and they shed significant light on how it is that Roy Bates and his family have been able to remain in possession of Sealand for so long. The article now contains a detailed reconstruction of the bureaucratic fumbling that helped Roy Bates transform himself into a comedic folk hero in 1967 and 1968. It also tells a more complete history of the bizarre “coup” of 1978, including some darkly hilarious behind-the-scenes international diplomacy.

Please check it out. I hope you have at least half as much fun reading it as I had writing it. And stay tuned for some further Sealand-related news …

Whereof One Cannot Speak


There’s a simple reason why I haven’t commented on the Department of Justice’s warning to Apple and publishers over the agency model for e-book pricing: I don’t understand the issues yet. Unless and until I do, I know better than to hold forth on them.

Antitrust economics is specialized, technical, and unintuitive. Everyone can understand the basic idea that a cartel to hold up prices will hurt consumers more than it benefits the cartel. But beyond that point, things get hairy quickly. A great many plausible-sounding arguments are either internally inconsistent or unmoored from marketplace realities. Other claims are true about some industries at some times, and false in other industries at other times. Sorting good from bad takes not just care but a willingness to grapple with both formal models and empirical data. I’m not an expert in this; my progress here is slow and halting.

Antitrust law is even harder. Its division of conduct into various silos (“horizontal agreements,” “resale price maintenance,” etc.) creates a layer of serious technical complexity. And not only is it caught up in the raging theoretical debates about what economic principles are actually true in the real world, but it’s also thoroughly concerned with error and administrative costs. Antitrust doctrine is incomprehensible unless you recognize that it is driven by the expectation that courts will regularly get things wrong: either condemning pro-consumer practices or blessing anti-competitive ones. Again, I’m not an expert, just plausibly informed on a few narrow slices of antitrust law.

I’ve been reading all of the commentary I can find about the agency-pricing investigation, and I have one thing to say to 95% of the commentators. Please stop. If you haven’t done an antitrust analysis from the top, or read one carefully, or been doing this for so many years that you don’t need to, you literally don’t know what you’re talking about. This is not an area in which knowledge of the industry and layman’s intuition about economics will guide you to approximately the right answer. Believe me when I say that I do not know what the best outcome is here.

Cato Versus Caesar


Almost forty years ago, Charles Koch co-founded what is now known as the Cato Institute, the nation’s preeminent libertarian think tank. It had an unusual management structure for a non-profit: the five founders were made equal shareholders. Over time, Cato’s and Koch’s views drifted apart; Cato is relatively more committed to libertarian policies, while Koch is relatively more committed to Republican electoral victory. (In David Auerbach’s terms, Cato is nearly pure Type L, while Koch has a substantial undercurrent of Type C.) Although Charles Koch and his brother David are both now shareholders in Cato, Koch money is only a small portion of its annual operating budget.

Last week, the Koch brothers sued to take control of Cato. Their theory is that under the shareholder agreement, they hold an option to purchase the shares of the recently deceased William Niskanen, giving them an absolute majority of the extant shares. Cato’s current president, Ed Crane, has called the move a “hostile takeover” and argued that it’s an attempt to “transform Cato from an independent, nonpartisan research organization into a political entity that might better support [the Kochs’] partisan agenda.” Numerous prominent libertarian commentators—see, e.g., Jonathan Adler and Julian Sanchez—have weighed in against the move, on the grounds that it would at the least undermine Cato’s perceived independence, and at the worst pull the organization away from libertarian principles.

The irony is thick. And I don’t mean this in a tone of “Go it, husband! Go it, bear!” schadenfreude. As neither a libertarian nor a Koch brother, I have no direct stake in the fight, nor do I think public debate in this country will gain much if that fight is painful and protracted. I don’t know who is right as a matter of non-profit law; I can say only that whoever is right on the law ought to win. I think Adler is correct that it’s not in the Kochs’ own interest to take over Cato this way—but even as a liberal, I can say that this is their own mistake to make.

No. The irony here is that the nation’s preeminent libertarians—who ought to be exquisitely attentive to freedom of contract, institutional design, and observing the letter of the law—couldn’t get their rights right. They built this Streeling of libertarian thought, with its $20+ million annual budget and world-wide reputation, on a shareholding structure that is either actually or nearly under the control of people who do not share many of their values and have not for decades. The entire enterprise may well have been for years only one death away from Koch domination. If so many libertarians are now so worried about a Koch takeover, one has to ask, why have they spent so many years building a brand with an unshielded thermal exhaust port?

The answers are obvious, and completely understandable. Because few people knew about Cato’s unusual share-based ownership structure. Because those few who knew didn’t think the Kochs’ power play was a serious possibility. Because Cato was there, and so it made sense as a coordination point, whatever its weaknesses. Because each individual project made sense, regardless of the long term. Because they never even thought to ask. All completely human, all quite arguably reasonable, and all things any of us would likely have done in the same position. And yet the end result could well be to deliver one of the world’s most recognizably libertarian institutions into the hands of men who would use it for other purposes.

I could not tell you how many times I’ve encountered libertarian arguments about law that assume that individuals can and ought to use contracts to protect themselves against just this sort of contingency. Don’t worry about users clicking “I agree” to overreaching terms of service; if they truly cared about the terms, they’d negotiate for better ones. Don’t worry about people who refuse to buy health insurance; they’re making a rational choice for themselves. Don’t worry about minority shareholders, don’t worry about franchisees, don’t worry about all the other groups that find themselves on the wrong end of a bargain that always seems to tip against them in the long run—if they wanted better protections, they could and should have negotiated for them up front.

Except they don’t. They never do. And really. If the uber-libertarians of the Cato Institute can’t watch out for themselves, what hope is there for the rest of us?

GBS: Authors Guild Goes for an Early Knockout


In December, HathiTrust moved for “partial judgment on the pleadings” on the issue of associational standing in the parallel case against Google’s library partners. Judgment on the pleadings is an early pretrial tactic: the party asking for it, in essence, says that there’s no need to move to the fact stage of the lawsuit. Even if every single thing the other side alleges turns out to be true, it wouldn’t make a difference: the law still favors the moving party.

Well, two can play at that game. The Authors Guild and its allies filed their own motion on Tuesday for partial judgment on the pleadings. And this one is a doozy: it asks the court “to hold that Defendants’ mass digitization and orphan works projects are not protected by any defense recognized by copyright law.” If they win this motion, the case is all but over, and the libraries will almost certainly need to suspend their cooperation with Google and give up their digital copies of the books.

The motion deals with two sections of the 1976 Copyright Act that are expected to play leading roles in the libraries’ defenses: Section 107 on fair use and Section 108 on library copying. In many respects, the sections couldn’t be more different. Fair use is a standard: broadly and vaguely phrased, inherently case-specific, requiring elaboration by the courts. The library privileges are rules: narrowly and tightly phrased, far more mechanical in their application. And their interaction is … disputed.

Before the 1976 Act, there was no provision on library privileges in the Copyright Act. Instead, libraries relied on fair use when they photocopied materials for their patrons. The scope of that fair use defense, though, wasn’t determined by rulings from the courts. It was a mixture of custom, forbearance, confusion, and several sets of guidelines promulgated by different groups at different times, most notably a “Gentlemen’s Agreement” from 1935 that allowed individual reproductions for research purposes only. (For more on this pre-1976 history, see this background paper by Mary Rasenberger and Chris Weston and this paper on the Gentlemen’s Agreement by Peter Hirtle.)

This detente came under severe strain in the 1960s and 1970s under the influence of much better copying technologies. Patrons came to value the convenience of photocopying; libraries came to appreciate photocopying’s value in preservation; authors and publishers worried that photocopying would cut severely into their sales. There were proposals to codify library reproductions into hard-and-fast fair use rules in the Copyright Act, but libraries and copyright owners were enormously far apart on what those rules ought to say. Meanwhile, a lawsuit by the publisher Williams and Wilkins against the National Institutes of Health and the National Library of Medicine resulted in a single-judge ruling that library photocopying was not fair use, then a 4-3 ruling in the Court of Claims that it was, and then a 4-4 Supreme Court split decision, which had the effect of leaving the NIH’s victory intact while taking the Court of Claims’s opinion off the books as binding precedent. Uncertainty and more uncertainty.

As ultimately enacted, the 1976 Copyright Act punted on the question of what the general law of library copying under fair use ought to be. Section 108, with its detailed and narrow rules allowing certain types of archival copies and certain distributions of copies to patrons, was in the Act. But it contained two clauses whose meaning was never the subject of clear agreement among the various interest groups pushing and prodding Congress. First, there was the “systematic” exception:

(g) The rights of reproduction and distribution under this section extend to the isolated and unrelated reproduction or distribution of a single copy or phonorecord of the same material on separate occasions, but do not extend to cases where the library or archives, or its employee … (2) engages in the systematic reproduction or distribution of single or multiple copies or phonorecords of material described in subsection (d) [i.e., using the Section 108 privileges] …

And second, there was the fair use savings clause:

(f) Nothing in this section … (4) in any way affects the right of fair use as provided by section 107 …

The libraries took these to mean that library photocopying programs had been a fair use before the 1976 Act and would continue to be a fair use. On the other hand, copyright owners read these provisions to codify a few specific photocopying practices as being legal, while rendering the others categorically off-limits.

The Authors Guild’s new motion falls definitively in the latter camp. It repeatedly refers to the libraries’ actions as “violations” of Section 108, with the implication that to fall outside of its protections is to infringe. And it makes a detailed argument that the libraries fall outside each subsection of Section 108 that could possibly apply. This is the core of what the Authors Guild is going for: a judicial declaration that Section 108’s threshold conditions haven’t been met, taking it off the table as a possible defense early on. (I’ll discuss the details in a future post, once HathiTrust’s response brief is in.)

The part of the brief that drew the most attention last week—the fair use argument—is also the briefest. The Section 108 arguments take up twelve pages; the fair use arguments only three. But the Authors Guild’s argument here is aggressive and more than a little breathtaking:

Defendants will undoubtedly seek to defend themselves by arguing that their activities constitute fair use … However, rules of statutory construction, case law and legislative history definitively establish that Section 107 is unavailable to Defendants under these circumstances.

There’s a reason, though, why this sweeping argument—failure to qualify for Section 108 automatically disqualifies a library from claiming fair use—is relegated to the tail of the brief. It’s just not very strong, and the brief’s authors know it. Part I does an excellent job knocking down some of the specific Section 108 defenses, but Part II on fair use is tactical. It could wrong-foot HathiTrust’s legal team and force them to litigate fair use before they have developed sympathetic facts. It could dispose the judge to regard the fair use claims with suspicion from the start. It could fire up the Tea Party anti-library faction of the author community. All of these are part of a good litigator’s toolkit: confuse your opponents, sway the judge, please your client. But they shouldn’t be mistaken for an argument that the litigator expects to prevail.

The first problem is that judgment on the pleadings is far too early in the case for a fair use ruling. Fair use requires case-by-case balancing, which requires developing the facts that make that balancing possible. If there is any plausible set of facts consistent with the libraries’ arguments that would support a fair use claim, the Authors Guild’s motion must be denied. For purposes of the motion, everything the libraries allege—absolutely no effect on the market, perfect quality control in the Orphan Works Program, etc.—must be taken as true.

The only way the Authors Guild can get around the enormously high factual burden facing it at this procedural stage is to make a purely legal argument: that failure to comply with Section 108 categorically prevents reliance on fair use, across the board, no factual questions asked. But here, even its own sources betray it. The 1983 Copyright Office report the Authors Guild quotes on the relationship between Section 108 and fair use says only that fair use is “often clearly unavailable as a basis for photocopying not authorized by section 108.” Read that again: “often” unavailable, not “always” unavailable. This is not judgment-on-the-pleadings material.

The brief also features some creative but unpersuasive arguments about the fair use savings clause. First, it gives a standard specific-controls-the-general argument:

The savings clause cannot be permitted to supplant the specific limitations on library copying contained in Section 108. Further, the general language of a statutory provision, although broad enough to include it, will not be held to apply to a matter specifically dealt with in another part of the same enactment. (citations omitted)

But this gets the structure of the statute wrong: Section 108 contains additional defenses for libraries, not additional limitations on what they may do. The savings clause, therefore, doesn’t derogate from the specific statements of Section 108 in the slightest: nothing it does takes away from any of the library privileges that Section 108 creates.

Nor does the savings clause render the rest of Section 108 redundant, as the brief argues. Section 108 provides a clear and unambiguous but tightly circumscribed safe harbor for libraries: none of that is at all redundant with the usual case-by-case balancing tests of fair use. But to read Section 108 and fair use as incompatible, as the brief all but argues, would in effect read the savings clause out of the statute.

And then there is the brief’s discussion of another Copyright Act fair use savings clause, in Section 1201 of the DMCA:

Nothing in this section shall affect rights, remedies, limitations, or defenses to copyright infringement, including fair use, under this title.

In the famous DeCSS case, Universal City Studios v. Corley, the Second Circuit held that fair use was no defense to DMCA anti-circumvention liability. But—as the Second Circuit explained but the Authors Guild doesn’t—that was because the DMCA creates an independent form of circumvention liability that is different from infringement liability:

In the first place, the Appellants do not claim to be making fair use of any copyrighted materials, and nothing in the injunction prohibits them from making such fair use. They are barred from trafficking in a decryption code that enables unauthorized access to copyrighted materials.

That is, fair use as a defense to copyright infringement remains completely intact under the DMCA. Unlike the DMCA, however, Section 108 does not create new forms of liability, so that “violation” of it is not some new exotic action to which fair use does not apply. Failure to qualify for Section 108, per the text of the savings clause, simply kicks one back into the usual fair use balancing test.

The actual application of that test will be interesting and contested. It will also take place in the shadow of Section 108, which both sides are likely to point to as making the libraries’ uses more fair or less fair. But that’s a matter for a later date in the case; for now, the action, if it’s anywhere, is in associational standing and the scope of Section 108.

(Jonathan Band also has some discussion of the relationship between Section 108 and fair use in the context of the HathiTrust Orphan Works Program.)

UPDATE: I changed “Tea Party anti-library faction of the Authors Guild’s base” to “Tea Party anti-library faction of the author community” to make clearer that this is a statement about the beliefs of some authors, not the position of the Authors Guild itself.

Copyright Arbitrage in Action


Cross-posted from PrawfsBlawg

Meet Aereo, a new way to watch TV on the Internet. Aereo plans to capture over-the-air TV signals and stream them to customers in the New York area. Aereo’s low, low $12-a-month prices are made possible by the fact that it doesn’t pay licensing fees: Aereo insists that everything it’s doing is legal under copyright law because Aereo gives each user her own individual TV tuner. That’s right, Aereo is filling a Brooklyn office with thousands of TV antennas.

In any sane world, Aereo would not exist. There is no practical reason to use thousands of tiny antennas rather than a few good ones; reencoding the same signals again and again is pure waste. And sending these signals from Aereo’s premises to customers’ homes over the Internet is intensely silly, given that these customers already have the option of video service from their cable companies.

But our world is demonstrably insane; witness the Copyright Act. One-to-many retransmissions are governed by the complex “retransmission consent” rules at the intersection of copyright and communications law. But one-to-one transmissions of the sort Aereo is making are arguably not “public performances” under the Second Circuit’s 2008 Cartoon Network decision. More antennas, less risk. Aereo is engaged in copyright arbitrage: it’s trying to stitch together a chain of explicitly legal acts until it reaches a result that would be infringing if done directly.

It’s hardly alone. ivi tried (and failed) to pull an Aereo by calling itself a “cable system” under Section 111 of the Copyright Act. ReDigi is trying to cobble together Cartoon Network and a few other precedents to make something that looks like digital first sale. Zediva tried to run this one in reverse: it filled a data center with DVD players in an attempt to bootstrap first sale rights (in the DVDs) into streaming video-on-demand. I could go on.

None of these businesses ought to exist. In a world where copyright and communications law worked cleanly, copyright owners would be licensing their works over efficient transmission paths directly to users. These technical workarounds would be unnecessary. Of course, this point can be taken in one of two ways, depending on whether you think these entrepreneurs are a second-best response to a legal system that makes arbitrary distinctions or taking unfair advantage of a legal system that makes arbitrary distinctions. But either way, their proliferation is an indication of just how badly the wheels are coming off the bus of copyright law’s conceptual framework.

GBS: The Class Certification Fight


In addition to associational standing, the other main issue currently in play in the principal Google Books case is class certification. After discussions about a new settlement fell apart, the Authors Guild moved to certify a class:

All persons residing in the United States who hold a United States copyright interest in one or more Books reproduced by Google as part of its Library Project, who are either (a) natural persons who are authors of such Books or (b) natural persons, family trusts or sole proprietorships who are heirs, successors in interest or assigns of such authors. “Books” means each full-length book published in the English language and registered with the United States Copyright Office within three months after its first publication.

Google filed its opposition to certification in early February, and it was followed shortly by a letter by the indefatigable Pamela Samuelson on behalf of eighty-two academic authors (and members of the potential class) also objecting to certification. Strikingly, the signers include both vocal opponents and vocal supporters of the now-rejected settlement. They’ve made common cause again.

While the lawsuit could in theory go forward even without the class, it would be far less viable in practice. The prospect of a huge financial recovery both gives the Authors Guild more leverage against Google and makes its lawyers more willing to work on a contingency basis. So fighting class certification is a no-lose proposition for Google: in the best case, the case goes away, and in the worst, it would still have to litigate the fair use issue anyway.

Google did one supremely clever thing: it spent $100,000 to hire an expert, Hal Poret, to survey authors and ask them what they think about Google Books and its effects on their sales:

The survey shows that fifty-eight percent of authors affirmatively approve of the inclusion of their books in snippet view; fourteen percent affirmatively oppose that inclusion; and twenty-eight percent neither approve nor disapprove. Id. at 14. Forty-five percent believe inclusion in snippet view helps sales of their books; four percent believe it harms those sales; and fifty-one percent believe it has no effect one way or the other. Id. Nineteen percent believe inclusion in snippet view advances their economic interests more generally; eight percent believe it harms those interests; and seventy-four percent believe it has no effect one way or the other. Id.

These findings raise, once again, the question of who speaks for authors. The Authors Guild wants to certify a class to prevent Google from indexing and snippeting authors’ books in general. But if over half of authors approve of snippeting and almost half think it increases their sales, the Authors Guild’s proposed litigation strategy starts to seem openly antagonistic to the interests of many class members. And to defeat class certification, Google doesn’t need to show that the proposed representatives are completely different from all other class members, just that there is a “fundamental conflict” between some class members and others.

Complicating matters, both sides can argue that their proposed outcome doesn’t restrict the options of authors who disagree. The Authors Guild can quite rightly (and likely will) point out that authors who like having their books indexed can individually give Google permission to do so. Google can reply (and has on many occasions) that it lets authors opt out: it honors requests to have books removed from its index and snippet view. Thus, the issue comes down to the power of defaults: which authors need to fill out a web form, the ones who want to be indexed or the ones who don’t?

It’s here, I think, that the Samuelson letter makes its most telling points. As academics, the signers are regular users of Google Books: they find it valuable in their roles as authors. The Authors Guild and two of the three individual plaintiffs were already held to be inadequate class representatives for academics in Judge Chin’s opinion rejecting the settlement. And the academic authors have specific objections to the Authors Guild’s litigation strategy, such as its attempt to seek statutory damages. All of these go to the idea that a large group of authors would be specifically disadvantaged if the Authors Guild had its way and imposed a default of “no scanning.”

Google also reasserts at length two of the issues it raised when arguing that the Authors Guild lacks associational standing: diversity of ownership and diversity of fair use. Google argues that individual issues “predominate” over classwide ones in determining who’s a copyright owner with standing to sue for infringement. That, Google says, depends on the circumstances of a book’s creation and on the details of its publishing contracts; the brief spends several pages detailing the wide diversity of contracts in use in the publishing industry. (The Authors Guild supplied Google in discovery with an assortment of representative contracts, which Google uses to build its case.)

I’m not so sure about the argument from diversity of ownership. The irony here is rich but leaves a bitter taste in the mouth: the proposed settlement would have rewritten publishing contracts on a wholesale basis, but now those contracts are so diverse that they preclude class certification? And, more fundamentally, as the Supreme Court put it in Wal-Mart v. Dukes, quoting the late great Richard Nagareda:

What matters to class certification … is not the raising of common ‘questions’—even in droves—but, rather the capacity of a classwide proceeding to generate common answers apt to drive the resolution of the litigation.

Here, individual ownership is only really a problem at the relief stage, in determining who is entitled to a damage award or the benefit of an injunction. But for purposes of a lawsuit that tests the legality of Google Books, ownership of any specific book is largely irrelevant. That case can be litigated for whichever authors are members of the class without first needing to create an exhaustive list of them. That litigation will generate “common answers” about the legality of scanning, indexing, and snippetizing books.

Google’s second familiar argument is that fair use is inherently a case-by-case matter, which here means book-by-book. Some of the books are in print and some aren’t; snippets make up proportionally much more of a shorter book than of a longer one; some books are more factual and others are more creative; the economic effects will vary from book to book. These all strike me as matters of degree. Some practices—quoting a sentence as part of a review—are fair use on a class-wide basis. Some—printing complete copies for resale—are unfair on a class-wide basis. In between there will be at least some practices that are fair as to some books and unfair as to others.

In other words, the answer to “Can fair use be determined in a class action?” isn’t “yes” or “no,” but “tell me more.” It depends on what the fair use issues are. While in theory class certification motions are supposed to be about the class rather than about the merits of the case, some mixing is inevitable, as Wal-Mart recognizes. Given that I think Google is right about fair use, I also think that Google is wrong on class certification. The fairness of indexing can be determined in a class action, and the answer is yes, it’s fair use. If you disagree with me strongly enough, then you also think that diversity of fair use is no barrier to class certification. It’s only if you think that the fair use issues in this case are so close to evenly balanced that the court will need to proceed book-by-book that you ought to be opposed to certification on that ground.

I look forward to seeing the plaintiffs’ reply, which is due by April 3, and will blog about it when it comes in.