The Laboratorium
February 2012

This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at http://laboratorium.net

GBS: A Matter of Standing


In three of the four pending Google Books lawsuits—the authors’ and visual artists’ suits against Google and the authors’ suit against HathiTrust—the defendants have objected to standing. That is, these are lawsuits being brought by organizations on behalf of their members; Google and its library partners are arguing that the members themselves should be in court, not the organizations. Briefing is complete on these motions, so I thought I’d briefly summarize the main arguments being thrown back and forth. I’ll be focusing on the copyright-specific issues, rather than on associational standing in general.

The first major argument raised by the Google side is that the Copyright Act specifically prohibits associational standing. The key provision here is Section 501(b), which says that the “legal or beneficial owner of an exclusive right under a copyright is entitled” to sue. Google reads this at face value: an association isn’t the “owner” of its members’ copyrights, so can’t bring suit. The caselaw here is best described as “vague”: the most recent precedent, AIME v. UCLA, agrees with Google’s view, but isn’t binding on the federal courts in New York.

That’s a categorical argument. It would apply to all copyright cases. There are also a few arguments that these particular associations don’t have standing in these particular copyright cases. Each focuses on some piece of the case that allegedly requires “individual participation” by the copyright owners.

First, there’s the fair use issue. Fair use is famously “case by case” and “highly individualized.” Google claims that it necessarily raises factual issues that vary from book to book. Some books are highly factual, and some are more creative: fair use is easier to show for the former than the latter. Similarly, some books are in print, and others out of print, and that will affect the analysis of how Google Books affects the market for them. I’m skeptical of this objection—even if the fair-use case varies from book to book, it’s quite possible that some broad lumping (e.g. books in print and books out of print) will suffice. You don’t necessarily need to bring every author individually into court to decide whether, say, snippet display of fiction is or isn’t fair use.

Next, there’s ownership. Google’s argument here is that proving which copyrights are owned by the associations’ members will require detailed case-by-case inquiry, given the huge diversity of publishing arrangements. (Google very effectively uses copyright guides published by the associations themselves to illustrate how complex book copyright licensing is.) This requires both inspection of particular contracts and complex legal interpretation of various provisions. Whether or not this argument is sufficient to defeat standing, it’s worth reading pages 14 to 19 of Google’s brief, which is a good snapshot of just how tangled the e-rights situation for books has become. This is the best-argued part of Google’s briefs, but it also depends on some contestable claims about licensing practices in the publishing industry.

And finally, in the HathiTrust lawsuit, the libraries argue that any claims directed to the Orphan Works Project are premature. Because (so say the libraries) none of the plaintiffs are at any risk of having their books displayed, none of them have standing to object to it. Here, I think that HathiTrust’s sloppiness about orphan candidates will have real legal consequences for it. The past near-misses make it more plausible that other non-orphans owned by Authors Guild members will slip through. HathiTrust is probably right when it argues that it’s premature to object to any revised orphan works plan, whose details it hasn’t even settled on. But that shouldn’t stop the associations from getting a ruling on the legality of the original Orphan Works Project (assuming they pass the other standing hurdles).

These issues are on a relatively quick track in the HathiTrust lawsuit, with oral argument currently scheduled for March 2 before Judge Baer. It’s unclear whether they’ll be on an equally fast track in the lawsuits against Google, but my guess is not. Instead, Judge Chin will want to wait to finish the briefing on the authors’ motion to certify a class action before ruling on the standing and class certification motions together.

Stoner Law Reform: The Self-Informing Jury


Cross-posted from PrawfsBlawg

Like I said last time, my inner stoner hates jury duty. Another reason is that he’s a bit of a conspiracy theorist; he really bugs out when he thinks people in power are hiding something from him. So you can imagine how he’s been reacting to stories about jurors being dismissed for looking things up on Wikipedia and doing other online research. I tried to explain that it was about ensuring a fair trial, but he wasn’t interested. The way he sees it, for a system that supposedly thinks the jury is smart about ferreting out the truth, we sure don’t trust its judgment very much. And trying to stanch the flood of the Internet is the very definition of a losing battle. So, he asks, why not encourage the jury to do its own research?

It used to be, as Mark Spottswood observed last time, that the jury was self-informing. It was summoned because jurors actually knew what had happened; they came to court to give evidence, not to receive it. That system went by the wayside as jurors stopped having personal knowledge of events, and then the lawyers took over everything and the ideal jury shifted from being merely neutral to being actively ignorant. The modern treatment of the jury is honorific in theory but contemptuous in practice. Rules of evidence are designed to conceal from the jury any information that hasn’t been pre-masticated into flavorless cud by the attorneys; trial procedure reacts with horror to jurors who know something useful or have well-formed opinions on anything relevant.

The Internet age, though, gives us a chance for a do-over. Instead of trying ever harder to stuff blank-slate jurors into informational Faraday cages, how about we embrace the idea that jurors know what they’re doing? Jurors could ask questions of witnesses and do their own research. If they want to go to the crime scene on their own and take a look around, let them. If they need to consult a dictionary to figure out what the words the judge used in the jury instructions mean, let them. Honestly, if the jurors are surfing Wikipedia, they’re probably getting information about as good as, if not better than, what they’re getting from at least one side’s expert witness. And really, if the jurors are going home at night and surfing Wikipedia, how often do you think we’re going to catch them at it?

Crazy, or so crazy it might just work?

Stoner Law Reform: Trial by DVD


Cross-posted from PrawfsBlawg

I’m not a fan of the jury system for any reason other than as a check on government power. Even leaving aside the jury’s fact-finding competence, it has a baleful influence on trial structure. Jury trial is concentrated trial: all the lawyers, witnesses, and evidence converge on the courtroom for a one-shot high-stakes live battle. Once the trial starts, there’s no going back to the reasoned deliberation of motion practice. Judges have to make evidentiary rulings on the fly; lawyers work themselves to exhaustion; jurors put the rest of their lives on hold indefinitely. And the pretrial stage swells to ridiculous proportions (especially discovery), because neither side wants to be caught unprepared for an unpleasant surprise at trial. Jury trial is an adversarial system in which the adversaries both operate under severe handicaps that make it hard for them to present their best arguments.

I asked my inner stoner about the role of the jury. He hates jury duty: he says trials are boring and it’s hard to bring weed into the courthouse. I told him that jury duty isn’t going away, not until we rewrite Article III and the Fifth, Sixth, and Seventh Amendments and their state equivalents. So he said, If we can’t get rid of the jury, can we get rid of the trial? I asked him to explain, and he said he likes watching movies, so put the evidence on a DVD and play that for the jury.

Under the trial-by-DVD system, pretrial motion practice wouldn’t just be directed at winnowing down the issues for a trial. It would actually produce the precise set of evidence to be submitted to the jury. All of the evidentiary rulings—every objection as to form and request to strike—would already have been aired and resolved. Then, and only then, would a jury be sworn in. A courtroom deputy would sit with the jury while they watched the DVDs, the judge would give them their instructions, and they’d deliberate as usual. The trial itself would be far more efficient without the sidebars and other frou-frou. Perhaps surprisingly, so would the pretrial. Instead of having to prepare for anything the other side could possibly throw at them, the lawyers would only need to respond to those things the other side actually did throw at them.

And that’s just the beginning. Stop thinking of the trial as theater; start thinking of it as a movie. The judge and parties would be able to edit the DVD tightly. If the plaintiff’s lawyer realized that a cross-examination hadn’t gone anywhere useful, she could just excise it from the testimony she offered. The parties could draw far more freely on documents, depositions, expert reports, demonstrative exhibits, and other sources of evidence to make their cases clearly, rather than needing to filter everything through someone in the witness box droning on endlessly. And the judge could easily issue appropriate rulings as the parties assembled their evidence, granting partial or total directed verdicts that narrowed or eliminated the need for a trial entirely. Think of it as picking up the logistical benefits of inquisitorial trial within a system that remains broadly adversarial.

Some states have experimented with the use of pre-recorded testimony. But, to my knowledge, none have ever used the opportunity of pre-recording to rethink from the start what a “trial” and a “pretrial” actually are. Given that our system treats jurors as children who are to be seen and not heard, it’s not clear what real value there is in having them in the same room as the witnesses at the same time. If we’re committed to keeping the jury, why not use their time effectively?

Crazy, or so crazy it might just work?

Stoner Law Reform: Fee-Shifting


Cross-posted from PrawfsBlawg

This week, I’m going to post some stoner law-reform proposals. Sometimes, you need to remove your own common sense to imagine how the world might be different. And what better way to do that than stoner logic?

First up, consider fee-shifting. Critics complain, and rightly so, that the American rule of each side bearing its own costs is bad for plaintiffs with good claims. They may find it too expensive to vindicate their rights. But while we’ve picked up a variety of fee-shifting statutes here and there, we’ve stubbornly resisted the English rule, in which the losing party must pay for the winning party’s lawyers. Critics complain, and rightly so, that the English rule encourages overspending and can put unbearable pressure on parties facing a well-financed opponent.

I asked my inner stoner, and he said, “What if the loser pays its own fees to the winner?” In essence, this rule means that the loser ends up paying double its attorneys’ fees: once to its own lawyers, and once to the winner. The English rule tries to make the winner whole. But that’s really hard to get right and it creates weird incentives. A stoner would rather just charge the loser what it paid, call it close enough, and order some pizza.

I think he might have a point. Where the parties are equally matched and spending evenly, the loser-pays-double rule is ex post equivalent to the English rule. But ex ante, it dials up the incentive to get the lawsuit done cheaply. Where the parties are mismatched, loser-pays-double looks even better. A pro se party up against a behemoth faces no risk of a crushing fee award. Its wealthy opponent knows that every dollar spent on intimidation only increases the little guy’s potential payday. Loser-pays-double also answers the criticism that the English rule can result in wholly disproportionate fee awards: a party’s potential fee payout to its opponent is never more than what it has already spent on its own lawyers.
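To make the arithmetic concrete, here is a minimal sketch in Python comparing the loser’s total outlay under the American rule, the English rule, and the loser-pays-double rule; the dollar figures are made up for illustration.

```python
def loser_total_outlay(loser_fees, winner_fees, rule):
    """Total amount the losing party pays: its own lawyers plus any shifted fees."""
    if rule == "american":            # each side bears its own costs
        return loser_fees
    if rule == "english":             # loser also pays the winner's lawyers
        return loser_fees + winner_fees
    if rule == "loser_pays_double":   # loser pays the winner what the loser itself spent
        return loser_fees + loser_fees
    raise ValueError(rule)

# Evenly matched parties: loser-pays-double is ex post equivalent to the English rule.
print(loser_total_outlay(100_000, 100_000, "english"))            # 200000
print(loser_total_outlay(100_000, 100_000, "loser_pays_double"))  # 200000

# Mismatched parties: the frugal loser's exposure is capped by its own spending,
# and nothing the behemoth spends on intimidation increases that exposure.
print(loser_total_outlay(5_000, 2_000_000, "english"))            # 2005000
print(loser_total_outlay(5_000, 2_000_000, "loser_pays_double"))  # 10000
```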

Crazy, or so crazy it might just work?

Rogue Programmers


Cross-posted from PrawfsBlawg

In early 2010, Google apologized for the way Google Buzz had revealed people’s Gmail contacts to the world. Later that year, the company announced that its Street View cars had been recording the data being transmitted over WiFi networks they drove by. And just this week, the Wall Street Journal and privacy researcher Jonathan Mayer revealed that Google had been using cookies in a way that directly contradicted what it had been telling users to do if they didn’t want cookies.

Once is an accident, and twice a coincidence, but three times is a sign of a company with a compliance problem. All three of these botches went down the same way. A Google programmer implemented a feature with obvious and serious privacy implications. The programmer’s goal in each case was relatively innocuous. But in each case he or she designed the feature so that it had the predictable effect of handling people’s private information in a way that blatantly violated the company’s purported privacy principles. Then—and this is the scary part—Google let the feature ship without noticing the privacy time bomb it contained.

When it comes to privacy, this is a company out of control. Google’s management is literally not in control of the company. Especially given its past mistakes, Google’s legal team knows that privacy compliance is critically important: witness the extensive effort lavished on its forthcoming new privacy policy. And yet the company has been unable, time and time again, to keep privacy blunders affecting millions of users from getting out the door.

Google was founded and is run as an engineering-driven company, which has given it amazing vitality and energy and the ability to produce world-changing products. But even as the company has become a dominant powerhouse on which hundreds of millions of people depend, it continues to insist that it can run itself as a freewheeling scrum because, er, um, Google is special, Google’s values are better than the competition’s, and Google employees are smarter than your average bear. All of these may be true, but adult companies have adult responsibilities, and one of them is to train and supervise their employees. Google is stuck in a perpetual adolescence, and it’s getting old fast.

The only other firms I can think of with this kind of sustained inability to make their internal controls stick are on Wall Street. (See, e.g.) Google has already had to pay out a $500 million fine for running advertisements for illegal pharmaceutical imports. And the company is already operating under a stringent consent decree with the FTC from the Buzz debacle. If those weren’t sufficient to convince Larry Page to put his house in order, it’s hard to know what will be. Sooner or later, the company will unleash on the Internet a piece of software written by the programmer equivalent of a Jérôme Kerviel or a Kweku Adoboli and it won’t be pretty, for the public or for Google.

Experiments (and Surveys) in Casebook Pricing


Michael Froomkin is using my casebook in his Internet Law class this semester. He was curious how his students were taking to the pay-what-you-want model, so he asked them (anonymously) what they paid. The results are interesting, and encouraging. A majority of students paid the $30 sticker price, and the average price across the whole class was $21.19. I’m very happy that his students are finding the book useful enough that they think it’s fair to pay for it.

How Law Responds to Complex Systems


Cross-posted from Concurring Opinions

In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.

Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.
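For the curious, here is a toy sketch of the kind of scheme we attempted; the task names and the scheduler are hypothetical reconstructions, not the actual term project. The point is visible in the code: the tasks are fixed functions that never observe the reward, so handing out extra time slices cannot change how they behave.

```python
# Toy sketch: "rewarding" programs with extra time slices for sharing nicely.
# The processes are fixed functions with no way to perceive the reward, so the
# incentive cannot feed back into their behavior.

def polite_task(state):
    state["holds_lock"] = False      # releases the shared resource promptly
    state["work"] += 1

def greedy_task(state):
    state["holds_lock"] = True       # hangs on to the shared resource
    state["work"] += 1

def schedule(tasks, rounds=10):
    states = {t.__name__: {"holds_lock": False, "work": 0, "slices": 1} for t in tasks}
    for _ in range(rounds):
        for t in tasks:
            s = states[t.__name__]
            for _ in range(s["slices"]):
                t(s)
            # the "reward": cooperative tasks earn an extra slice next round
            s["slices"] += 0 if s["holds_lock"] else 1
    return states

print(schedule([polite_task, greedy_task]))
```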

Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.

A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.

When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.

Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for them to communicate in private, he hopes to keep governmental conspiracies from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.

When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.

And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate them, and sometimes to argue that even Google’s employees themselves don’t fully understand why the algorithm ranks certain sites the way it does.

My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, trying to dictate its internals by altering the supply of flammables in its path makes more sense. (Trying to take hostages to sway a fire is not, however, a particularly wise response.) Picking the most appropriate legal strategy for a complex system will depend on situational, context-specific factors — and upon understanding clearly the nature of the beast.

UPDATE: Whoops. I mistakenly posted an earlier draft. The whole thing is here now.

Complex Systems and Law


I’m also guest-blogging this week at Concurring Opinions, as part of a symposium on Samir Chopra and Laurence White’s A Legal Theory for Autonomous Artificial Agents.

The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question—how law should deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.

One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved their internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: It’s not a good argument that the negligent decision to turn off the plaintiff’s power came from a computer, any more than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.

But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to a lot more play and ambiguity. “Not in our name” is a compelling slogan because it captures this sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.

Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. Those tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.

In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.

As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and ask when it becomes plausible to sweep those issues under the carpet for artificial agents, as well.

The Worst Part of Copyright: Termination of Transfers


Over at PrawfsBlawg, I’ve been holding a survey on the worst provision in the Copyright Act. This was my explanation of my own choice.

There were some great responses to my survey about the worst provision in the Copyright Act. Bruce Boyden nailed it when he guessed I was thinking about termination of transfers. This rule lets authors revoke any licensing contract between 35 and 40 years after they enter into it. (There was a similar but different system for renewals under the 1909 Act, which also survives in modified form in the 1976 Act, just to add to the confusion.)

This is an inalienability rule. But it’s not an inalienability rule that rests on a deep and shared moral intuition, like the rule prohibiting people from selling their organs as meat for the super-rich. Termination of transfers rests instead on a view that authors are “congenitally irresponsible” to the point that they can’t be trusted to make licensing decisions for themselves. They need to be given a second bite at the apple because they’re not smart enough to negotiate fair deals the first time around. As for the theory that it’s hard to value creative works up front, apparently percentage royalties and reversion clauses are too complex for authors to understand or insist on.

Trying to impose an inalienability rule on authors and publishers who don’t want it at the time they strike their original licensing deals leads to no end of practical trouble. Making the rule stick means overriding any number of contracts, including contracts specifically drafted to get around it. Litigation over decades-old agreements, frequently with intervening modifications and regrants, is virtually guaranteed to be a morass—and so it has been, with well-publicized disputes like the fight over the termination rights in Action Comics #1 dragging on for years at ridiculous expense. The courts have been fighting against this system for much of the century, but all they’ve really accomplished is to increase its complexity. And Congress has done its part to make the statute incomprehensible: I dare you to read Section 203(b) and explain what it’s supposed to mean.

But the demented logic of inalienability doesn’t stop there: it continues beyond the grave. The termination rights of a deceased author vest in the widow or widower, then the children, and then the grandchildren, on a per stirpes basis. That’s right: the Copyright Act displaces state probate law by creating future estates. And it does so in the form of a byzantine set of fractional shares subject to an idiosyncratic voting rule requiring a majority of majorities to exercise the termination right. (Need I add that the drafters of the Uniform Probate Code concluded that a vast majority of Americans wouldn’t want per stirpes distribution if they understood how it worked? No. That would be overkill.)
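For readers who haven’t met per stirpes distribution before, here is a minimal sketch of the general idea only; it does not attempt to implement Section 203’s actual widow/widower share or its majority-of-majorities voting rule, and the family names are invented.

```python
# Generic per stirpes division: each child heads a branch with an equal share,
# and a deceased child's share is split equally among that child's own children.
# (General concept only; Section 203 layers its own widow/widower share and an
# idiosyncratic voting rule on top of this.)
def per_stirpes(branches, total=1.0):
    """branches: each entry is either a living child's name, or a list of the
    grandchildren who stand in for a deceased child."""
    shares = {}
    per_branch = total / len(branches)
    for branch in branches:
        if isinstance(branch, str):
            shares[branch] = per_branch
        else:
            for grandchild in branch:
                shares[grandchild] = per_branch / len(branch)
    return shares

# One living child, plus two grandchildren taking a deceased child's share.
print(per_stirpes(["Alice", ["Bob Jr.", "Carol"]]))
# {'Alice': 0.5, 'Bob Jr.': 0.25, 'Carol': 0.25}
```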

The underlying assumptions behind this postmortem provision are creepy, too. The romantic author, it would appear, is both the family breadwinner and a bad provider. His family, having sacrificed for decades to support his creative efforts, will receive their reward after his passing, when his genius is belatedly recognized. Copyright law has a theory of the family: it’s nuclear and dominated by a single individual on whom the rest depend. The statutory text is gender-neutral, but its assumptions aren’t.

As an incentive for authorship, this is a terrible one. If authors make bad up-front deals because they’re unmindful of future revenues, it follows that those same future revenues won’t operate as an ex ante incentive for creativity. As a welfare system to support deserving authors in their old age, it’s also terrible, since it bestows large windfalls on a very small number of them, at immense administrative cost. If this is a welfare system to support the families of authors, it’s beyond terrible, since it bestows windfalls on a small number of people with the good fortune to be related to a commercially successful author, while doing nothing for the families of those who toiled their whole lives in some other, equally worthy calling.

There is, I recognize, essentially zero chance that this system will be modified for the better any time soon. But that doesn’t mean we have to like it.

Coasean Positioning System


Cross-posted from PrawfsBlawg

Ronald Coase’s theory of reciprocal causation is alive, well, and interfering with GPS. Yesterday, the FCC pulled the plug on a plan by LightSquared to build a new national wireless network that combines cell towers and satellite coverage. The FCC went along with a report from the NTIA that LightSquared’s network would cause many GPS systems to stop working, including the ones used by airplanes and regulated closely by the FAA. Since there’s no immediately feasible way to retrofit the millions of GPS devices out in the field, LightSquared had to die so that GPS could live.

LightSquared’s “harmful interference” makes this sound like a simple case of electromagnetic trespass. But not so fast. LightSquared has had FCC permission to use the spectrum between 1525 and 1559 megahertz, in the “mobile-satellite spectrum” band. That’s not where GPS signals are: they’re in the next band up, the “radionavigation satellite service” band, which runs from 1559 to 1610 megahertz. According to LightSquared, its systems would be transmitting only in its assigned bandwidth—so if there’s interference, it’s because GPS devices are listening to signals in a part of the spectrum not allocated to them. Why, LightSquared plausibly asks, should it have a duty to make its own electromagnetic real estate safe for trespassers?

The underlying problem here is that “spectrum” is an abstraction for talking about radio signals, but real-life uses of the airwaves don’t neatly sort themselves out according to its categories. In his 1959 article The Federal Communications Commission, Coase explained:

What does not seem to have been understood is that what is being allocated by the Federal Communications Commission, or, if there were a market, what would be sold, is the right to use a piece of equipment to transmit signals in a particular way. Once the question is looked at in this way, it is unnecessary to think in terms of ownership of frequencies or the ether.

Now add to this point Coase’s observation about nuisance: that the problem can be solved either by the polluter or the pollutee altering its activities, and so in a sense should be regarded as being caused equally by both of them. So here. “Interference” is a property of both transmitters and receivers; one man’s noise is another man’s signal. GPS devices could have been designed with different filters from the start, filters that were more aggressive in rejecting signals from the mobile-satellite band. But those filters would have added to the cost of a GPS unit, and worse, they’d have degraded the quality of GPS reception, because they would have thrown out some of the signals from the radionavigation-satellite band. (The only way to build a completely perfect filter is to make it capable of traveling back in time. No kidding!) Since the mobile-satellite band wasn’t at the time being used anywhere close to as intensively as LightSquared now proposes to use it, it made good sense to build GPS devices that were sensitive rather than robust.
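To put rough numbers on the tradeoff, here is a toy sketch that models a receiver front-end as a simple rectangular passband. The band edges come from the figures above, but the two candidate filters are hypothetical, and real filters roll off gradually rather than cutting off cleanly.

```python
# Toy model of a GPS receiver front-end as a rectangular passband (start, end) in MHz.
# The band edges are the ones described in the post; the candidate filters are invented.
MOBILE_SATELLITE = (1525, 1559)    # LightSquared's assigned band
RADIONAV_SATELLITE = (1559, 1610)  # where GPS signals actually live

def overlap(band_a, band_b):
    """Width in MHz of the overlap between two bands."""
    return max(0, min(band_a[1], band_b[1]) - max(band_a[0], band_b[0]))

def describe(name, passband):
    leaked = overlap(passband, MOBILE_SATELLITE)
    kept = overlap(passband, RADIONAV_SATELLITE)
    gps_width = RADIONAV_SATELLITE[1] - RADIONAV_SATELLITE[0]
    print(f"{name}: admits {leaked} MHz of the mobile-satellite band, "
          f"keeps {kept} of {gps_width} MHz of the GPS band")

# A cheap, permissive filter: great GPS reception, but wide open to the neighbor.
describe("permissive filter", (1540, 1610))
# A sharper filter with a guard margin: rejects the neighbor, but shaves off
# some of the GPS band in the process.
describe("sharp filter with roll-off margin", (1565, 1610))
```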

There are multiple very good articles on property, tort, and regulation lurking in this story. There’s one on the question Coase was concerned with: regulation versus ownership as means of choosing between competing uses (like GPS and wireless broadband). There’s another on the difficulty of even defining property rights to transmit, given the failure of the “spectrum” abstraction to draw simple bright lines that avoid conflicting uses. There’s one on the power of incumbents to gain “possession” over spectrum not formally assigned to them. There’s another on investment costs and regulatory uncertainty: LightSquared has already launched a billion-dollar satellite. And there’s one on technical expertise and its role in regulatory policy. Utterly fascinating.

ReDigi and the Purpose of First Sale


Cross-posted from PrawfsBlawg

For now, at least, ReDigi lives. Judge Sullivan denied the preliminary injunction, but according to the transcript, on irreparable harm grounds rather than a lack of likelihood of success on the merits. The case is set for rapid progress towards trial, quite possibly on stipulated facts.

I’d like to take up one of the central questions in the case: first sale. Whether you think ReDigi ought to win certainly turns on your view of what first sale is for. So, too, may the legal merits. How you interpret statutory text like “owner” or “sell” may depend on your theory of what kinds of transfers Congress meant to protect. And even if ReDigi’s particular form of transfer falls outside of the text of first sale itself, the arguments for and against fair use can draw on first sale principles. Here, then, are some competing theories:

  • Conservation of copies: Copyright is fundamentally copy-right: the ability to prevent unauthorized copying. Practices that don’t increase the total number of copies in existence don’t fundamentally threaten the copyright owner’s core interests. First sale blesses one of those practices: moving a copy for which the copyright owner has already been paid from one set of hands to another. On this theory, ReDigi is okay because it forces sellers to delete their copy of the music, thereby keeping the number of extant copies constant.

  • Freedom of alienation: First sale protects the rights of owners of personal property against copyright claims that might interfere with their right to use their property as they wish. This idea is sometimes described in terms of “servitudes on chattels” or “exhaustion” of the copyright owner’s rights. We could also think of it as a negotiability regime promoting free transferability of personal property, given the information and transaction costs involved in allowing third-party copyright claims. On this theory, ReDigi is in trouble because it deals in information, rather than in tangible objects.

  • Copyright balancing: First sale is one of a cluster of doctrines that shape the level of control copyright owners have over the market (economic and cultural) for their works. If that balance changes over time, the doctrines should be recalibrated to restore it. Since the reproduction right has expanded to cover all sorts of computer-based uses such as loading a file into memory, the first sale defense should expand to maintain the same rough level of control. On this theory, ReDigi should win, because it would preserve roughly the same levels of freedom for users and control for owners as they had in an analog era.

  • Copyright balancing: Or wait … if the goal is balancing, then perhaps ReDigi should lose. First sale used to be practically restricted by the facts that physical copies wear out and that exchanging physical objects takes time and money. ReDigi would blow those practical limits away, disrupting the first sale balance in the direction of too little control for copyright owners. In the face of rampant illegal file-sharing, why should a court, in effect, legalize the process by allowing ReDigi to serve as a super-low-friction intermediary?

What I love about this case is that it pushes and pulls our intuitions about copyright in so many different directions. It brings up fundamental questions not just about unsettled corners of doctrine, but also about what copyright is for. It offers grist for every mill, food for every kind of thought.

The Used CD Store Goes Online


Cross-posted from PrawfsBlawg

On Monday, Judge Sullivan of the Southern District of New York will hear argument on a preliminary injunction motion in Capitol Records v. ReDigi, a copyright case that could be one of the sleeper hits of the season. ReDigi is engaged in the seemingly oxymoronic business of “pre-owned digital music” sales: it lets its customers sell their music files to each other. Capitol Records, unamused, thinks the whole thing is blatantly infringing and wants it shut down, NOW.

There are oodles of meaty copyright issues in the case — including many that one would not think would still be unresolved at this late date. ReDigi is arguing that what it’s doing is protected by first sale: just as with physical CDs, resale of legally purchased copies is legal. Capitol’s counter is that no physical “copy” changes hands when a ReDigi user uploads a file and another user downloads it. This disagreement cuts to the heart of what first sale means and is for in this digital age. ReDigi is also making a quiver’s worth of arguments about fair use (when users upload files that they then stream back to themselves), public performance (too painfully technical to get into on a general-interest blog), and the responsibility of intermediaries for infringements initiated by users.

I’d like to dwell briefly on one particular argument that ReDigi is making: that what it is doing is fully protected under section 117 of the Copyright Act. That rarely-used section says it’s not an infringement to make a copy of a “computer program” as “an essential step in the utilization of the computer program.” In ReDigi’s view, the “mp3” files that its users download from iTunes and then sell through ReDigi are “computer programs” that qualify for this defense. Capitol responds that in the ontology of the Copyright Act, MP3s are data (“sound recordings,” to be precise), not programs.

I winced when I read these portions of the briefs. In the first place, none of the files being transferred through ReDigi are MP3s. ReDigi only works with files downloaded from the iTunes Store, and the only format that iTunes sells in is AAC (Advanced Audio Coding), not MP3. It’s a small detail, but the parties’ agreement to a false “fact” virtually guarantees that their error will be enshrined in a judicial opinion, leading future lawyers and courts to think that any digital music file is an “MP3.”

Worse still, the distinction that divides ReDigi and Capitol — between programs and data — is untenable. Even before there were actual computers, Alan Turing proved that there is no difference between program and data. In a brilliant 1936 paper, he showed that any computer program can be treated as the data input to another program. We could think of an MP3 as a bunch of “data” that is used as an input to a music player. Or we could think of the MP3 as a “program” that, when run correctly, produces sound as an output. Both views are correct — which is to say, that to the extent that the Copyright Act distinguishes a “program” from any other information stored in a computer, it rests on a distinction that collapses if you push too hard on it. Whether ReDigi should be able to use this “essential step” defense, therefore, has to rest on a policy judgment that cannot be derived solely from the technical facts of what AAC files are and how they work. But again, since the parties agree that there is a technical distinction and that it matters, we can only hope that the court realizes they’re both blowing smoke.
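As a toy illustration of Turing’s point (and nothing more; it says nothing about how AAC files actually work), here is a short Python sketch in which the same string of bytes is treated first as data to be inspected and then as a program to be executed.

```python
# The same bytes, viewed two ways: as data fed to one program, and as a program
# fed to an interpreter. A toy illustration of the program/data duality only.
payload = "print('hello from the payload')"

# Viewed as data: just a sequence of characters we can measure and inspect.
print(len(payload), payload.count("o"))

# Viewed as a program: the very same bytes, handed to the Python interpreter.
exec(payload)
```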

Copyright and the Romantic Video Game Designer


This month, I’m guest-blogging at PrawfsBlawg. I’ll be cross-posting many of my Prawfs posts here, as well.

My friend Dave is a game designer in Seattle. He and his friends at Spry Fox made an unusually cute and clever game called Triple Town. It’s in the Bejeweled tradition of “match-three” games: put three of the same kind of thing together and they vanish in a burst of points. The twist is that in Triple Town, matching three pieces of grass creates a bush; matching three bushes creates a tree … and so on up to floating castles. It adds unusual depth to the gameplay, which requires a combination of intuitive spatial reasoning and long-term strategy. And then there are the bears, the ferocious but adorable bears. It’s a good game.

Triple Town / Yeti Town screenshot
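Just to make the mechanic concrete, here is a toy sketch of the match-and-upgrade rule as described above; the grass, bush, and tree steps come from the post, while the rest of the chain and the simplified matching logic are my own guesses, not Spry Fox’s actual design.

```python
# Toy match-three upgrade rule: three identical placed pieces merge into one
# piece of the next tier. Grass -> bush -> tree comes from the post; the rest
# of the chain and the simplified matching are invented for illustration.
UPGRADE = {"grass": "bush", "bush": "tree", "tree": "hut", "hut": "house"}

def try_merge(pieces):
    """Given three placed pieces, return the merged piece, or None if no match."""
    if len(pieces) == 3 and len(set(pieces)) == 1 and pieces[0] in UPGRADE:
        return UPGRADE[pieces[0]]
    return None

print(try_merge(["grass", "grass", "grass"]))  # bush
print(try_merge(["bush", "bush", "bush"]))     # tree
print(try_merge(["grass", "bush", "grass"]))   # None: not a match
```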

Now for the law. Spry Fox is suing a competing game company, 6waves Lolapps, for shamelessly ripping off Triple Town with its own Yeti Town. And it really is a shameless ripoff: even if the screenshots and list of similarities in the complaint aren’t convincing, take it from me. I’ve played them both, and the only difference is that while Triple Town has cute graphics and plays smoothly, Yeti Town has clunky graphics and plays like a wheelbarrow with a dented wheel.

I’d like to come back to the legal merits of the case in a subsequent post. (Or perhaps Bruce Boyden or Greg Lastowka will beat me to it.) For now, I’m going to offer a few thoughts about the policy problems video games raise for intellectual property law. Games have been, if not quite a “negative space” where formal IP protection is unavailable, then perhaps closer to zero than high-IP media like movies and music. They live somewhere ambiguous on the spectrum between “aesthetic” and “functional”: we play them for fun, but they’re governed by deterministic rules. Copyright claims are sometimes asserted based on the way a game looks and sounds, but only rarely on the way it plays. That leads to two effects, both of which I think are generally good for gamers and gamemakers.

On the one hand, it’s well established that literal copying of a game’s program is copyright infringement. This protects the market for making and selling games against blatant piracy. Without that, we likely wouldn’t have “AAA” titles (like the Grand Theft Auto series), which have Hollywood-scale budgets and sales that put Hollywood to shame. Video games have become a major medium of expression, and it would be hard to say we should subsidize sculpture and music with copyright, but not video games. Spry Fox would have much bigger problems with no copyright at all.

On the other hand, the weak or nonexistent protection for gameplay mechanics means that innovations in gameplay filter through the industry remarkably quickly. Even as the big developers of AAA titles are (mostly) focusing on delivering more of the same with a high level of polish, there’s a remarkable, freewheeling indie gaming scene of stunning creativity. (For some random glimpses into it, see, e.g. Rock, Paper, Shotgun, Auntie Pixelante, and the Independent Games Festival.) If someone has a clever new idea for a way to do something cute with jumping, for example, it’s a good bet that other designers will quickly find a way to do something, yes, transformative, with the new jumping mechanic. Spry Fox benefited immeasurably from a decade’s worth of previous experiments in match-three games.

The hard part is the ground in between, and here be knockoffs. Without a good way to measure nonliteral similarities between games, the industry has developed a dysfunctional culture of copycattery. Zynga (the creator of Farmville and Mafia Wars) isn’t just known for its exploitative treatment of players or its exploitative treatment of employees, but also for its imitation-based business model. Game developers who sell through Apple’s iOS App Store are regularly subjected to the attack of the clones. In Spry Fox’s case, at least, it’s easy to tell the classic copyright story. 6waves is reaping where it has not sown, and if Triple Town flops on the iPhone because Yeti Town eats its lunch, at some point Dave and his colleagues won’t be able to afford to spend their time writing games any more.

This ties into something I’ve been thinking about recently: the copyright tradeoff. One way of describing copyright’s utilitarian function is that it provides “incentives to produce creative works.” That summons up an image of crassly commercial authors who scribble for a paycheck. In contrast, we sometimes expect that self-motivated authors, who write for the pure fun of it, will thrive best if copyright takes its boot off their necks. But a better picture, I think, is that there are plenty of authors who are motivated both by their desire to be creative and also by their desire not to be homeless. The extrinsic motivations of a copyright-supported business model provide an “incentive,” to be sure, but that incentive takes the form of allowing them to indulge their intrinsic motivations to be creative. In broad outline, at least, that’s how we got Triple Town.

I’m not sure where the right place to draw the lines for copyright in video games is. I’m not sure that redrawing the lines wouldn’t make things worse for the Daves of the world: giving them greater rights against the 6waveses might leave them open to lawsuits from the Zyngas. But I think Triple Town’s story captures, in miniature, some of the complexities of modern copyright policy.