The Laboratorium
July 2009

This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at http://laboratorium.net

Off the Hook


Late notice, I know, but I should be appearing on Off the Hook tonight to talk about the Google Book Search settlement, plus maybe the AP and DRM and who knows what else. In the New York area, that’s WBAI 99.5 FM at 7:00, or listen live online.

GBS Blogging: Deep Thought


Would libraries have been more on board with the settlement if it didn’t include the Public Access Service? Sure, giving libraries free terminals seems like a gift, but it’s a gift horse the libraries have been more than willing to look in the mouth. The “one per building” rule has become a lightning rod for criticism; it focuses attention on the possibility of more.

GBS Blogging: We Are Live at the ACS Blog


James Grimmelmann, The Google Book Search Settlement: Questions Remain, ACS Blog:

David Balto’s reply to my Issue Brief on the proposed Google Book Search settlement is careful and thoughtful. Unfortunately, it gets some of its analysis of the settlement’s anticompetitive effects wrong. I’ll respond to three of the points on which I believe he’s mistaken. …

Yeah, That’s an Inconspicuous Alias


Steve Friess and Solomon Moore, Police Search Home of Jackson’s Doctor, New York Times, July 28, 2009:

The warrants granted the police the authority to seize any prescriptions or records listed under several suspected aliases for Mr. Jackson, including Bryan Singleton, Roselyn Muhammad, Faheem Muhammad, Fernand Diaz, Peter Madonie, Kai Chase and Josephine Baker.

Josephine Baker?

GBS Blogging: The Fair Use Counterfactual


Matthew Sag joins the scholarly discussion of the Google Book Search case with this draft:

I argue that the fair use issues in relation to the Google Book Search Library Project have been largely misunderstood. Although Google had a very strong set of arguments relating to fair use, it was not likely to receive the court’s unqualified approval for its massive digitization effort. Instead, the most likely outcome of the litigation was that book digitization would qualify as a fair use so long as copyright owners were given the opportunity to opt out of inclusion in the book database and that opportunity was made freely available at a cost that was essentially trivial.

From this perspective, the terms of the settlement did not differ significantly from the most likely outcome of the litigation. Essentially, the opt out that fair use would likely have required has been replaced by the ability of copyright owners to opt out of the class-action settlement and the significant opt-out and modification opportunities within the settlement itself.

I think this is an incomplete story; there’s a lot in the shape of the Consumer Purchase and Institutional Subscription programs that couldn’t have been predicted just from reading the fair use tea leaves. But Sag does a nice job connecting the dots of the parts of the story he does tell. He shows that the endless debates from the last four years over whether Google Book Search was a fair use aren’t moot in a post-settlement world; we see their shadows in the settlement itself. In the article, he also drops this tantalizing hint:

Peter DiCola of Northwestern University and I are working on a separate article in relation to the Google Book Settlement.

I await it eagerly.

Jason Scott Archiving 10 Million Expired 4Chan Threads


The fast-moving image-boards at 4chan are anonymous, transient, nasty, and insanely fecund. Now Jason Scott, rogue archivist, has gotten his hands on dozens of gigabytes of expired threads—that’s five years’ worth of raw Internet ectoplasm—and is uploading them to the Internet Archive.

If I were teaching Internet Law this fall, this would be a final exam question. Your answer should discuss the privacy, copyright, and defamation issues, but need not be limited to them. Be sure to reread section 230 carefully.

Social Security Numbers Are Not Secure


This paper from CMU’s Alessandro Acquisti and Ralph Gross is a remarkable piece of work. Starting just from publicly available records, they were able to guess SSNs with surprisingly high accuracy—up to 50%. SSNs aren’t assigned at random, so from just your birthdate and place, it only takes a few plausible inferences about the rate at which SSNs are assigned to make a good guess at your exact number.
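To make the flavor of the inference concrete, here is a minimal sketch of my own (not Acquisti and Gross’s actual model, and every number invented): if roughly sequential assignment lets you anchor the serial numbers issued in a state on two nearby dates, simple interpolation narrows the range for a birthdate in between.

#include <stdio.h>

int main(void) {
    /* Hypothetical anchors: serial numbers observed for two birthdates
       in the same state, with dates counted as days since some epoch.
       All values here are invented for illustration. */
    int day_a = 100, serial_a = 5000;
    int day_b = 130, serial_b = 6200;
    int target_day = 115;   /* the birthdate we want to guess for */

    /* The "plausible inference": assignment proceeded at a roughly
       constant rate between the two anchor dates. */
    double rate  = (double)(serial_b - serial_a) / (day_b - day_a);
    double guess = serial_a + rate * (target_day - day_a);

    printf("estimated serial: %.0f (then search a small window around it)\n",
           guess);
    return 0;
}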

It’s true, as Bruce Schneier says, that this isn’t a big deal since SSNs are already effectively public information. Unfortunately, much of the U.S. continues to lumber along under the shared fiction that SSNs are confidential, and sometimes even a reliable proof of identity. Acquisti and Gross’s paper ought to help demolish that fiction. They make the truth explicit, and thus harder to deny.

Announcement: The Public Index


The Public-Interest Book Search Initiative at New York Law School presents:

The Public Index
A Website to Study and Discuss the Google Book Search Settlement
http://thepublicindex.org

The groundbreaking proposed settlement in the Google Book Search case is so complex that controversy has outpaced conversation and questions have outnumbered answers. We aim to help close these gaps. The Public Index is a website featuring a collection of tools and resources for those wishing to learn about the settlement or to express opinions about it.

The centerpiece of the site is an interactive version of the proposed settlement. Users can search freely, browse by section, or read through it in a hierarchical view that retains the settlement’s indentation structure. Hyperlinks allow users to look up any defined term or cross-reference with a single click. A paragraph-by-paragraph commenting system allows them to annotate individual portions of the settlement with their own commentary. To encourage further discussion, the site also provides a full set of bulletin-board forums.

In addition, the Public Index offers a reading room of essential settlement-related documents:

  • a complete, categorized set of filings from the lawsuit
  • Google’s agreements with cooperating libraries
  • scholarly and popular essays from all points of view
  • a timeline with links to news about the lawsuits and settlement
  • links to a wide range of commentary on blogs

The Public Index also includes an open-source version of the New York Law School amicus brief to the court. The site includes a draft of the brief in a user-editable wiki; Public Index users are invited to mark it up with their corrections, criticisms, and suggestions. Changes from the Public Index will be incorporated into the brief before it is filed in September. Visitors are also encouraged to use the wiki to collaborate on their own, alternative amicus briefs.

Join the conversation at http://thepublicindex.org and stay tuned for more information about D is for Digitize!

The Restless Giant (Lawsuit)


The Google Book Search case appears to be gradually waking from its long summer’s nap. Objections and comments, which had slowed to a crawl in June and July, have started to pick up again. Of particular procedural interest is a filing from The Media Exchange Company, which has a pending patent application on a form of digital content exchange and would like to offer digital delivery options to copyright owners and book owners. It therefore asked the court to clarify that it would be allowed to file objections as a non-party. In what reads like a carefully worded (albeit handwritten) order, Judge Chin held:

Application GRANTED. TMEC may object as a non-class member and/or file an amicus brief. The court prefers one submission. This is without prejudice to any argument the parties may make that TMEC lacks standing to object. SO ORDERED.

Potential objectors and commenters, take note. There’s also been a sudden spike of activity on the policy front. Three essays of note have crossed my radar.

First, the EFF today launched a privacy campaign targeted at Google, asking it to commit to reader privacy protections as part of implementing the settlement. They’ve sent a letter to Google’s Eric Schmidt laying out their concerns:

Google has put extensive resources into planning how it will dramatically expand its Google Book Search service, but seems to have made woefully little effort to articulate how it intends to adequately protect reader privacy as part of this giant project. Under its current design, Google Book Search keeps track of what books readers search for and browse, what books they read, and even what they “write” down in the margins. Given the long and troubling history of government and third party efforts to compel libraries and booksellers to turn over records about readers, it is essential that Google Books incorporate strong privacy protections in both the architecture and policies of Google Book Search. Without these, Google Books could become a one-stop shop for government and civil litigant fishing expeditions into the private lives of Americans.

As you know, Google seeks court approval to digitize and make available online millions of the country’s books, a great number of them belonging to libraries. As it does so, we urge you to assure Americans that Google will maintain the security and freedom that library patrons have long had to read and learn about anything from politics to health to science without worrying that someone is looking over their shoulder or could retrace their steps.

The letter continues with five specific forms of privacy protections the EFF and its co-authors want Google to provide. Some—such as disclosing reading data to third parties only on proper judicial process—are in keeping with how Google handled sensitive search query data when the government came asking. Others—such as the ability to “give” books to each other without tracking—fit with the privacy of offline reading practices, but could be more contentious.

In addition to the letter, EFF also appears to be gearing up to file with the court. The campaign page asks authors who care about privacy to sign up by August 15.

Second, Bernard Lang, a French computer scientist with an interest in digital copyright, has written a paper on the settlement from an international perspective, with special emphasis on orphan works. He measures the settlement against the “three-step test” for deciding whether national exceptions and limitations on copyright are permissible under international copyright treaties. I don’t know enough to tell whether his conclusions are correct, but this is essential reading on the legal side. I encouraged him to join the public conversation, and I’m glad he’s been willing to share his analysis (albeit only in draft form).

Thus how could it be possible to rule in favor of a settlement that runs contrary to international treaties by allowing a clear copyright infringement, without the cover of a recognized exception or limitation? Such a limitation could only exist as some form of fair use in United States law, since there is no other explicit provision for it. However, the settlement is precisely intended to avoid having to decide on whether Google has been acting within the limits of fair use, regarding all works under copyright, including orphan ones.

It could be argued that a fair-use claim in the settlement differs from the original fair-use claim of the lawsuit. But the difference is mainly that fair use is invoked only for orphan works, since the rights holders of other works are supposedly agreeing to Google’s exploitation. Thus a major part of the requested ruling would still concern the original fair-use issues, precisely for the very subclass that is not being represented. Furthermore, the settlement involves actual exploitation of the works rather than a simple book search service, and fair use can hardly be invoked to justify direct commercial exploitation.

Of course, the United States can introduce new exceptions or limitations, either by extending fair use through some ruling, which is actually what the GBS settlement tries to avoid, or through some new act in Congress, which is what the Orphan Works Act attempted without success so far. This failure can even be interpreted a contrario as indicating that there is currently no exception or limitation in the US legal system to deal with orphan works and that a court ruling on their status could well run afoul of international regulations.

Third, David Balto, a fellow at the Center for American Progress and a prominent antitrust attorney, has a long post at the American Constitution Society’s blog responding to my Issue Brief on the settlement. He critiques my analysis of the antitrust risks and praises the settlement:

In settling the litigation, the publishers, authors, and Google have pursued a sound and necessary approach to resolving a number of rights sharing problems that, until now, have posed seemingly insurmountable hurdles to making books digitally available. For example, by creating a nonprofit organization, the Book Rights Registry (“BRR” or “Registry”), to represent the interests of authors and publishers and to locate rightsholders who have been separated from their works, the settlement will significantly enhance the ability of subsequent entities to commence book scanning initiatives. As such, the settlement should be viewed in light of what it provides for the general public—increased access to the world’s written cultural heritage, particularly books that have long been out of print. The settlement is, in other words, output-enhancing and procompetitive. …

The universally accessible, searchable, digital library that will be realized by the Google Book Search settlement will provide unprecedented benefits to consumers worldwide. The settlement is an efficient and socially beneficial solution to the significant rights uncertainty that currently surrounds many books. Many of the leading critics of the settlement, such as James Grimmelmann, have failed to appreciate these procompetitive benefits while also dramatically overstating the antitrust risks. Like the critics of Columbus’ journey, their speculation of concern is unfounded: the earth is not flat. The Google Book Settlement should unquestionably be approved.


Of course, depending on what kind of day a copper has had, there is no action, short of being physically somewhere else, that may not be construed as assault….

—Terry Pratchett, Making Money

Jason Kottke Presents “Live” Coverage of the Moon Landing


He’s stitching together YouTube videos and synchronizing them with some JavaScript. Stunts like this one are why he’s still the Walter Cronkite of bloggers.

From the Laboratorium Archives: Don’t Buy That Kindle


As I said a year and a half ago:

Amazon’s named the device the Kindle, “to evoke the crackling ignition of knowledge,” in journalist Steven Levy’s phrase. Unfortunately, the name is more revealing than intended. The only “crackling ignition” most Kindle users will hear is the sound of their e-books going up in flames.

In related news, I now own a Kindle and I love it. It’s an amazingly convenient way to read, and I expect it to cause a substantial increase in my fiction consumption. With Amazon’s help, we’re finally figuring out how to do e-books right.

That’s why it’s all the more important to get the law and policy right, too. With dead-tree media, the bookstore can’t come to your house and steal back your books, and copyright law doesn’t put pressure on them to try. It’s time for a real law of digital property, one that’s capable of trumping intellectual property, just as first sale rights in physical property trump intellectual property.

In Praise of SCUMM


In perhaps the best gaming news of the year, LucasArts has released a remake of its 1990 game The Secret of Monkey Island for PC and Xbox Live Arcade. The original was one of the best adventure games ever; many games tried to be funny, but only a few really pulled it off. Along Guybrush Threepwood’s hero’s journey to becoming a mighty pirate, the game skewers dozens of clichés of adventure gaming and pirate tales—while at the same time being a well-constructed adventure game and pirate tale. The game’s signature twist was probably insult sword-fighting; winning a duel was a matter of coming up with the right witty retort. Thus, if your opponent says:

You fight like a dairy farmer.

The proper reply is:

How appropriate; you fight like a cow.

But I digress. This isn’t a Hollywood-style “remake” in the style of The Day the Earth Stood Still (2008), The Manchurian Candidate (2004), or Alfie (2004), in which a formerly distinctive film is industrially cut, bent, and extruded into generic modernized fare. No, this is more like Psycho (1998): a shot-for-shot remake. The puzzles are the same. The dialogue is completely identical, albeit now voice-acted. The art is much nicer, but rarely departs from the colors and layout of the original. Indeed, the areas on each game screen where Guybrush can walk appear to be completely the same as in the original.

As if to show off the literal nature of this remake, there’s even a button you can press to toggle between the original game and the new version. I played most of the game in the new version (Dominic Armato was born to play Guybrush), but I switched over regularly to compare the graphics. You could, if you wanted, play the “remake” beginning to end using the old graphics and interface.

What this says to me is that the new team did something really profound in its simplicity. Monkey Island: Special Edition is not a new game. It is the original game. If you decompiled the bits on the disc, you wouldn’t just find that the new version does the same things as the old one. You wouldn’t even find that the game had been patched and modified a bit. No, you would find that many of the source files were exactly the same.

The reason this works is that the original Monkey Island, along with most of the classic LucasArts adventure games, was written using SCUMM: “Script Creation Utility for Maniac Mansion.” When creating Maniac Mansion, Ron Gilbert and his team faced two problems. First, they wanted to make the game run on multiple computers: Apple, Amiga, Commodore 64, and so on. Second, trying to specify how the game would respond to various commands and what it would show in response is a complex task, and hard to debug if you’re writing directly executable code.

The solution—as many game designers have realized—is to write your games not for a particular computer architecture, but for an abstract virtual machine: essentially a computer program that runs on a specific computer but simulates an abstract, easier-to-program-for computer. That way, you solve the porting problem by implementing only the client that simulates the virtual machine on each new platform, rather than rewriting the game from scratch. (Note that you also need to implement the client only once on each platform, no matter how many games will run on it.) You also (help) solve the complexity problem by designing a virtual machine that provides high-level commands for the tasks that recur in your games. Thus, for example, you could have a simple “walk to position X,Y” command, which, when executed, causes the virtual machine to display the animation of the character walking to that spot. Instead of writing executable code to animate each character walking in every possible way, you just supply some appropriate images of walking, and the client figures out which images to display, where, when, and at what scale.
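To make the idea concrete, here is a minimal sketch (my own invention, with made-up opcodes, not the actual SCUMM instruction set or data format) of how such a script interpreter works. The script is pure data, identical on every platform; only the small interpreter, and in a real engine the renderer behind it, has to be rewritten for each new machine.

#include <stdio.h>

enum { OP_WALK_TO, OP_SAY, OP_END };

/* A "script" as it might ship in the game's data files: pure data,
   byte-for-byte identical on every platform. */
static const int script[] = {
    OP_WALK_TO, 120, 80,   /* walk the actor to x=120, y=80 */
    OP_SAY,     0,         /* speak line 0 of the dialogue table */
    OP_END
};

static const char *lines[] = { "I'm Guybrush Threepwood, mighty pirate!" };

int main(void) {
    /* The interpreter below is the only part that must be rewritten
       for each platform; here the "rendering" is just printf. */
    for (const int *pc = script; *pc != OP_END; ) {
        switch (*pc++) {
        case OP_WALK_TO: {
            int x = *pc++, y = *pc++;
            printf("[animate actor walking to %d,%d]\n", x, y);
            break;
        }
        case OP_SAY:
            printf("Guybrush: %s\n", lines[*pc++]);
            break;
        }
    }
    return 0;
}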

The SCUMM engine was particularly elegant; it made comparatively few assumptions about the graphical and other capabilities of the computer it ran on. Thus, while Sierra games looked substantially identical for years—its toolkits imposed a standard look across generations of computers and games, then took a huge jump in graphical quality when Sierra upgraded to a new toolkit built from scratch—LucasArts games showed a steady improvement over time, with details of the interface changing in small, subtle ways from game to game. All LucasArts had to do was improve the SCUMM engine to take advantage of the new hardware features that were emerging (VGA graphics! Sound cards!) and include a new, fancier set of art resources.

You see where this is going? Twenty years later, when LucasArts decided to pull Monkey Island out of the vaults, the fact that it was built using SCUMM enormously simplified the task. Porting to the Xbox 360 today is, in essence, just like porting to a computer that existed back in the early 1990s. First, write a SCUMM client that runs on the Xbox 360 and takes advantage of its HD graphics. Second, create some new high-quality art and sound resources—more colors, more pixels, live instruments, and so on. Third, take the original game files, almost as-is, and drop them into your new SCUMM client along with the new art resources. And bingo, you have a new, high-quality remake of the game. Of course it plays like the original game; it IS the original game, just running on a different computer and with better graphics and sound.

And that’s also why The Secret of Monkey Island can toggle back and forth between “old” and “new” modes, even in the middle of a line of dialogue or even as Guybrush strides from one part of the screen to another. The game doesn’t need to keep track of the state of the world separately for “old” and “new” games in order to map between them. There is only one game; the “old” and “new” graphics are just two different ways of showing the player what’s going on in it.
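In code terms, the toggle is just a choice of rendering function over a single shared game state. A hypothetical sketch (my illustration of the design, not how the Special Edition is actually structured):

#include <stdio.h>

/* One authoritative game state, shared by both art styles. */
typedef struct { int x, y; } Actor;

static void render_classic(const Actor *a) {
    printf("16-color sprite at (%d,%d)\n", a->x, a->y);
}

static void render_hd(const Actor *a) {
    printf("HD sprite at (%d,%d)\n", a->x, a->y);
}

int main(void) {
    Actor guybrush = { 120, 80 };
    void (*render)(const Actor *) = render_classic;

    /* The toggle swaps the function pointer, never the state, so it
       can flip at any instant, even while a walk is in progress. */
    for (int frame = 0; frame < 4; frame++) {
        if (frame == 2)
            render = render_hd;   /* the player hits the toggle key */
        guybrush.x += 5;          /* the walk continues unaffected */
        render(&guybrush);
    }
    return 0;
}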

The other, very intriguing possibility that I take from this observation: now that LucasArts has a fully modern SCUMM client running on Xbox 360 and PC, bringing other classic adventure games out of the vault is no more involved a task than making new pretty pictures and fresh sound for them. This is pure speculation on my part, I admit, but a whole series of “Special Edition” re-releases would make complete business sense.

big scary dog is in your extended network


Music: Muttley Crue, Spaniel Ballet, Snoop Dog, Velvet Retriever, Bark Psychosis, Dogs, Fang Sinatra, Dog Kenneldys, Velvet Underhound, Woofman & the Side Effects, David BowWowWowie, Shirley Basset, Dog (Not Dog), Dog Speed You Black Emperor, BananaVimurana, Bone Roses, Iggy Pup, Houndgarden …

(Don’t bother clicking through; that’s it for the good jokes on the page.)

Antitrust and the Google Books Settlement: The Problem of Simultaneity


It’s another paper on the antitrust implications of the Google Book Search settlement, this being the second one out of Chicago. The author, Eric Fraser (a 2009 law school graduate, no less), zeroes in on the way in which the settlement simultaneously sets digital book prices across the whole market. As such, it can push the market to a different pricing position than it would reach under free competition. I’d like to see a more detailed analysis of whether the new prices are stable against defection, but this is an interesting contribution to the discussion.

A Box Darkly


This 2005 paper by Michael Mateas and Nick Montfort is a beautiful deconstruction of the idea of software aesthetics. Their goal is to show that the aesthetic component of programming is not exhausted by the idea of “elegance”; they prove it by giving examples of ugly code celebrated for its obscurity, complexity, and general ass-backwardness. They start by unpacking this (comparatively tame) entry from the 1984 Obfuscated C contest, showing how it’s built up from one bizarre abuse of C syntax after another.

int i;main(){for(;i["]<i;++i){--i;}"];read('-'-'-',i+++"hell\
o, world!\n",'/'/'/'));}read(j,i,p){write(j/p+p,i---j,i/i);}

After some further reflections on obfuscated code (particularly how different programming languages generate different aesthetics of obfuscation), they bring in the idea of “double coding,” programs written to have multiple simultaneous meanings in different languages. The second half of the paper then turns to esoteric programming languages (they call them “weird,” which is justifiable, if not quite standard). The tour includes Intercal, Brainfuck, and Shakespeare (plus at least one more Mystery Hunt favorite) before ending up at Malbolge, the all-but-unprogrammable programming language.
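Brainfuck in particular makes the minimalist aspect easy to appreciate: the entire language is eight one-character commands operating on a tape of byte cells, yet a complete interpreter fits in a couple dozen lines. Here is a sketch of one in C (my own, not code from the paper; it assumes well-balanced brackets and a program that fits a 30,000-cell tape):

#include <stdio.h>

static void run(const char *p) {
    static unsigned char tape[30000];   /* all cells start at zero */
    unsigned char *c = tape;
    for (; *p; p++) {
        switch (*p) {
        case '>': c++;            break;  /* move the tape head right */
        case '<': c--;            break;  /* move the tape head left  */
        case '+': (*c)++;         break;  /* increment current cell   */
        case '-': (*c)--;         break;  /* decrement current cell   */
        case '.': putchar(*c);    break;  /* output current cell      */
        case ',': *c = getchar(); break;  /* read one byte of input   */
        case '[':                 /* if cell is zero, skip loop body  */
            if (!*c) for (int d = 1; d; ) { p++; d += (*p=='[') - (*p==']'); }
            break;
        case ']':                 /* if cell is nonzero, loop back    */
            if (*c)  for (int d = 1; d; ) { p--; d += (*p==']') - (*p=='['); }
            break;
        }
    }
}

int main(void) {
    run("++++++++++[>++++++++++<-]>++++.+.");  /* prints "hi" */
    return 0;
}

The sample program builds up the character codes for “h” and “i” arithmetically, which is why even a two-letter greeting in Brainfuck reads like line noise.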

I could not possibly improve on this passage from their conclusion (emphasis mine):

By commenting on the nature of programming itself, weird languages point the way towards a refined understanding of the nature of everyday coding practice. In their parody aspect, weird languages comment on how different language constructions influence programming style, as well as on the history of programming language design. In their minimalist aspect, weird languages comment on the nature of computation and the vast variety of structures capable of universal computation. In their puzzle aspect, weird languages comment on the inherent cognitive difficulty of constructing effective programs. And in their structured play aspect, weird languages comment on the nature of double-coding, how it is that programs can simultaneously mean something for the machine and for human readers. All of these aspects are seen in everyday programming practice. Programmers are extremely conscious of language style, of coding idioms that not only “get the job done”, but do it in a way that is particularly appropriate for that language. Programmers actively structure the space of computation for solving specific problems, ranging from implementing sub-universal abstractions such as finite-state machines for solving problems such as string searching, up to writing interpreters and compilers for custom languages tailored to specific problem domains, such as Perl for string manipulation. All coding inevitably involves double-coding. “Good” code simultaneously specifies a mechanical process and talks about this mechanical process to a human reader. Finally, the puzzle-like nature of coding manifests not only because of the problem solving necessary to specify processes, but because code must additionally, and simultaneously, double-code, make appropriate use of language styles and idioms, and structure the space of computation. Weird languages thus tease apart phenomena present in all coding activity, phenomena that must be accounted for by any theory of code.

Caribbean Holiday


Oliver Cromwell … was so delighted to hear of the capture of Jamaica that he took the rest of the day off.

—Alan Beattie, False Economy: A Surprising Economic History of the World, p. 181

An Introduction I’d Like to Read


The question of how to resolve lawsuits has long preoccupied scholars. Indeed, given the centrality of lawsuits in our legal system, it is the critical question that any theory of litigation must confront. To date, scholars have split in their answers. Some have argued that the plaintiff should win; others have argued that the defendant should prevail. Those who favor plaintiffs (the “pro-Ps” for short) have pointed to the distributive justice advantages of allocating resources to those members of society desperate enough to file suit. In reply, however, those who favor defendants (the “pro-Ds” for short) point to the enormous administrative cost savings of a rule that would terminate all lawsuits as soon as they are filed. Both sides have produced innumerable models to justify their positions, but the debate has raged on.

This Article shows that both groups are right—and both are wrong. It introduces a new theory, called Weighted Judging, which moves beyond the false dichotomy of the pro-P/pro-D debate by integrating both their positions into a single coherent whole. Sometimes the pro-Ps are right, and the plaintiff should win. Sometimes, however, the pro-Ds are right, and the defendant should win. To resolve this tension, Weighted Judging says that courts should adopt a rule that combines the pro-P and pro-D rules: decide in favor of the party that supplies more total evidence. When the plaintiff supplies more evidence, Weighted Judging reduces to the pro-P rule. When the defendant supplies more evidence, Weighted Judging reduces to the pro-D rule. It therefore combines the best features of both pro-P and pro-D theories, while supplying courts with a simple-to-administer rule that gives both parties to a lawsuit the proper incentive to maximize the supply of evidence.

In Part I, this Article examines the history of the Pro-P/Pro-D debate. …

Silly Symbol


I was filling out an affidavit today and I came across the letters “SS” at the top, next to the lines where I was meant to enter the state and city. Was this something to do with signatures? My social security number? No. It is, I discovered, completely meaningless:

First, it is contended that the complaint was insufficient to confer jurisdiction upon the police judge to cause the arrest, because of the absence of a venue. That is, as we understand counsel, because the charging part of the document is not preceded by the words: State of Nebraska, Otoe County, ss. We think this objection is not well taken. There is no peculiar virtue in the cabalistic characters “SS,” which are presumed to have been anciently symbolical of something, but nobody knows precisely what. The complaint appears upon its face to have been sworn to before a peace officer of Otoe county, whom it explicitly informs of the commission, by the persons therein named, of an alleged criminal act within that county. This is the sole purpose of a venue, and we think it may as well be expressed in “ordinary and concise” English, as in a supposed abbreviation of long disused and perhaps not strictly correct Latin.

—Seay v. Shrader, 69 Neb. 245, 247–48, 95 N.W. 690, 691 (1903)

In fact, though, it is a flourish deriving from the Year Books—an equivalent of the paragraph mark “¶.” … An early formbook writer incorporated it into his forms, and ever since it has been mindlessly perpetuated by one generation after another.

—Bryan A. Garner, A Dictionary of Modern Legal Usage (2d ed. 1995)

Hanoch Dagan Misses the Point


The right to exclude, he says, is not property law’s defining feature; instead, “manifestations of inclusion are just as intrinsic to property as those of exclusion, and should not be analyzed as external limitations or impositions.” There is a subtle mistake here. Inclusion—situations in which non-owners have rights to use property—is indeed important to the values that animate property. Inclusion is indeed almost everywhere in property law. But the conclusion does not follow. Property law could exist without inclusion, whereas property law without exclusion would be inconceivable.

Public Access, Privacy Law


I’ll be on public access TV later this month. Tune in to the Manhattan Neighborhood Network, channel 34, at 10:30 PM on July 28, and you’ll get to see me talk about Facebook and privacy. It’s part of the “Media Reporter” show produced out of NYLS; the interviewer is a former student of mine, Cliff Merin, who’s much more comfortable in front of a camera than I am.

If you’re not in the New York area, or (like me) don’t have cable, you can watch live on the 28th.

There Will Be Blood I give it 3 stars


This is a composite rating. I gave it four stars, Aislinn two. I thought it had compelling visuals and was grippingly original. She says it had annoying music and everything was brown. Perhaps because our disagreement was manifest after the first ten minutes, it took us two months to get around to watching the rest.

My bottom line: overrated but still good.

Her bottom line: The Jungle would have made for a better movie.