Computer Crime Law Goes to the Casino


(Cross-posted from Concurring Opinions.)

Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.

House of Video Games

Kane found the bug in Game King video poker machines manufactured by IGT. He and Nestor used it to take casinos in Nevada and Pennsylvania for hundreds of thousands of dollars. Here’s how Poulsen describes their technique (for more details, see “Exhibit 1” here):

Kane began by selecting a game, like Triple Double Bonus Poker, and playing it at the lowest denomination the machine allows, like the $1.00 level. He kept playing, until he won a high payout, like the $820 at the Silverton.

Then he’d immediately switch to a different game variation, like straight “Draw Poker.” He’d play Draw Poker until he scored a win of any amount at all. The point of this play was to get the machine to offer a “double-up”, which lets the player put his winnings up to a simple high-card-wins draw. …

At that point Kane would put more cash, or a voucher, into the machine, then exit the Draw Poker game and switch the denomination to the game maximum — $10 in the Silverton game.

Now when Kane returned to Triple Double Bonus Poker, he’d find his previous $820 win was still showing. He could press the cash-out button from this screen, and the machine would re-award the jackpot. Better yet, it would re-calculate the win at the new denomination level, giving him a hand-payout of $8,200.
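
For concreteness, here is a minimal sketch of the flaw as Poulsen describes it, in Python. The class, method names, and internal representation are my own guesses, not IGT’s actual firmware; the key assumption is that the machine stores the pending win as a count of credits and recomputes its dollar value at cash-out, so that raising the denomination multiplies the payout.

```python
# Hypothetical model of the Game King payout bug. Nothing here is IGT's
# real code; it only illustrates the stale-state failure described above.

class GameKingSketch:
    def __init__(self):
        self.denomination = 1.00      # dollars per credit
        self.pending_win_credits = 0  # the stale state the exploit relies on

    def record_win(self, dollars):
        # A win is stored as credits at the *current* denomination.
        self.pending_win_credits = int(dollars / self.denomination)

    def switch_game(self):
        # The bug: switching game variations fails to clear the old win.
        pass

    def set_denomination(self, dollars_per_credit):
        self.denomination = dollars_per_credit

    def cash_out(self):
        # Payout is recomputed from credits at the *new* denomination.
        payout = self.pending_win_credits * self.denomination
        self.pending_win_credits = 0
        return payout

machine = GameKingSketch()
machine.record_win(820.00)       # the $820 win at the $1.00 level
machine.switch_game()            # detour through Draw Poker and back
machine.set_denomination(10.00)  # raise to the $10 game maximum
print(machine.cash_out())        # 8200.0 -- the re-awarded jackpot
```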

They were charged under paragraph (a)(4) of the CFAA, which punishes:

Whoever—knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value … (emphasis added)

The sticking point in the charge is whether they “exceeded authorized access.” The government’s theory is that they exceeded their authorization by using the double-up switch to increase the payout from the first win. Their response, as summarized by Poulsen, is that they “played by the rules imposed by the machine.”

The Usual Statutes

There are, broadly speaking, two ways that a computer user could “exceed[] authorized access.” The computer’s owner could use words to define the limits of authorization, using terms of service or a cease-and-desist letter to say, “You may do this, but not that.” Or she could use code, by programming the computer to allow certain uses and prohibit others.

The conventional wisdom is that word-based restrictions are more problematic. Take the infamous Lori Drew case. She created a MySpace account for a fictional teen, “Josh Evans,” to flirt with and then cruelly reject Megan Meier, a thirteen-year-old neighbor who then committed suicide. A federal prosecutor charged Drew under the CFAA, for violating the MySpace terms of service, which prohibited providing false information in the sign-up process. Drew behaved reprehensibly, but if she was a computer criminal, then so are the millions of Americans who routinely violate terms of service. As explained by Judge Kozinski in the recent case of United States v. Nosal:

Or consider the numerous dating websites whose terms of use prohibit inaccurate or misleading information. Or eBay and Craigslist, where it’s a violation of the terms of use to post items in an inappropriate category. Under the government’s proposed interpretation of the CFAA, posting for sale an item prohibited by Craigslist’s policy, or describing yourself as “tall, dark and handsome,” when you’re actually short and homely, will earn you a handsome orange jumpsuit. (citations omitted)

The scholarly consensus is similar. The leading article is Orin Kerr’s Cybercrime’s Scope from 2003, which argues that reading the CFAA to encompass terms of service violations “grants computer network owners too much power to regulate what Internet users do, and how they do it.” In contrast, argue Kerr and many others, the CFAA should be reserved for real hacking cases: as he puts it, the “circumvention of code-based restrictions.”

Unfortunately, it’s surprisingly hard to decide whether a user has gone beyond a code-based access barrier of the sort that should trigger the CFAA. Take this blog post in which Kerr tries to sort out authorized from unauthorized access using six hypotheticals about code-based restrictions. He argues that guessing a user’s password is “one of the paradigmatic forms of unauthorized access” but that guessing a unique URL is not, because “you can’t post stuff on the web for anyone to see and then just hope that only the right people happen to look at the right pages.”

This is a fine distinction indeed. According to Kerr, a user who views a confidential document by typing “eOH7KvedHxS3iYRa” into a text box on a webpage is a computer criminal, but a user who views a confidential document by typing “?pw=eOH7KvedHxS3iYRa” into the browser’s URL bar should go free. It’s the same information, being used for the same purpose, in almost the same way.
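
To see just how thin the line is, here is a short Python sketch of the two requests. The hostname and parameter name are invented for illustration; the point is that both deliver the identical secret to the same server, differing only in where it rides.

```python
# Two ways to send the same secret; the host and parameter names are
# hypothetical. One travels in a POST body (the "password box" route),
# the other in the URL's query string.
import urllib.parse
import urllib.request

SECRET = "eOH7KvedHxS3iYRa"

# The "text box" route: the secret goes in the request body.
box_request = urllib.request.Request(
    "https://docs.example.com/view",
    data=urllib.parse.urlencode({"pw": SECRET}).encode(),
)

# The "URL" route: the same secret goes in the query string.
url_request = urllib.request.Request(
    "https://docs.example.com/view?" + urllib.parse.urlencode({"pw": SECRET})
)

# Under Kerr's line, the first request is a crime and the second is not.
```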

The Australian Job

To get a sense of why these cases can be so difficult, consider an Australian case, Kennison v. Daire, in which the defendant was convicted of larceny for stealing AU$200:

He was the holder of an Easybank card which enabled him to use the automatic teller machine of the Savings Bank of South Australia to withdraw money from his account with that bank. … Before the date of the alleged offence, the appellant had closed his account and withdrawn the balance, but had not returned the card. On the occasion of the alleged offence, he used his card to withdraw $200 from the machine at the Adelaide branch of the bank. He was able to do so because the machine was off-line and was programmed to allow the withdrawal of up to $200 by any person who placed the card in the machine and gave the corresponding personal identification number. When off-line the machine was incapable of determining whether the card holder had any account which remained current, and if so, whether the account was in credit.
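
The decision rule the court describes is simple enough to sketch. What follows is my own reconstruction in Python, not the bank’s code; `balance is None` stands in for a closed or nonexistent account.

```python
# Reconstruction of the offline ATM rule described in Kennison v. Daire.
# The function and its signature are invented for illustration.
OFFLINE_LIMIT = 200  # AU$

def authorize_withdrawal(card_pin, entered_pin, amount, online, balance):
    """balance is None when the account is closed or unknown."""
    if entered_pin != card_pin:
        return False
    if online:
        # Online, the ledger is available: closed accounts are refused.
        return balance is not None and balance >= amount
    # Offline, the machine cannot consult the ledger at all, so it falls
    # back to a flat cap -- the gap Kennison exploited.
    return amount <= OFFLINE_LIMIT

# Kennison's withdrawal: correct PIN, machine offline, account closed.
print(authorize_withdrawal("1234", "1234", 200, online=False, balance=None))
# True -- the code "authorizes" what the bank never intended to allow.
```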

But Kennison raised a fascinating defense. He argued that the bank had “consented” to the withdrawal by programming the ATM to pay out money without checking the account balance when offline. He had a point; the bank had indeed programmed the ATM that way. It wasn’t as though he’d used a blowtorch to cut a hole in the side of the ATM, or pointed a gun at a teller.

Once you see the consent argument, you can’t unsee it. Perhaps MySpace “consented” to Lori Drew’s fake account by letting her create it. Perhaps IGT “consented” to Kane’s winning plays by programming the Game King to give him money. And so on. In any CFAA case, the defendant can argue, “You say I shouldn’t have done it, but the computer said I could!”

But Kennison lost. The High Court of Australia brushed off his consent argument:

The machine could not give the bank’s consent in fact and there is no principle of law that requires it to be treated as though it were a person with authority to decide and consent. … It would be quite unreal to infer that the bank consented to the withdrawal by a card holder whose account had been closed.

In other words, the “authorization” conferred by a computer program—and the limits to that “authorization”—cannot be defined solely by looking at what the program actually does. In every interesting case, the defendant will have been able to make the program do something objectionable. If a program conveys authorization whenever it lets a user do something, there would be no such thing as “exceeding authorized access.” Every use of a computer would be authorized.

Interpretive Layer Cake

Is it possible to salvage the code-based theory of authorization? Arguably, yes. We could say that Kennison knew that he no longer had an account with the bank, that we ordinarily use ATMs to withdraw money that we have previously deposited, that the ATM would not have let him withdraw the money if it had been online, and that a teller who could have observed the transaction would have vetoed it. We could call these social norms, or background facts, or context, but by whatever name, they suggest that a reasonable person in Kennison’s position would have recognized that the offline withdrawal was an unauthorized exploit rather than an authorized disbursement of money.

This analysis is normatively and legally plausible. But notice what the approach requires. It requires us to ask what a person in the defendant’s position would have understood the computer’s programmers as intending to authorize. What the program does matters, not because of what it consents to, but because of what it communicates about the programmer’s consent.

In other words, both word-based and code-based theories of authorization require an act of interpretation. To convict a defendant under a word-based theory, we must interpret terms of service; to convict a defendant under a code-based theory, we must interpret the code. This is not “interpretation” in the computer-science sense of running the program and seeing what happens. This is “interpretation” in the literary sense of ascribing a meaning to a text. Computer programs are texts, and in this context they convey meaning to human interpreters as well as to electronic ones.

Kennison’s case, then, involves an ambiguous text. The ATM that lets card-holders withdraw money from closed accounts when offline is susceptible to multiple meanings. It could be interpreted to authorize such withdrawals; it could be interpreted to prohibit them. The court resolved the ambiguity against Kennison, using some of the same interpretive devices it would apply to a statute or a contract. Indeed, we could say that the court quickly reached the limits of interpretation and exhausted the program’s linguistic meaning. It was forced to resort to an act of construction in determining the legal consequences of the ATM’s programming.

The same will be true in any other case of a code-based access restriction. The text—the program—will be capable of supporting at least two meanings. It can have the meaning that corresponds to its behavior: for example, paying out when the user switches games rather than doubling up. Or it can have another meaning, one that the programmer says corresponds to her true intentions: for example, not paying when the user switches games rather than doubling up. The ubiquity of bugs demonstrates that these two meanings will frequently diverge.
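
To make the divergence concrete: in the Game King sketch above, the gap between the program’s behavior and its programmer’s likely intention is a single line. A hypothetical fix might be no more than this:

```python
# Hypothetical repair to the earlier GameKingSketch: clear the stale win
# when the player switches game variations, enforcing the intended meaning.
def switch_game_fixed(machine):
    machine.pending_win_credits = 0
```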

The distinction between “bug” and “feature” on which all of these cases turn is a social fact, not a technical one. That’s why Kerr can draw his line between text boxes and URLs. In his experience, text boxes for passwords are used to signal a level of security and confidentiality that complicated URLs are not. The distinction is plausible. It is also profoundly contingent on the habits of programmers — and it is far from clear that we should expect users to know about this line.

Three Game Kings

It should now be clear why code-based CFAA cases can be so puzzling. Consider Kane’s trick with the Game King. Were his jackpots “authorized”? In hindsight, IGT and the casinos would say “no”: IGT promptly released a patch once it realized how the double-up switch worked; the casinos installed it. But there is a difference between later regretting letting someone gamble in a particular way and prohibiting it at the time. Casinos regret letting card-counters play blackjack, but counting cards is only illegal if you use a device to keep count. So the casinos’ private intentions are irrelevant; what matters is what they communicated to the reasonable video poker player.

There are two different sets of rules at work on a video poker machine: the rules of the game of chance being simulated, and the rules of the software that simulates it. The two must correspond: it’s illegal for a casino to deploy a machine that isn’t actually random. The best argument that Kane violated the software’s rules is that his big jackpots didn’t correspond to a legal play according to the rules of Triple Double Bonus Poker. You can’t change the stakes after you’ve won the hand but before you rake in the pot.

But wait. Triple Double Bonus Poker exists only on Game King machines; why shouldn’t it work that way? And perhaps gambling software is different than other kinds of software, since the underlying game is adversarial. In offline poker, players are expected to take full and ruthless advantage of their opponents’ mistakes. Deciding whether Kane was “authorized” to play as he did requires passing judgment not just on technical questions about how the Game King works, but also on social and normative questions about the experience of regulated gambling in America.

I don’t want to deny the possibility of reaching convincing answers in this and other unauthorized-access cases. I just want to point out that by making the CFAA turn on authorization, we have committed ourselves to a messy, fact-laden inquiry, one that cannot be resolved solely by reference to the technical facts of the software in question. We have to ask how people generally use computers, and how we want them to use computers. And this messy, fact-laden inquiry is in significant tension with the goal of making easily understood laws that draw clear lines around what is and is not allowed.

This tension may sound familiar. It is one of the problems scholars have identified with word-based restrictions on computer use. Orin Kerr has argued in his scholarship, and convinced the judge in the Lori Drew case, that a terms-of-service-based theory of unauthorized access is unconstitutionally vague, because it is too hard for reasonable people of ordinary prudence to learn what the terms require and to obey them. But the same could be said about rules of conduct embedded in software. Code is law, unfortunately.

The irony runs deeper still. Words don’t have to be vague or ambiguous. Craigslist sent 3Taps a cease-and-desist letter telling it to stop scraping Craigslist’s listings. The resulting lack of authorization was crystal clear. Words work for saying things; that’s why we use them.

In contrast, code is a terrible medium for communicating permission and prohibition. Software is buggy. It doesn’t explain itself. Not even programmers themselves can draw a bright line between features and bugs. If only there were some way for users to know, in so many words, what they are and aren’t allowed to do …

The Sting in the Tail

If we are concerned about terms of service liability under the CFAA, we should be even more concerned about code-based liability. The problem with the CFAA is not some recent mutation of a law that has outgrown its original purpose. The problem was there all along; it was inherent in the very project of the CFAA. The call is coming from inside the statute.

If we as a society care about stopping online banking fraudsters, email eavesdroppers, botnet barons, and unrepentant spammers, then we will need to continue to declare some uses of computers off-limits, on penalty of prison. But as a basis on which to do it, “without authorization or exceeding authorized access” is remarkably unhelpful.

The basic task of an anti-hacking law is to define hacking. You had one job, CFAA. One job.

(This post is based on notes I have been making towards an article on the legal interpretation of software; it was spurred by an exchange with Tim Lee on Twitter yesterday.)


Thanks for this, James.

Particularly if you are continuing to explore this issue for a future article, you might want to add to your consideration code-based “mistake” prices, which happen with all sorts of goods and services but which are especially problematic with travel services where reservations (and sometimes payment) are made long in advance, so the “mistake” is often detected before the travel services are actually provided:

http://hasbrouck.org/blog/archives/001978.html

In my capacity as a consumer advocate and travel journalist, I’d welcome your thoughts on this incident, and those like it.


Good question, Ed. These questions will tend mostly to be answered by contract law, which is not my specialty. A few thoughts, though:

One issue is whether each party knows or should have known that the other was making a mistake about some question of fact. What computers do is introduce a new mechanism by which mistakes can be made. A second issue, which you quite appropriately call attention to, is the degree to which the contract allows one party or the other to cancel, and on what terms. And a third issue, which came up in your Costanoa stay, is reliance. Even if the contract fails as a contract, you relied on the reservation in arriving there late and by bike, and a reservation should reasonably induce that kind of reliance.


James - since you use confidentiality and reliance as signals here: what do you think about this idea of the broader leverage of social expectation of confidentiality? http://www.theatlantic.com/technology/archive/2013/05/how-to-fight-revenge-porn/275759/

(And hello Ed - I was just talking about you recently in a conversation with an arbitrager about TravelFox! I’d welcome your thoughts on that on your blog.)