The Laboratorium
September 2014

This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at

Stanford, Google, Privacy, Money, Ethics: A Correction

I am quoted in this ProPublica article about “Stanford’s promise not to use Google money for privacy research.”

“It’s such an etiquette breach, it tells you something is really sensitive here,” said James Grimmelmann, a University of Maryland law professor who specializes in Internet law and online privacy. “It’s fairly unusual and kind of glaring to have that kind of a condition.” …

For instance, some of the non-privacy research at Stanford’s Center for Internet and Society could be more closely related to privacy than it appears, Grimmelmann said.

Take copyright. A study on the increasing popularity of e-books could lead to the topic of e-book piracy, which could lead to the idea of publishers requiring readers to log in, a practice that could make users’ reading habits much easier to track – a clear-cut privacy issue.

“Some of the best copyright scholarship of recent decades … couldn’t have been carried out at the Stanford CIS under the terms of the grant you described to me,” Grimmelmann said. “So a commandment that ‘Thou Shalt Not Study X’ also interferes with the study of the rest of the alphabet.”

I have now had a chance to read the legal filing in which, according to the article, “Stanford University recently declared that it will not use money from Google to fund privacy research at its Center for Internet and Society.” I stand by my statements that such a pledge would be both unusual and problematic for academic integrity. But reading the underlying “promise” in context, I now do not believe Stanford made such a pledge.

Specifically, the filing is the Stanford CIS’s application for distribution of “cy pres” funds from a class-action settlement in a privacy case against Google. It is common in such cases for a court to give settlement funds to charitable or public-interest organizations when it would be difficult, impossible, or wasteful to give money directly to class members. The CIS applied for the funding to support privacy research on mobile privacy and privacy-enhancing technologies, analysis of state privacy laws, and educational speakers on privacy.

The crucial passage occurs in the application’s discussion of a potential conflict of interest in using Google settlement funds when the CIS already receives funding from Google.

Per Stanford University policy, all donors to the Center agree to give their funds as unrestricted gifts, for which there is no contractual agreement and no promised products, results, or deliverables. Stanford has strict guidelines for maintaining its academic autonomy and research integrity. CIS complies with all these guidelines, including the Conflicts of Commitment and Interest section of the Stanford Research Policy Handbook. Stanford policies provide explicit protection against sponsors who might seek to direct research outcomes or limit the publication of research.

Since 2013, Google funding is specifically designated not be used for CIS’s privacy work. CIS’s academic independence is illustrated by the following work by Privacy Director Aleecia M. McDonald and CIS Junior Affiliate Scholar Jonathan Mayer, which may not accord with Google’s corporate interests: [list of projects]

The phrase “specifically designated not be used” is ambiguous and unfortunate. But in a blog post, CIS Civil Liberties Director Jennifer Granick states, “[T]he designation to which we were referring is an internal SLS/CIS budgeting matter, not a policy change, and we very well may decide to ask the company for a gift for privacy research in the future. But in 2013, we had other funding sources for our consumer privacy work, and so we asked for, got, and designated Google money to be used for different projects.” This sounds like a standard academic grant: a request to support specific work, which takes the form of an unrestricted gift, and which is not accompanied by a promise that the work will not touch on a particular subject.

It would have been better to use different language in the filing. It would have been better still not to have applied for the cy pres funds. But I am not convinced that the CIS made a “promise” of the particularly problematic sort I understand was at stake: a pledge to prevent funds from being used in connection with research on a specific subject. I am sorry that I commented based on a reporter’s description of the filing rather than asking to see it myself.

Facebook and OkCupid’s Experiments Were Illegal

You may remember Facebook’s experiment with emotionally manipulating its users by manipulating their News Feeds. And OkCupid’s experiment with lying to users about their compatibility with each other. And the withering criticism directed at both companies. (I maintain an archive of news and commentary related to the studies.)

At the time, my colleague Leslie Meltzer Henry and I wrote letters to the Proceedings of the National Academy of Sciences, to the federal Office for Human Research Protections, and to the Federal Trade Commission. Our letters detailed the serious ethical problems with the Facebook study: Facebook obtained neither the informed consent of participants nor approval from an IRB.

Today, we’re back, because what Facebook and OkCupid did was illegal as well as unethical. Maryland’s research ethics law makes informed consent and IRB review mandatory for all research on people, even when carried out by private companies. As we explain in a letter to Maryland Attorney General Doug Gansler, Facebook and OkCupid broke Maryland law by conducting experiments on users without informed consent or IRB review. We ask Attorney General Gansler to use his powers to compel the companies to stop experimenting on users until they come into compliance with the Common Rule.

Another provision of the Maryland law requires all IRBs to make their minutes available for public inspection. In July, Leslie and I wrote to Facebook and to OkCupid requesting to see their IRBs’ minutes, as is our right under the law. Facebook responded by conceding that it conducts “research” on its users, but refused to accept that it had any obligations under the law. OkCupid never responded at all. Since complying with a request for IRB minutes would be straightforward at any institution with an IRB, the most natural interpretation is that neither of them has an IRB—an open-and-shut violation of the law.

I have written an essay on Medium—“Illegal, Immoral, and Mood-Altering”—discussing in more detail the Maryland law and Facebook and OkCupid’s badly deficient responses to it. I hope you will read it, along with our letter to Attorney General Gansler, and join us in calling on him to hold these companies to account for their unethical and illegal treatment of users.

U2 4 U

(An essay in tweets)

Say what you will about U2 4 U, it was the most Steve Jobs-ian stunt Apple has pulled in years: heartfelt, egomaniacal, and grandiose.

Personally, I wouldn’t have minded having the U2 album show up in my iTunes if it had actually, y’know, shown up.

After various glitches, I ended up with multiple copies of some songs and none of others. So not quite the best advertisement for Apple Pay.

The album itself is no worse than U2’s other recent work, but no better either.

All that said, there is something to the point that Apple crossed some kind of line by putting the album on people’s devices without consent.

U2 4 U gave users an uncanny glimpse of the power that lies behind cloud technology. It was a Goffmanian gaffe.

In this respect, it’s much like Amazon’s deleting 1984 from Kindles, as @ScottMadin observes: a reminder that someone has such power.

People’s trust in the cloud — in technology — is based on a trust that it will work predictably and at their direction.

So when Apple drops U2 on your iPhone, it shatters the illusion that your iPhone just works on its own, which is deeply unsettling.

Apple of all companies — having invested so much in convincing users that their devices Just Work — should have been alert to the dangers.

It was basically harmless — but in the same way that skidding and then regaining control of your car is “harmless.” You’re still rattled.

The general principle here isn’t quite consent, because to talk about when your consent is needed, we need to know what counts as “yours.”

To take a first cut at it: devices are yours, so are things you paid for access to, and things you make, and collections you curate.

Apple’s flub was the same as Amazon’s with 1984, and Twitter’s with tweet injection. This is what’s wrong with malware, and Yahoo shutdowns.