This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at http://laboratorium.net
Today’s computer science blooper comes from Perfect 10 v. Google, Inc., 416 F. Supp. 2d 828, 847 n.13 (C.D. Cal. 2006):
For example, a typical full-size image might be 1024 pixels wide by 768 pixels high, for a total of 786,432 pixels worth of data. A typical thumbnail might be 150 pixels wide by 112 pixels high, for a total of only 16,800 pixels. This represents an information loss of 97.9% between the full-size image and the thumbnail.
The court here seems to believe that if a full-size image has N times as many pixels as a thumbnail, then the full-size image necessarily has N times as much information. False. Consider an all-black image (Kazimir Malevich as digital artist, perhaps). From a 1000x1000 all-black image, we can make a 100x100 all-black thumbnail. If we then blow up the thumbnail by a factor of 10, we get back not a blurred approximation of the original image, but the original image itself!
In general, making a thumbnail doesn’t lose as much information as a raw pixel count would suggest, because most of the images of which we might make thumbnails have substantial regularities. Some of the fine detail may be lost, but it’s not as though every last pixel is equally important. Unless the thumbnail is particularly small, it’s likely to capture much of the essence of the original. Calculating the exact degree of information loss depends on the encoding and the particular image involved, but dividing pixel counts is quite likely to vastly overestimate the loss.
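The all-black example can be checked directly. Here is a minimal pure-Python sketch: it uses zlib compression as a crude stand-in for information content, and all of the image-handling helpers (`black_image`, `upscale_nn`) are made up for illustration. The point is that the two images differ 100-fold in pixels but hardly at all in information, and the thumbnail reconstructs the original perfectly:

```python
import zlib

def black_image(w, h):
    # Raw 8-bit grayscale image, every pixel black (value 0).
    return bytes(w * h)

def upscale_nn(img, w, h, factor):
    # Nearest-neighbor upscale of a raw grayscale image.
    out = bytearray()
    for y in range(h * factor):
        row = img[(y // factor) * w:(y // factor) * w + w]
        for x in range(w * factor):
            out.append(row[x // factor])
    return bytes(out)

full = black_image(1000, 1000)   # 1,000,000 pixels
thumb = black_image(100, 100)    # 10,000 pixels: a 100:1 pixel ratio

# By the court's pixel-count logic, the thumbnail holds 1% of the information.
pixel_ratio = len(full) / len(thumb)
print(pixel_ratio)  # 100.0

# Lossless compression, a rough proxy for information content, disagrees:
# both images squeeze down to a tiny fraction of their raw pixel counts,
# and nothing like a 100:1 gap separates them.
full_c = len(zlib.compress(full))
thumb_c = len(zlib.compress(thumb))
print(full_c, thumb_c)

# And blowing the thumbnail back up reproduces the original exactly.
assert upscale_nn(thumb, 100, 100, 10) == full
```

For a photograph the reconstruction would of course be lossy, but the compression comparison still holds in milder form: the regularities that make thumbnails useful are the same regularities that make raw pixel counts a bad measure of information.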
This was actually part of Perfect 10’s point in the litigation—that the thumbnail was a reasonable substitute for the original because it captured some of the original’s gestalt. This is also the reason that Google offers thumbnails. If they really were only 2% as information-rich as the original, you can bet that Google Image Search would be painful to use.
Today is a dark, dark day.
Peter S. Jenkins, Historical Simulations - Motivational, Ethical and Legal Issues, 11 J. Futures Stud. 23, Aug. 2006:
A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so. It is thus highly probable that we are a form of artificial intelligence inhabiting one of these simulations. To avoid stacking (i.e. simulations within simulations), the termination of these simulations is likely to be the point in history when the technology to create them first became widely available, (estimated to be 2050). Long range planning beyond this date would therefore be futile.
This is a nearly perfect abstract. My only complaint is that it does not accurately reflect what is actually in the paper.
I may be on KUOW’s The Works tomorrow night from 8 to 9 PM PDT. I just finished taping a ten-minute interview on Creative Commons licensing, with some discussion of copyrights and Zunes. I sound like a blithering idiot in places, and I know I’ll hate the way my voice sounds on the air, but it was fun to do. John Moe, the host, was quite nice, as was the producer, and the whole experience was a good endorsement for public radio.
UPDATE: Ah, well. The podcast appears to be up, and I don’t appear to be on it. It’s just as well, both for me and for the listening public.
From page 94:
If Homer had not lived, eventually someone else would have written a poem about revenge, gods, and a war over a beautiful woman. Yet once the Iliad is in existence, it becomes hard to determine whether subsequent authors of works on these themes are copying the Iliad or copying life.
Lots of good stuff in here, by the way. Their discussion of the different sorts of uncopyrightable ideas in fiction and nonfiction (from which the above is taken) has cleared up a number of issues that had been troubling me, and I find myself flagging a passage for later reference every twenty pages or so.
I am exactly 38 pages into the Landes-Posner Economic Structure of Intellectual Property Law and I have just read something that makes me question whether some of their other reasoning might be similarly shaky.
They’re discussing a feature of some copyright systems under which an author has a right to reclaim the copyright after 35 years (in the American system) or to claim a royalty on a later resale of their work (in some other systems). Their argument is that “giving” authors these inalienable rights is actually bad for authors. In exchange for a right that may only vest 35 or more years in the future, the author is forced to give up some money now. For a literally starving artist, food money on hand now would beat royalties 35 years from now. With this I generally agree. But I think they push the point too far when they write:
Economic analysis suggests, contrary to intuition, that these laws reduce the incentive to create intellectual property by preventing the author or artist from shifting risk to the publisher or dealer. He is prevented because he cannot contract away his right of reclamation. A publisher who must share any future speculative gains with the author will pay him less for the work, so the risky component of the author’s expected remuneration will increase relative to the certain component. If risk-averse, the author will be worse off as a result.
So far, so good. (This line of analysis is why economic defenses of reclamation rights tend to emphasize that what authors lose in bargaining freedom now they will gain by receiving rights in the future at a time when they have more power to demand a better deal. The defense must concede that the rule of inalienability is a restriction on the author’s present freedom.) But then they push on:
And if he dies before the event that vests his right (the passage of thirty-five years, in the case of the recapture right, or the resale of his work, in the case of droit de suite), he will have received no part of the value of the copyright that survived that event, since he was not permitted to sell that value.
A bridge too far for the Dr. Evil and the Mini-Me of the economic analysis of law. The chance that the author might die before exercising the recapture right increases the value of that right in the hands of a potential purchaser. In the negotiations over the sale of the original copyright, the author should be able to demand slightly more from the publisher in exchange for this possibility. Thus, authors in general receive slightly more as a result of the possibility of death before recapture. Some authors—those who live to exercise the recapture right—receive both this “slightly more” and the recapture right itself. Other authors receive only the “slightly more.” They didn’t receive much of the value of the copyright surviving the recapture event, but they were compensated up front for part of the value.
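The bargaining logic can be made concrete with a toy expected-value calculation. All of the figures below (the post-recapture value V and the survival probability p) are hypothetical, chosen only to illustrate the argument; they are not drawn from Landes and Posner:

```python
# Hypothetical figures, for illustration only.
V = 100_000   # value of the copyright after the recapture event
p = 0.6       # probability the author lives to exercise the right

# If death extinguishes the recapture right, the publisher keeps the
# post-recapture value with probability (1 - p). A publisher bidding
# competitively can therefore pay that expected value up front.
upfront_premium = (1 - p) * V

# Every author collects the premium now. Survivors also recapture V later.
survivor_total = upfront_premium + V   # the "slightly more" plus the right
decedent_total = upfront_premium       # the "slightly more" alone

print(upfront_premium)  # 40000.0
print(survivor_total)   # 140000.0
```

On these numbers, even the author who dies early walked away with 40,000 at signing that a no-death-risk world would not have given him, which is exactly why the possibility of death before recapture cannot leave authors as a class worse off.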
Landes and Posner’s own reasoning from the first part of the analysis shows why their last sentence is untrue. Their point is that making the recapture right inalienable deprives authors of some value. If death, however, bars the recapture right, then an author is in effect permitted to alienate the posthumous recapture right. By their reasoning, giving back that (partial) alienability should benefit authors. Trying to make that possibility into a further screwing over of the author is like trying to drink out of the second end of the straw.
Now, the analysis is slightly more complicated if the recapture right is devisable or otherwise does not vanish at death. (The United States renewal term, for example, passes under an unalterable quasi-intestacy regime to the author’s widow(er), children, and next of kin.) In this case, Landes and Posner are technically correct that the author “receive[s] no part of the value of the copyright that survive[s] that event,” since he was not compensated up front by the publisher for the recapture right and is dead by the time it is exercised. But in this case, the author leaves (or is forced to leave) the recapture right to his family, and so can count its value among the assets he gives to them. He might therefore reduce his voluntary devises to them by its expected net present value, leaving him more wealth to devise as he pleases or to spend during his lifetime. The reasoning is different, but the end result is similar.
The problem with inalienable recapture rights is that they are inalienable. Given that fact, introducing the possibility that the author might die before exercising them cannot hurt the economic interests of the author. It either makes things slightly better for authors (the first case above) or it has no net effect (the second).
Have I missed something here? Working through the reasoning has been enough to convince me. Then again, working through their reasoning seems to have been enough to convince Landes and Posner, so what does that really prove?
Microsoft announced this week its forthcoming iPod knockoff, the Zune. (The name alone may be enough to sink it in the marketplace.) The advertising pitch seems to be that your Zune will explode and set you on fire. The single big innovation seems to be wireless sharing: if you and another Zune user are nearby in meatspace, you can send them a music file, which will then play on their Zune. It’s like iPodjacking, but without wires, or like toothing, except that it involves music. (Like toothing, it has yet to be shown to exist.)
There’s a catch, though. (There’s always a catch with Microsoft.) Your Zune-enabled friend of convenience can only listen to the file three times, and must do so within three days. After that, the DRM in which Microsoft wraps everything that goes on a Zune will kick in and disable access. (Presumably, this is the price that Microsoft had to pay to get the music industry to go along with wireless sharing.) The Zunes involved will do this regardless of whether the song is copyrighted or whether it’s free for redistribution under, say, a Creative Commons license. In the words of one insider:
There currently isn’t a way to sniff out what you are sending, so we wrap it all up in DRM. We can’t tell if you are sending a song from a known band or your own home recording so we default to the safety of encoding.
This policy has, quite predictably, inspired some outrage. See the comments to that post, and also Medialoper and also BoingBoing. The common thread is that the Zune will “violate Creative Commons licenses,” which contain an anti-DRM clause:
You may not distribute, publicly display, publicly perform, or publicly digitally perform the Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement.
Let’s see how much meaning we can unpack from these various claims.
FIRST, Cory and others have pointed out that a Creative Commons license can be embedded in an MP3, so that our insider is wrong to say that the Zune can’t “sniff out what you are sending.” What he should have said is that while your Zune can sniff out licenses and determine the apparent licensing status of your music, your Zune can’t sniff out whether you are lying to it. If you have a whole pile of MP3s ripped from CDs or downloaded off the Internets, it’s easy to use CC’s own tools to embed wholly fraudulent CC licenses in them. CC’s descriptions of the process are quite open that seeing a CC license in a file doesn’t guarantee that the file really is so licensed; they have a clever protocol involving “linkbacks” to increase your certainty, but there’s no substitute for actual investigation. The Zune is just a portable MP3 player and can’t by itself carry out that kind of careful inspection.
SECOND, one might think that given this possibility of deception, Microsoft needed to use DRM to cripple music transfer, lest it have its pants sued off by the music industry. That was my reaction, too—what else would stop people from giving their friends complete copies of their entire collections, falsely labeled as CC-licensed? Then I realized that no, there are other forms of speed bumps besides DRM. How about a 25-song limit on streams to a particular Zune per day, and manual selection of a song at a time on the sending Zune? Other forms of technical restrictions are both complements to and substitutes for DRM. It seems possible to design technologies that are inconvenient for wholesale copying but that still let anyone share any particular piece of media they wish.
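The 25-song-per-day idea amounts to nothing more than a rate limiter, which is part of the point: such speed bumps are technically trivial. Here is a hypothetical sketch of one (the class, its names, and the limit are all invented for illustration; nothing here describes what Microsoft actually built):

```python
from datetime import date

class TransferLimiter:
    """Toy non-DRM speed bump: cap song sends per receiving device per day."""

    def __init__(self, daily_limit=25):
        self.daily_limit = daily_limit
        self._counts = {}  # (receiver_id, day) -> sends so far that day

    def allow_send(self, receiver_id, day=None):
        """Return True and record the send if under the day's limit."""
        key = (receiver_id, day or date.today())
        if self._counts.get(key, 0) >= self.daily_limit:
            return False
        self._counts[key] = self._counts.get(key, 0) + 1
        return True

limiter = TransferLimiter(daily_limit=25)
some_day = date(2006, 9, 18)
results = [limiter.allow_send("zune-42", some_day) for _ in range(26)]
print(results.count(True))   # 25: the 26th send is refused
```

A limiter like this never touches the file itself, so a CC-licensed song arrives unencumbered; the inconvenience falls only on the person trying to move a whole collection at once.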
I wouldn’t have advised Microsoft to stand and fight for unrestricted transfers between Zunes of files claiming to be legit. In today’s copyright climate, that’s flirting with disaster. But I would have encouraged their designers to come up with other ways of making Zunes unattractive choices for large-scale file-sharing that didn’t involve DRMing files that started out life un-DRMed and claim to be free for reuse. I think that there may be designs that would leave MS on fairly safe contributory infringement grounds without inflicting DRM on everything under the sun.
THIRD, it’s not clear to me that this design decision actually causes legal trouble for anyone. First, Microsoft is not, presumably, loading up these devices with CC-licensed media and streaming the files around. Thus, Microsoft hasn’t even passed the basic threshold for violating a license: having been a licensee in the first place. If anyone is violating the licenses here, it’s the users loading up CC files on Zunes and then sending them to friends along with some tasty DRM.
Trouble is, I’m not sure that a CC licensor has a case against users who do just that. The process of placing a file on a Zune is not “distribut[ing], publicly display[ing], publicly perform[ing], or publicly digitally perform[ing] the Work,” so it is explicitly allowed by the license. (It’s also a fair use.) That leaves the act of sending it to a Zune-playing friend. In almost all cases, that’s a private, non-commercial copy that cannot substitute for any market for the original. In other words, we are in one of the heartlands of traditional fair use. For the same reason that users don’t need the permission of the RIAA to allow these restricted Zune-to-Zune transfers, they don’t need the permission of Creative Commons licensors for them. The use is fair.
Can you imagine a court holding that an end user with a CC-licensed song and a Zune is a copyright infringer because she allowed a friend to listen to the song Zune-to-Zune? I didn’t think so. That’s what it means to say that the use violates the license—the user becomes an infringer. I can see that perhaps someone using Zunes to rent music for a few days, or for some commercial process where they keep the music on a tether, might well not be within the terms of the license and could be subject to suit to force them to open up. But Joe and Jane Zune? Given that unlikeliness, I don’t think that anyone could go after Microsoft as a contributory infringer. Once again, if the RIAA couldn’t (and I don’t think they could), I don’t see how a CC licensor could, either.
FOURTH, this scenario raises a long-standing and important question: How do Creative Commons licenses interact with fair use? Speaking roughly, on the one hand, the use of a CC license signals that the licensor has an attitude of openness and sharing and has signaled that she does not regard each and every of her exclusive rights as essential to her economic advantage. Therefore, the license should be liberally construed and fair use treated broadly to effectuate her purpose of openness and avoid hidden pitfalls. On the other hand, perhaps by choosing a CC license, the licensor has made a bargain that this is as far as she goes, but no further. On this view, fair use should be narrowed, because the licensor has already been quite generous in other ways.
The tension here is that CC licenses are neither an abdication of all rights nor a greedy enforcement of every last one. They are a sensible middle way. A set of copyright doctrines that pushed too far towards openness or towards exclusivity when there is a CC license in the picture could result in undermining the purposes of the license. How this will and should play out in practice remains an indeterminate question. Some good scholarship on the interpretation of CC licenses is starting to appear, but there is a lot of work to be done. (Particularly notable: Lydia Pallas Loren’s Building a Reliable Semicommons of Creative Works.) I would note, also, that Creative Commons does not offer a “private use only” option on its licenses, perhaps because such an option would cut too close to a “fair use only” restriction. They’ve been quite careful about trying not to restrict the scope of individual fair use rights by accident.
FIFTH, the DRM clause itself is a problematic one for Creative Commons. The idea behind it is clear enough. DRM can eliminate the practical usefulness of a CC license—yes, you may be licensed to redistribute the work and make changes, too bad the DRM won’t let you. DRM is also philosophically troubling for many people who firmly believe in the Creative Commons philosophy of respectful and voluntary sharing.
But actually including a DRM clause causes some issues. First, and perhaps least essentially, I think the drafting of the actual current DRM clause was a disaster. A decade of DMCA caselaw has given us something of a sense of what to expect from the terms of art in “technological measures that control access or use of the Work.” But what on earth does “in a manner inconsistent with the terms of this License Agreement” mean? That language is ambiguous on its face; it could mean either that the DRM forces a downstream user actually to violate the license, or that the DRM gives the downstream user fewer freedoms than the license itself would give. Both readings cause problems in practice; the former can be too slow to kick in, the latter too quick. “Inconsistent” is the troublesome word. It’s easy to describe a legal restriction as being “inconsistent” with a license; it’s easy to describe a particular use as being “inconsistent” with a license. But in what sense is a technical restriction “inconsistent” with a license? That should have been spelled out or rewritten.
Second, a DRM clause is more urgently needed for some licenses than for others. As long as we’re just talking about the original file, that one person has locked it up with DRM is irrelevant if it’s available quite easily from other sources. Only if one DRM-loving party has become dominant enough that it’s their way or Copyright’s Highway does the DRM become a serious issue. With derivative works under the ShareAlike versions of CC licenses, on the other hand, the DRM clause becomes regularly significant. The fear here is “appropriation”—someone will create a derivative work and then lock that work up with DRM, so that anyone at all who wants the new and improved version has effectively lost the benefit of the CC license. The drafters of the GPL, whose copyleft properties inspired the ShareAlike license option, have been attempting to insert an anti-DRM clause for exactly that reason.
But there is a tough question lurking here. Anti-DRM clauses are in one significant sense quite antithetical to the purposes of copyleft and other licenses that aspire to be Free (see also). They create a serious restriction on the rights of individuals to make use of the content in all sorts of ways. For this very reason, the current Creative Commons licenses are considered by many Debian contributors not to be Free.
Creative Commons is indeed redrafting the anti-DRM clause to address these concerns. (For reasons with which I strongly disagree, Creative Commons’s international affiliates have put a hold on this change.) The proposed revision would add a “parallel distribution” clause. En-DRMed uses would not violate the license if the offeror of the DRM-ed version also offered an un-DRMed version in a reasonably accessible fashion. I would note that this revision would not, by itself, save the Zune users (a fact that suggests to me that the redraft requires further redrafting), but it would validate other “inadvertent” uses of DRM in which the user was properly conscientious about allowing others to make use of their full CC-granted rights.
In summary, then, the problems posed by the Zune are both deeper and subtler than headlines “Zune Violates Creative Commons License” would suggest. For those interested in the issues, Seth Schoen’s GPL v3 and trusted computing is essential reading.
Lest everything here seem to be frivolity and light, the next few days are critical call-your-senators time. The moral future of this country hangs in the balance. President Bush has been pushing strongly to convince Congress to pass a bill that would authorize waterboarding, induced hypothermia, prolonged physical stress combined with sleep deprivation, and other forms of torture. The administration prefers to call them “alternative procedures,” a euphemism that will soon, like “ethnic cleansing,” become a common term of opprobrium for the practice it sought to whitewash. But torture they are.
Moral superiority is civilization’s best form of resistance to terrorism, but it only works if we are morally upright, not if we merely claim to be. If the president’s bill becomes law, we will not be men but monsters.
Senator McCain has offered an alternative bill that is only marginally better. It would purport to outlaw such techniques and to regularize the system of military commissions into something slightly less of a kangaroo court. But, like the president’s bill, it would attempt to end any practical hope of judicial review for detainees. Judicial review brought us the possibility of taking a moral stand; it has allowed those tortured and convicted without rational process to ask that the United States obey its own laws. Without judicial review, we have only the good faith of the executive to assure us that torture is not the routine practice of the United States government. That same good faith brought us secret CIA prisons and Abu Ghraib.
In short, the McCain “alternative” will be, in this administration’s hands, just as much of an empty promise as the administration’s own bill. The hypocrisy will be one step further removed from public consciousness, but it will remain.
The House has approved the administration’s bill, without apparent thought or qualm. Only the Senate remains between the United States and moral disaster. Call your senators and ask them to do all they can to oppose both the administration bill and its putative “alternative.”
I found the following tidbit buried in a news story about the HP pretexting scandal:
Hewlett-Packard Chairwoman Patricia Dunn took the fall Tuesday after admitting she authorized an investigation that relied on “inappropriate techniques” to uncover who was leaking boardroom secrets to the media.
CEO Mark Hurd, who has the respect of Wall Street and is untainted by the investigation at the Palo Alto-based computer and printer maker, will take over, vowing that the probe’s methods “have no place in HP.” HP’s stock rose to a 52-week high.
It makes perfect sense that a company’s stock might partially rebound from a scandal-induced loss if the company takes firm steps to deal with the wrongdoers, recognizes the wrongdoing, and clearly signals that it will be conforming to a higher standard of behavior from now on. But why would Wall Street value HP more highly than it did before it even knew about the scandal? Some possibilities:
The Street regarded Dunn as ineffectual and is now celebrating her ouster, whatever the reasons.
There’s no such thing as bad publicity; once the specific problems have been dealt with, being in the news is in itself a good thing for HP.
Wall Street already knew about the pretexting before the news broke publicly, and so had already factored the bad news into the stock price. Only the good half of the news was a surprise.
Something else independently happened to make HP more valuable.
Herding behavior is overwhelming rational responses.
Or perhaps, the best answer is “none of the above.” A quick glance at the stock chart for HPQ shows that that “52-week high” was followed by a fifty-cent dropoff in the afternoon; when trading opened the next morning, HPQ was trading almost a full dollar lower.
When I arrived for the start of the semester, I discovered that while the law school and university were expecting me, I didn’t exist in most of the relevant computer systems. The next few days proved to be a remarkable exercise in the old run-around. As best I can remember, they went something like this …
Registrar: Tells me to start with the business office.
Business office: Tells me I need to wait on central IT to put me “into the system.” ID card and library privileges will have to wait on that bottleneck. Tells me to go to the institute secretary for keytag and carrel assignment. Tells me to try law school IT for email and network access.
Secretary: Tells me to go to the institute director for keytag and carrel.
Institute director: Tells me to talk to the library for carrel. As for keytag, walks with me over to …
Building facilities: Gives me keytag. We go back to the institute office, where my keytag doesn’t work. So we tromp back to …
Building facilities: Gives me replacement keytag. This one works! Now let me see about the email.
Law school IT: Has no record of me, says I have no network ID. Says I’ll need one from Central IT. Says central IT is implacable, works on its own pace, and cannot be rushed.
Business office: Agrees, says that central IT is the bottleneck. Dispirited, I then run into …
Roommate working at computer help desk: Suggests I talk to director of law school IT, rather than just person at front desk.
Director of law school IT: Agrees that Central IT is the bottleneck, but knows who to call. Tells me that Central IT will be able to get back to me by the next day. In the meantime …
Business office: Gives me handwritten note on back of business card authorizing me to enter library. With this in hand, I go to …
Library administration: Has no record of me and therefore cannot assign me carrel.
Institute director: Says that library should totally have record of me and should give me a carrel. He calls …
Associate dean: Says that library should totally have record of me and should give me a carrel.
That’s as far as I got on Day 1.
Central IT: Emails me (at my non-university email address) that my network ID is ready. Armed with network ID, I am quickly able to activate email and network access. Score! Time to try again with the ID card. How about, oh, I don’t know, …
Business office: Directs me to university-wide ID office.
ID office: Says I need an authorization form from law school registrar.
Registrar: Directs me to business office.
Business office: Says that ID office is incorrect, that no form is needed. Calls ID office to explain said incorrectness.
ID office: Issues me ID card. But that’s just university-wide. I still need to take care of one more detail:
Business office: Issues me a current-registration sticker confirming that I’m affiliated with the law school. Thanks to the ID card, I can now get into the library under my own steam. Time to take care of some outstanding business there. First, let’s try to check out some books:
Checkout machine: Scans ID card. Informs me that there is an issue with my card and I should go to the circulation desk.
Circulation desk: Says I need to return during business hours, when desk is staffed by librarians, rather than work-study students.
And that was as much as I could take care of on Day 2.
Circulation desk: Tries to fix borrowing privileges. Cannot. Summons senior librarian, who fixes borrowing privileges.
Checkout machine: Scans ID card; allows me to check out books.
Library administration: Has record of me, tells me to wait until middle of month for carrel assignment, once student carrels are taken care of.
Associate dean (several hours later): Says that library should have given me carrel and that he has already spoken to them.
Library administration: Assigns me carrel.
Online library system: Will not let me request books from other university libraries. Informs me that there is a hold on my account. Sends me to …
Main library privileges office: Inspects my account, removes hold.
And that has been my odyssey through the university bureaucracy. Throughout it, every single person I dealt with was unfailingly polite and genuinely trying to be helpful. We were just all caught up in the bureaucratic madness. I’m hoping that getting my paychecks won’t require similar exercises in orienteering through the underworld.
Assuring a computer’s software configuration is also a notoriously difficult problem, and research has focused on mechanisms to ensure that only approved code can boot or that a machine can prove to a remote observer that it is running certain code. For example, commercial systems such as Microsoft’s Xbox game console have incorporated mechanisms to try to resist modification of the boot code or operating system, but they have not been entirely successful. Although mechanisms of this type are imperfect and remain subjects of active research, they seem appropriate for voting machines because they offer some level of assurance against malicious code injection. It is somewhat discouraging to see voting machine designers spend much less effort on this issue than game console designers.
—Ariel J. Feldman, J. Alex Halderman, and Edward W. Felten, Security Analysis of the Diebold AccuVote-TS Voting Machine
Note: The exact location of the house will only be revealed to serious, pre-screened, and financially pre-qualified prospective buyers at an appropriate time. The owner believes that keeping the exact location secret to the general public is an important part of the home’s security.
Suppose that Alice learns the location of the house. Suppose further that Alice posts a blog entry with a link to a satellite photo showing the house. Discuss Alice’s potential tort liability, if any, to the owner of the house. Does Streisand v. Adelman apply? Does your answer depend on the manner in which Alice learned of the location of the house? Would the answer change if Alice used a different means of revealing the location (e.g., by tax lot number)? Does the answer depend on whether the security of the home actually depends on its being a secure undisclosed location, or merely on whether potential buyers think that having an undisclosed location makes it more secure?
My take, for what it is worth, is that the owner may well have had some protectable privacy interests in details of the security systems, such as the thickness of the walls, the cameras, the redundant filtration systems, the safe room, the grounding rods, and so on. But revealing all of those details, together with pictures of the house, and then asking the law to keep secret the one remaining detail that would allow others to link up those details with an actual house seems like it would be pushing one’s luck. Put another way, fully recouping one’s investment in security systems and keeping those security systems secret are not always easily reconciled goals.
I’ve been reading John Markoff’s What the Dormouse Said. The subtitle is “How the 60s Counterculture Shaped the Personal Computer” and it’s about the computer communities at and near Stanford that pushed relentlessly on the interface innovations and use models that came to dominate personal computing. It’s quite interesting.
Let me say up front that I don’t actually believe Markoff’s thesis, insofar as he asserts that these researchers (principally Doug Engelbart’s SRI group that developed the mouse) and hobbyists (principally the Homebrew Computer Club (warning! sub-par Wikipedia entry, but the links are good and collected in one place)) actually drove the invention of the “personal computer.” As Paul Ceruzzi’s somewhat drier A History of Modern Computing argues, the first “personal” computers were also the products of the frontiers opened by minicomputer revolution. Markoff, by and large, conflates interface innovations (he’s big into the Engelbart-Xerox PARC-Apple story, and largely ignores the role of the command line in popular computing) with the power of having a computer all to yourself.
That gripe aside, it’s fascinating. In part, I enjoy any well-told yarns about the Elder Gods of computing. The sixties and seventies gave us some incredible advances in computing and it makes my heart sing to read of how it was done.
In greater part, though, I’m enjoying discovering just how deeply weird that era in that part of California was. We’re not talking about the usual media images of hippies at music festivals; we’re talking about the deep interpenetration of the straightlaced and the psychedelic. We’re talking engineers taking LSD together at work in hopes of achieving groundbreaking technical insights. We’re talking Doug Engelbart attempting to make his research group run more effectively with encounter groups. We’re talking Stewart Brand attempting—and nearly failing—to give away $20,000 at a party.
I’m not sure whether it changes my views of modern computing to know that some of its pioneers thought computers would augment human consciousness in exactly the same way that LSD, meditation, free love, and wholly self-directed education would. But I’m enjoying finding out how they did.
I’m a sucker for candy innovations, no matter how slight. I can be persuaded to try almost any candy once. Today, it was Garfield’s Chocobites, brought to you by the American branch of the Argentine confectioner Arcor.
They must have spent all of their budget on the celebrity endorsement (“Garfield Approved Limited Edition”), because they don’t seem to have bothered to do any product design. I have never seen a more blatant M&M imitation. They’re marginally thinner; the colors include one hideous non-M&M pinkish-brown; the candy shell is more fragile; the mouth feel is off—but otherwise, they’re M&Ms.
Which got me wondering—would Mars have a case? Most of the things that struck me as M&M-like are functional and hence unprotectable. The function of candy is to taste good. Candy shells around chocolate might even be doubly unprotectable as generic. The fringe details—getting the flavor of the chocolate exactly the same—while more specific, still have such strongly functional aspects that it’s hard to see any room for protectable trade dress.
The colors strike me as Mars’s best fact. The electric blue and disquieting red were spot-on (at least under the poor lighting in the shuttle where I snacked on them). Color can be a trademark now, provided it has acquired secondary meaning (see Qualitex). Despite Mars’s attempts to undermine the characteristically distinct M&M colors with their endless awful mucking about with the product design (Giant green Shrek M&Ms! Metallic Star Wars M&Ms! Black-and-white M&Ms!), I’d argue that if there’s strong secondary meaning in a color anywhere, it’d be in the M&M trade dress. (How many forms of trade dress are the subject of urban legends?) The size and shape are a closer matter; they’re not identical, but they’re less different than on other M&M knockoffs I’ve seen.
Candy trade dress litigation tends to focus more on the design of product packaging. Whetstone Candy Co. v. Kraft Foods, Inc., 351 F.3d 1067 (11th Cir. 2003) involved (along with some less candy-themed issues) the product design of two competing chocolate orange products. Wrigley and Cadbury have crossed chewing gum sticks over the design of Trident and Dentyne packaging. And the candy shell has even been on the other foot; in Hershey Foods Corp. v. Mars, Inc., 998 F. Supp. 500 (M.D. Pa. 1998), Hershey claimed that peanut butter M&Ms were using the same orange package color associated with Reese’s Pieces.
(Okay, I know you’re dying to know what happened. Held: the color orange is not functional in signaling the presence of peanut butter inside. Held: the design of Reese’s Piece packaging is not famous. (!) Held: the M&M design did not dilute the Reese’s trade dress. The full opinion involves one six-factor test and one eight-factor test. It’s denser than a bag of molten chocolate candy.)
Though rarer, there have been some candy trade dress suits based on the actual design of the candies. Malaco Leaf, AB v. Promotion in Motion, 287 F. Supp. 2d 355 (S.D.N.Y. 2003) involved the fish-shaped design of two competing forms of Swedish fish. To no one’s surprise, red and fish-shaped has become the generic design for, well, Swedish fish. As a trademark, SWEDISH FISH is also weak. Not because they’re actually fish (-shaped candies) from Sweden, but because “Swedish fish” primarily describes the candies we all know and love. Ah, language.
See alsos include Topps Co. v. Gerrit J. Verburg Co., 41 U.S.P.Q.2D (BNA) 1412 (S.D.N.Y. 1996) (Ring Pops not functional) and Nabisco Brands, Inc., 772 F. Supp. 1287 (M.D.N.C. 1989) (Life Savers shape a strong trademark). I have also heard tell of a lawsuit involving the “size, shape, coloring and speckling” of jelly beans. So there is plenty of precedent for a lawsuit based on candy design.
And now for the punch line. Mars has already once sued Arcor and won an injunction for infringements on the trade dress of M&Ms. See Masterfoods USA v. Arcor USA, Inc., 230 F. Supp. 2d 302 (W.D.N.Y. 2002). There, Arcor introduced a candy called “Rocklets” in packaging that the district court found mighty similar to the M&M packaging.
The Court finds that the layout, colors and sizes of the packages are similar to one another, except, of course, for the m&m’s(R) on Mars’ and the ROCKLETS and Arcor names on Arcor’s. The use of the slanted name in contrasting colors, the lozenge, the appearance of shiny multi-colored lentils and the brown color of the plain, and yellow color of the peanut, packages, are similar upon first glance.
The design of the competing candies themselves—the factor on which I think Arcor may be in the most trouble—was of course replicated in the design of the bag in which they came. Arcor has clearly taken the lesson to heart; Garfield’s Chocobites come in a black bag, and the candies themselves are highly stylized, looking more like abstract discs of color than like candies.
Which, of course, is why I bought them—I thought they’d be something new and different, not just another M&M knockoff. The bag made them look like something else than what they were.
Talk about your consumer confusion.
As part of my research for one tiny corner of my work in progress on search engines, I’ve been reading deep linking cases and commentary this afternoon. Per Wikipedia, deep linking is “the act of placing on a Web page a hyperlink that points to a specific page or image within another website, as opposed to that website’s main or home page.” Search engines care about deep linking because if content providers have a right to prohibit it, they can use that right, effectively, to prevent search engines from listing their content. Site owners would like the control over users and branding that a deep linking right would give them; open access advocates claim that that right goes against the open linking culture of the Internet and would stifle competition and innovation. It all seems like significant stuff.
I’m finding, interestingly, that the American legal world has more or less forgotten about deep linking. After a few high-profile cases in the late 1990s and early 2000s, and a veritable eruption of scholarship, deep linking issues have basically dropped off the radar screen. There have been some student notes on the issue in the last two or three years, but otherwise, few people are making a stink one way or the other.
This is interesting, first of all, in that the issue is very much not dead elsewhere in the world. Deep linking cases have been actively litigated in Denmark and India within the past year, with the respective courts reaching opposite results. Given that the questions involved have never really been definitively resolved in this country, the relative American silence is not for lack of legal opportunity.
I also find it interesting in that the relative age of the scholarship here is telling, when you look around at what people are actually doing about deep linking. On the one hand, there are entire business models being built around web applications for which deep linking would be devastating. To take one example, chosen more or less at random, consider SendSpace, which allows you to upload files and send the links to friends. They make their money showing advertising. That means going through a full page, not a page artfully stripped to have just the link to the file. Four fifths of the existing deep linking scholarship thinks in terms of content providers protecting their content from direct competitors, or their brand from dilution, not quite in terms of providers making sure that user eyeballs are properly monetized.
The range of anti-deep linking technologies available today is also considerably more extensive than formerly. After an age of simplification, during which designers abandoned frames and opted for simpler designs with more standardized layouts, we’re going back into page designs that have non-trivial state and which can’t easily be wrapped up in a single URL. Dynamic, AJAXy pages are by nature resistant to deep linking. You can’t easily create a deep link to anything that doesn’t have a permanent link in the first place.
Moreover, the techies have plenty of fairly simple ways to nullify deep links. A basic paywall—only registered users beyond this point—forces users to go through your processes. Even if the crawlers can index your content, when everyone has to register to see what you have, you control the horizontal and the vertical. (Well, modulo BugMeNot, that is, though I wonder how long it will last.)
Or, in one of my favorite tricks, site owners can simply check referrers. Pages and images should only be loaded from particular other pages on your own site. You can either block requests that don’t go through proper channels, display a snide message instead, or show something that the user really didn’t want to see. This guy did it with style. When Fuddruckers deep-linked to a Burger Time clone flash game he’d made, he redirected the traffic to a page with pictures of a slaughterhouse. (For fun, ask yourself about possible theories of liability here, being sure not to overlook the relevance of the probably unauthorized clone of Burger Time.)
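The referrer-checking trick is simple enough to sketch in a few lines. This is a minimal illustration only, not anyone’s production code; the hostnames and function name are hypothetical, and real sites often do this check in the web server configuration rather than in application code:

```python
from urllib.parse import urlparse

# Hypothetical: the hosts that count as "your own site."
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def referrer_allowed(referer_header):
    """Return True if the request's Referer points back to our own site.

    An empty Referer is treated as disallowed in this sketch; many sites
    allow it, since browsers and proxies sometimes strip the header.
    """
    if not referer_header:
        return False
    host = urlparse(referer_header).hostname
    return host in ALLOWED_HOSTS

# A request arriving via one of our own pages passes the check;
# a deep link from elsewhere fails and can be redirected to a snide
# message (or a slaughterhouse, if you're feeling vindictive).
referrer_allowed("http://www.example.com/games.html")   # → True
referrer_allowed("http://fuddruckers.example/promo")    # → False
```

The caveat, as the sketch’s docstring notes, is that the Referer header is voluntary: some browsers, proxies, and privacy tools omit it, so a strict check will also block some legitimate visitors.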
The glib answer, then, would be that site owners don’t need deep linking liability because they have other effective routes to the same end (both legal and technical) and that scholars have sensibly ignored a now-irrelevant legal issue. This may or may not be the case, but I think it has, at the least, taken some urgency out of the deep linking wars.
This is all by way of an aside. I don’t at the moment have a good hook on which to hang these thoughts in my search engines piece. I just thought I’d get out in the open something that I found a bit unusual in looking through the case law.
In Fitzgerald, plaintiff claimed published statements about his purported sale of top secret marine mammal weapons systems to other countries was libelous.
From Al-Haramain Islamic Foundation, Inc. v. Bush, No. 06-271-KI (D. Or. Sept. 7, 2006). “Fitzgerald” refers to Fitzgerald v. Penthouse, 776 F.2d 1236 (4th Cir. 1985), and yes, it does discuss dolphins with frickin sensors attached to their heads.
I had never expected to see the words “top secret marine mammal weapons systems” and “libelous” in the same sentence.
I have a slightly idiosyncratic interest in how people did business in the past. This interest has manifested itself in, for example, the great pleasure I took in the financial sections of Braudel. I happen to think that Swift v. Tyson is as interesting as Erie R.R. v. Tompkins.
I’ve been reading a paper that comes closer than anything else I’ve seen to justifying that interest: James Steven Rogers’s The New Old Law of Electronic Money. He argues not just that the 19th-century law of private bank notes offers useful analogies for thinking about electronic cash, but that the 19th-century law of private bank notes is directly applicable to electronic cash, thanks to the miracle of stare decisis.
I’m no expert in the law of payment systems (though I’m trying to become more so), but Rogers’s thesis strikes me as well-argued. It’s certainly a well-written article. I’ve interrupted Aislinn’s cardiology studies twice now by reading aloud passages. Traditional legal scholarship may be under attack, but we have here an excellent exemplar of the genre—comprehensive, clear, detailed, and accessible to a reader with almost no background in the subject. It’s by no means a complete introduction to the field, but even the non-legal among you might find it interesting. Greyhame, I’m thinking of you.
I have a new favorite phrase. A legal theory is an “eleven and a half” when it isn’t good enough to survive a Rule 12 motion to dismiss but isn’t so harebrained that it could result in Rule 11 sanctions for the attorney bringing it.
Sample usage: “The contract claim might be going somewhere, but the intentional infliction of emotional distress claim is a real eleven-and-a-half.”
At the request of readers (well, okay, at the request of a reader), I’ve added an Atom feed of recent comments.
When you put it all together, the story become clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it. In addition, insiders rack up thousands of edits doing things like changing the name of a category across the entire site — the kind of thing only insiders deeply care about. As a result, insiders account for the vast majority of the edits. But it’s the outsiders who provide nearly all of the content.
And when you think about it, this makes perfect sense.
Aaron Swartz, Who Writes Wikipedia?
I received the snottiest invitation I’ve ever seen yesterday. I was asked to join CommonRoom, an “invitation-only network” for affiliates of a few swollen-head universities “and corporate clients.” I poked around, and, well, I’m sorry that I inflated their membership numbers through my curiosity. The site is a mixture of hubris and bad decisions (the two being perhaps related), and I can think of no better way to welcome CommonRoom to the Internet than to MST their press release.
For Immediate Release
Where’s your toilet? I feel an immediate release coming on.
Threat to Google and Yahoo: Harvard & Stanford Users Turn To Safer Waters
Google and Yahoo can breathe easy. A social networking site is all but irrelevant to Google and Yahoo’s core business of information organization. And a closed-world social networking site is even less of a threat, since Harvard and Stanford students and affiliates make up an all but infinitesimal fraction of their user base.
MOUNTAIN VIEW, CA — August 24, 2006 — September 1, 2006 could mark the end of the internet as we know it.
Could, but apparently didn’t. It’s September 3, 2006 as I write this. Those of you who’ve been waiting on rooftops for the end of the Internet can come back downstairs and plug your laptops in again.
With the launch of CommonRoom (http://www.commonroom.com), a closed network for Harvard and Stanford students and alumni, some of the most educated and well-connected users on the internet will no longer have to face the unending tide of spam and malware that plague most mailboxes and web sites.
The irony of writing “closed network” and “well-connected” in the same sentence is apparently lost on the CommonRoom team. Whether or not Metcalfe’s Law is true in its full n-squared glory, closing off a network generally reduces its value. The value of networking is in the connections. A few particularly valuable contacts for Harvard alums may also be other Harvard alums, but the ones who only talk to Harvard alums are the ones who wind up plaintively drinking martinis in mid-afternoon as they wait for the phone to ring and count the days until reunion.
And as for the “unending tide of spam and malware,” well, don’t get me started on the way that the CommonRoom invitations appear to be flooding out. Perhaps I’m still gloating about my switch to a Mac, but I haven’t exactly been worrying about malware lately.
CommonRoom users will also have a new place to search for information, free from the clutter of the World Wide Web.
Some of us like the clutter of the World Wide Web. It’s only the most comprehensive and egalitarian medium ever invented.
The system will feature its own keywords, sponsored by its corporate users.
Umm, this is a feature? I’m not even sure what it means to “sponsor” a keyword. Can I not search on one unless I find a sponsor? Or are certain keywords flashed up on the screen, whether you want them or not? “MESOTHELIOMA! Brought to you by Barrett, Green, Wegner, and Prajapati, for all your asbestos lawsuit needs.”
From my exploration of the site, it appears that they feature horizontally scrolling text ads at the bottom of the screen. Major demerits for revisiting a technology that was universally reviled as irritating when it was invented. Perhaps they mean that the scrolling ads will be customized based on appropriate keywords. Oh, for joy. A degree from Stanford and you’re wasting your life programming scrolling ads.
At least, I’m assuming that the folks behind this impending train wreck consist of Harvard and Stanford alums. I can’t think of anyone else who would think that this was a good moneymaking prospect. There’s a particularly smug and elitist form of Kool-Aid involved.
Companies can look forward to sending e-mail on corporate letterhead,
Because sending e-mail on corporate letterhead is otherwise impossible, given the current design of the Internet.
marketing to other CommonRoom users,
This sound suspiciously like a business model from the first dot-com bubble. We’ll make billions selling banner ads to each other!
receiving valuable focus group-type feedback from their customers,
Focus group “type?”
and managing crucial internal resources, such as calendars and files, without the need for expensive servers.
I cannot figure out how the business model is supposed to work here, unless companies have some kind of CommonRoom access that is separate and apart from the Ivy League pedigree part of the site. Perhaps there’s an outsourced-IT aspect to it—you can keep calendars and files in a hosted application—and the gimmick is that the same backend also lets you do some marketing to some allegedly influential and desirable customers. This does not strike me as a particularly valuable synergy. Perhaps CommonRoom will be better at managing corporate email and calendars than at offering consumer Internet social applications, but if the former is the real profit center, then the latter seems like a distraction, at best.
CommonRoom is, in effect, its own internet within the internet, built from the ground up with advanced, highly integrated features that keep people and information safe and secure.
Oh, you mean like AOL?
“We wanted people to be able to trust one another for once,” said Aaron Greenspan, Think’s President and CEO.
Dude, I couldn’t tell you how many Harvard and Stanford people there are whom I don’t trust. Based on what I’ve seen of CommonRoom, Aaron Greenspan (Harvard 2004) may belong on the list.
“The original designers of the internet simply assumed that all users would be trustworthy, and skipped many of the checks and balances that we use in the real world.
Their decision not to hard-wire security to the basic layers of the Internet was, arguably, one of the great strokes of genius that have made the Internet so phenomenally successful. It’s called the end-to-end principle. Go look it up. There are appropriate and useful ways to add security back in. These ways include layering secure applications on top of insecure transport layers and establishing partially closed networks that connect to the public Internet only in carefully monitored ways. CommonRoom appears to use some of these techniques. None of which make the basic design decisions of the Internet a matter of ill-advised faulty assumptions.
In response, we’ve combined Think’s security knowledge with an unprecedented network effect of features from our research products.
This is a foul perversion of the concept of network effect. Network effects involve products or services that are more valuable to a customer the more other customers there are. There are terms for the economies enjoyed by producers who reuse their experience building one product to build another, or who combine several products into a more valuable whole. “Network effect” is not such a term. For a company whose business model involves repudiating the network effect of the Internet, this use of language is galling.
There are increasing returns to scale from integration in CommonRoom that you simply can’t find anywhere else.”
Unless, perhaps, you look for them. Google, Yahoo, and Microsoft come to mind.
Fortunately, we’re now at the end of the extended quotation from CommonRoom’s econ-major CEO; the misuse of economics jargon should now drop back to its normal background radiation level.
Among its many features, CommonRoom allows members to send and receive spam-free e-mail;
This assertion deserves close scrutiny. Like many other social networks, CommonRoom allows users to send each other messages internally. Many other social networks have had significant spam problems. I wish CommonRoom better luck. I doubt that their choice of user populations will do much good; perhaps the smaller overall size of their userbase will.
CommonRoom also features a hilariously broken feature to send email to a regular old user of the Internet at large. You can enter an Internet-style email address and compose a message (in a window thirteen lines tall with fifty-five characters per line). Your recipient receives, however, not your message, but an invitation to join CommonRoom. It would appear that you are meant to send messages to your non-Ivy buddies and then laugh uproariously as they attempt to sign up for a site that will not let them join without a stanford.edu or harvard.edu email address.
And, oh yes, the site proudly explains that to keep CommonRoom spam-free, you can send messages out but not have messages sent in. Thus, even if they were actually to deliver the messages you send to your Internet friends, your friends would be unable to reply.
When it comes to designing a communications medium, you can have useful or spam-free. Pick one.
buy, sell and trade books and items; plan events and courses; review companies and professors; and share information with each other in the form of blogs or electronic published works.
It’s just like the Web, except you have to wait for a central authority to sign off on each form of commerce and information-sharing. I can’t wait.
In addition, CommonRoom solves many of the privacy problems that social network users face, by allowing people to keep multiple, separate profiles for school, work and family.
I’m sorry, I’ve been laughing so hard that I’m having trouble breathing.
Only someone who doesn’t understand the nature and scale of the privacy problems faced by social network users could write that sentence. Separate profiles will do little, if anything, to prevent:
* Large-scale data-mining
* Government demands for user data
* Corporations asking recent graduates to look at the school profiles of potential recruits
* Copying of profile information into public settings
* Data exposure caused by bad security practices
* People revealing sensitive information about others
The system will be invitation-only, seeded by Think’s exclusive networks of Harvard students and alumni in Europe, China, San Francisco, Los Angeles and the Middle East.
I guess I was lucky to be invited, given that I’m not in any of these places. Perhaps I should be more grateful to my social networking benefactors.
The September launch will also mark Think’s first attempt to expand its reach beyond Harvard,
to include Stanford affiliates.
Prior to CommonRoom, Think depended on technology from internet giant Yahoo to maintain its contact networks.
News Flash! Yahoo threatened by plagiarized imitation of Yahoo technology!
The CommonRoom software is based upon several Think research projects, including Inbox Island
That’s www.inboxisland.com, which is, as of this writing, non-functional. It was (or perhaps is) an attempt to do web-based email without SMTP. The centralization helps with anti-spam efforts. It also undermines what is perhaps SMTP’s greatest strength—its decentralization. Note that while InboxIsland’s site is down, you’re completely and utterly hosed. No email, sorry.
and the revolutionary web-based houseSYSTEM technology, which started the on-line face book craze at Harvard University in 2003.
The 2003-04 facebook craze at Harvard is actually a matter of some historical dispute. houseSYSTEM competed with the now better-known Facebook. Some of the details can be found in Greenspan’s self-published e-book Authoritas, about his Harvard years. Chapter 41 has a fair amount on the Facebook/houseSYSTEM fracas.
Greenspan developed houseSYSTEM there before graduating early.
One of the great roles of college is in the formation of character. Although early graduation makes great sense for students for whom college is a financial burden or for whom a life-changing opportunity is impending, my general sense is that most students are better-served by the increased maturity that another year of college brings, even if they may not feel that they are learning particularly much in their classes.
Product trailers for CommonRoom are available on-line through September 1, 2006 at http://www.commonroom.com. Corporate customers and advertisers can find pricing information and rates on-line at: http://www.thinkcomputer.com/software/commonroom/index.html
The scrolling ticker at the bottom of the screen starts at $999.00.
I’ve been a little harsh in my words, I must admit. There isn’t much inherently wrong with CommonRoom. I think it’s a me-too social networking site entering a painfully crowded space and offering no significant innovation to differentiate itself. I expect it to fail, just as most of its competitors will. These are flush days—a second bubble, if you will—and a lot of questionable applications and services are being launched. CommonRoom is no more offensive than most. Those who understand how to work with the forces that make the Web thrive may prosper; those who swim against the current almost certainly won’t.
No, what gets my dander up is the pretension involved. Yes, the networking is part of what makes your Stanford degree so valuable. But most of that networking is implicit. It comes out that you and a colleague both went there, so you fall to talking about professors and sports. One generally doesn’t go out looking solely to hire Harvard students (except perhaps for certain Wall Street firms, but that madness is another story). The degree is a recommendation, but it’s not the first or even the primary basis on which people develop contacts.
Selling a social networking service to Ivy League college students is one thing; like college students everywhere, they have rich social lives in a fairly well-defined universe. They may well appreciate school-specific customization. But selling a general-purpose Internet-replacement social networking service to Ivy League graduates is straining the concept past its reasonable limits. Even Ivy League dating services set off the weird alarms for a lot of people. A separate Ivy League Internet would be off the nuttiness charts.
Yes, there is a camaraderie among graduates. But the Internet is one of the great universalizing, democratizing forces of our age. To think that we should turn our back on its values in favor of school spirit, to think that school spirit trumps the Internet … that’s the kind of snobbery that inspires people to crack Harvard jokes.