This is an archive page. What you are looking at was posted sometime between 2000 and 2014. For more recent material, see the main blog at http://laboratorium.net
Life . . . is not a symbol, is not one riddle and one failure to guess it, is not to inhabit one face alone or to be given up after one losing throw of the dice; but is to be, however inadequately, emptily, hopelessly into the city's iron heart, endured.
-- John Fowles
The background here is that the kinds of things economists want to measure in the wild -- say, wages, unemployment, spending patterns, crime, or any number of important trends -- are changing all the time. If a few states alter their welfare laws in a particular way, you can't just compare their unemployment rates before and after the change, because you might be misled by national changes in overall unemployment. On the other hand, you can't just compare states that enacted these changes to other states directly, because they might have had different unemployment rates to begin with (and this difference is likely to have been a factor in legislative decisions).
What you have to do is fill out the box. You make a "difference-in-differences" comparison, by looking at the changes in the important quantity in both your experimental and control groups. That is, you treat the before-to-after changes in your control group as a baseline, and see how much the before-to-after changes in your experimental group deviate from this baseline. Where your experimental and control groups are both sizeable, you can use standard statistical tools to see how much of the before-to-after changes are attributable to the change whose effects you're trying to measure.
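The two-by-two arithmetic is simple enough to show in a few lines. Here's a minimal sketch in Python; the numbers are invented for illustration, not drawn from any real study:

```python
def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Difference-in-differences: the treatment group's before-to-after
    change, net of the baseline change seen in the control group."""
    return (treat_after - treat_before) - (control_after - control_before)

# Invented numbers: unemployment falls in both groups after the law
# changes, but falls half a point more in the states that changed it.
effect = diff_in_diff(6.0, 5.0, 7.0, 6.5)
# effect == -0.5: the extra drop attributable (maybe) to the law
```

The point of netting out the control group is exactly the one above: a raw before-and-after comparison in the treated states would credit the law with the full one-point drop, some of which happened everywhere.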
There is nothing wrong with this procedure as I've described it. But, in practice, difference-in-differences papers often go one step further. They look at changes in the "dependent" variable as a time series. That is, if the legal changes were enacted in 1987, say the authors, and we have data every year from 1980 to 2000, we should try to fit a curve to that data. We weight every year equally, use standard statistical heuristics (say, least-squares regression) to fit a curve, and then compare control curves with experimental curves. Surely this technique gives us a more finely-tuned result than just lumping together "before" and "after" data for each state?
Actually, no, say the authors of the paper Jeremy showed me. The problem is that -- especially for data measured at the state level -- there are strong year-to-year correlations that have nothing to do with anything susceptible to manipulation. Treating each year as an independent measurement vastly overstates the useful information present in the time series. The years aren't independent measurements randomly clumped around some "true" underlying value. The year-to-year "error" (which is actually the normal economic noise induced by all the things economists don't understand) tends to persist and replicate itself. In the context of a difference-in-differences measurement, disaggregating the time series tends to mistake these persistent effects for genuine changes caused by something. Lumping all the "before" and "after" data together brings the problem back to the simple two-by-two matrix that difference-in-differences is designed to handle.
As a particularly striking demonstration of the trap, the crowd who wrote w8841 tried running difference-in-differences on random groups of states. That is, they pulled 25 state names out of a hat, and pretended that these 25 states had enacted a placebo "law." They then looked at the difference-in-differences effect of this placebo law on female wages, using the standard, cookbook, statistical treatment I described above. The result: almost half the time, they found an "effect" traceable to their law's passage.
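The mechanism can be reproduced in miniature. The following is my own toy simulation, not the authors' specification (w8841 works with real CPS wage data and state panels): it generates two serially correlated series with no treatment effect whatsoever, then runs a placebo difference-in-differences test that naively treats each year as an independent observation.

```python
import random
import statistics

def ar1_series(years, rho, rng):
    """AR(1) noise: this year's 'economic conditions' are mostly a
    carried-over copy of last year's, plus a fresh shock."""
    e, out = 0.0, []
    for _ in range(years):
        e = rho * e + rng.gauss(0.0, 1.0)
        out.append(e)
    return out

def placebo_rejects(rho, rng, years=20, cut=10):
    """One placebo trial: there is no true effect, but we test a
    made-up 'law' at year `cut` as if every year were independent."""
    t = ar1_series(years, rho, rng)   # "treated" group
    c = ar1_series(years, rho, rng)   # "control" group
    tb, ta, cb, ca = t[:cut], t[cut:], c[:cut], c[cut:]
    did = (statistics.mean(ta) - statistics.mean(tb)) \
        - (statistics.mean(ca) - statistics.mean(cb))
    # Naive standard error, valid only if the yearly points were i.i.d.
    se = (statistics.variance(ta) / len(ta) + statistics.variance(tb) / len(tb)
          + statistics.variance(ca) / len(ca) + statistics.variance(cb) / len(cb)) ** 0.5
    return abs(did / se) > 1.96       # nominal 5% two-sided test

def false_positive_rate(rho, sims=500, seed=1):
    rng = random.Random(seed)
    return sum(placebo_rejects(rho, rng) for _ in range(sims)) / sims
```

With `rho=0` (truly independent years), the test rejects about 5% of the time, as advertised. Crank `rho` up toward 0.9 -- persistent, self-replicating "error" of the kind the paper describes -- and the phantom-effect rate balloons far past the nominal level, which is the paper's point in a nutshell.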
I'm curious whether this discovery will induce a wave of retractions in the economics world. How many recently-publicized results depended on this particular piece of shaky methodology? What else that we think we know is wrong?
In any case, my glorious obsession was Second World War naval aviation. If it took off from an aircraft carrier and had a propeller, I was into it. I didn't like battleships (although some carried float planes, which made them a little bit cool), and I didn't like jets (on some level, they felt like cheating to me). No, my interest was in the great carrier battles of the Pacific.
Compared with the long years of land warfare in Europe, in which every mile of ground was fought over, was bled upon by dogface GIs who literally walked from one end of the continent to the other, the sea war in the Pacific might as well have taken place in an alternate universe. Carrier task forces stalked each other in an uneasy symmetry, each trying to learn the other's location first. A few dozen pilots held the battle's outcome in their hands. Their jobs? To fly a few hundred miles for the purpose of giving an explosive-tipped piece of metal a very specific velocity and direction. I suppose the contest of nerves and wits involved in running these blindfold duels appealed to me more than the martial glory involved in shooting people and blowing things up. After all, I liked the codebreakers, too.
The great high point of my obsession, of course, was the battle of Midway in June of 1942, which I've always considered the turning point of the war. After Midway, the Japanese were permanently on the defensive; with the core of their carrier fleet at the bottom of the ocean, they could no longer keep American landing forces from reaching island after island. I should probably note that, in the context of the overall war, this claim is highly problematic. The importance of Russian victories in 1941 and 1942 is hard to overstate, and British success in holding Egypt and keeping shipping lanes open was also probably as significant, all in all, as the "miracle at Midway." (The phrase is the title of a book by Gordon Prange. My copy is dog-eared.) But, then again, Midway was settled in a matter of minutes, which makes it, on a per-unit-time basis, the most significant battle of the war.
In case it's not already clear, this whole piece is just an excuse for me to retell that story. If you're not interested, you may as well skip ahead to the next entry. That one's about Richard Powers, and it's much shorter.
In early 1942, when things were not at all cheerful for the United States Navy, the cryptographic wing in Hawaii managed to crack JN25, one of the most top-secret codes of the Japanese Navy, well enough to decrypt a fair amount of intercepted radio traffic. From these fragments, the crypto boys determined that a major operation was being planned for June, whose target was to be some location code named "AF."
Suspecting that "AF" was Midway Island, the crypto wing proposed a little experiment. The commander of the Marine base on Midway was instructed (by a suitably secure channel) to radio, using a junky low-level code, that the water filtration unit had broken down. The purifier was fine, but a Japanese signalling post dutifully reported back to Tokyo that "AF" was running short of water. On the strength of this information, Admiral Nimitz ordered everything he had available to Midway, in hopes of ambushing the Japanese carrier strike force.
It should be noted that at this point in the war, the Japanese had won every single engagement they entered, partly from having overwhelming force available, and partly from brilliant on-the-spot tactical decisions. As an ironic result, the naval planners were effectively insulated from any negative consequences of bad operational planning. Their plans had grown progressively more baroque. Pearl Harbor had been audacious, but simple: one task force with one mission launching one attack. The Midway operation involved five separate moving parts, including a wholly superfluous invasion of two obscure Aleutian Islands whose new garrisons then proceeded to wait out the war in complete obscurity. Although the Japanese had four carriers to the American three, they were all but tripping over their own feet in coordinating the operation.
In any event, the Americans knew the Japanese were out there somewhere, but not vice-versa. The U.S. carriers lurked to the north of Midway, expecting the Japanese to show up soon. On the 4th of June, they did, launching an airstrike to soften up Midway's not-very-substantial defenses. A ragtag chain of land-based planes from Midway carried out an ineffectual series of retaliatory attacks on the Japanese carriers through the morning. In the meantime, alerted to the Japanese location, the American carriers launched their own, more substantial strikes.
While the Japanese didn't know that American carriers were in the area, they were taking haphazard precautions just in case, including a search of the area around them with float planes. One of these planes radioed back that it had spotted an American fleet. Admiral Nagumo, commanding the Japanese carrier strike force, had been planning to launch a second strike against Midway to further wear it down, but hearing this news, he ordered the strike force rearmed to deal with the American ships (which would require torpedoes and armor-piercing bombs, rather than the high explosives used for attacking land targets).
But then, the search plane clarified its description to note that the enemy task force didn't include any carriers. No threat there, so it was back to a second strike on Midway. But then, presently, the pilot changed his mind again: yes, there was a carrier after all. Which would be a threat, so it was back to the anti-naval armaments. With all of these sudden mind changes, the crews belowdecks were getting stressed and tired; rather than putting the unused bombs back carefully, they left them casually stacked in the middle of the hangar for later clean-up, once the attack force was away.
But they never got a chance for a later clean-up, because in the meantime, the American carrier-based attacks arrived. The torpedo bombers showed up first, without fighter protection, and were utterly cut to ribbons in a series of brave but futile attacks. One carrier's torpedo-bombing wing suffered 29 killed or missing out of 30 airmen. That said, it's possible to see in their bravery a crucial sacrifice; when the American dive bombers arrived, the Japanese fighter pilots had come down to sea level to engage the torpedo planes. Few eyes and fewer planes were left on the higher altitudes at which dive bombers start their dives. As a result, the dive bombers faced an easy situation: no fighter cover and enemy ships that barely had any idea what was coming. Only a few bombs hit home, but in the below-decks chaos of the rearming, a few was enough. Three of the four flattops were crippled in the space of those few minutes.
From there on in, things were more evenly matched. The remaining Japanese carrier, the Hiryu, got off two sorties, enough to cripple the Yorktown, before the American follow-up strike took care of the Hiryu as well. The invasion of Midway was scrapped, but this fact was incidental compared to the destruction of the Japanese carriers. The net result was to leave the U.S. with a substantial absolute numerical superiority in naval aviation. The American invasion of Guadalcanal, in August, was made with carrier cover, a pattern that endured for the rest of the war.
Even in this version, I'm leaving out so many great episodes. The astonishing repair job on the Yorktown at the end of May, the equally astonishing sinking of the Yorktown by a submarine in the right place at the right time, the endless disputes over who sank which carrier, the ingratitude with which Washington brushed aside Nimitz's plea for commendations for the cryptographers, the brilliance of some of the small-scale tactical decisions made by American pilots, the stupidity of inter-service rivalries, and the unbelievable arrogance of the Japanese Naval General Staff (the initial war-game simulation of the operation didn't come out right, so they refloated simulated Japanese ships until they got the result they wanted. Sound familiar?). There wasn't anyone who told me these things -- just books -- but they're part of my folkloric heritage, or it feels that way. I think that one of the first times I cussed was in imitation of my heroes, the pilots in the history books I gobbled up.
I suppose my particular obsession also explains part of my reaction to the Greatest Generation wave of nostalgia. Back at the half-instinctual childhood level at which anything we truly believe is absorbed, I believe in the remarkable heroism of these ordinary Americans. Prange's passage about the men of the Hornet's doomed torpedo bombing squadron is remarkable; their backgrounds were as diverse as anything Hollywood ever crammed into a World War II movie.
At the same time, those movies ring false to me because they don't resonate with the particular images I associate with this heroism. My boys were steering tin cans around the wide and empty ocean, not crouching in foxholes. Which is to say, I think I can bring a bit of healthy skepticism to my Stephen Ambrose pap, but if you plop me down in front of something with flattops, I go all goggle-eyed. I played the Midway board game with my dad for years, even past the time when I knew just how many historical inaccuracies it contained (The Yorktown had more torpedo bombers than that! The search model is totally unrealistic!). And I know that Midway is an awful movie, but I love it nonetheless.
There's no point to this piece, but I already told you that. I just wanted to crawl inside my old obsession for a bit and see how well it fits. It's a useful exercise. There are senses in which my more recent obsessions operate the same way. I have to remind myself that not everyone is a law student or lawyer -- which means that not everyone finds cases fascinating the same way that I do. Collateral estoppel is not a miracle in whose reflected light we bask; the non-delegation doctrine is not an issue for the ages. It's helpful to go back a bit, to remember what it was like distinguishing TB-6 from TB-8 and the F4F from the F4U and to see whether I can remember the looks on people's faces when I told them the difference.
I love this guy. (He also has a new novel coming out this winter, which is cause for extra celebration.)
Ghosts are present at the hardening of memory into history.
-- Stephen Greenblatt
Nineteen percent of all Americans think that they are in the top one percent of income-earners. Another twenty percent think that they'll be there someday. That is to say, almost two-fifths of Americans are deeply over-optimistic about their place on the income ladder.
By way of reference, it takes about $400,000 in annual income to break the top one percent. This survey tells us, more or less, that people whose household take is over $80,000 think they're in the top one percent and that people whose households make at least $50,000 expect to make the top one percent eventually.
It's not at all clear to me what the political implications are; I know only that there ought to be some implications from statistics this stunning.
Guy Two: Let's get some pizza.
Guy Three: How about if I bash your face in?
The article does give some sense of this fact, albeit inadvertently. The pull quote -- "Simon . . . says of his career, 'The whole game was: Can I get the sounds in my head on tape?'" -- tells you everything you need to know. And in 11 pages of boring meanderings with Simon, including a rehearsal at which Simon tells the other musicians exactly which notes to play, the basic truth emerges: Simon is a composer, born too late. He runs around, looking for improvisation and musical innovation, then takes it and bottles it into perfect -- and perfectly static -- pop songs. In this precise sense, the charges of cultural imperialism levelled at him might be said to be true. He's an auteur, a classical composer by birth, but abandoned as a baby on the doorstep of a folk musician and raised by rock-and-rollers. Sure, the kid has rhythm, but does he swing?
But such questions could never be asked in the article itself. This is, after all, The New Yorker. And, in classic fashion, they give their music puff piece to a writer, in this case Alec Wilkinson, who doesn't get music. Take this summary of his years with Garfunkel:
They made five albums and then, in 1970, they split up, partly because Garfunkel wanted to act in movies. In 1981, they played a reunion concert in Central Park that was attended by half a million people. They considered making another album.
I don't know what to say. Garfunkel wanted to act? To act? You spend one sentence on the breakup and this is what you say? How about that Simon wanted to break away from folk-rock and Garfunkel didn't? I mean, I have heard both Simon and Garfunkel say sharper, more interesting, revealing, things about the breakup. But Wilkinson's instinct is always to avoid the music, because when he writes about music, he doesn't know what to say beyond the narrow technical descriptions.
Take his description of one song: ". . . starts with his playing a briskly repeated pattern of simple chords." This is a perfectly accurate description of "Old," and perfectly useless. What's catchy about "Old" is the quality of his guitar work. The opening stutters; Simon is slapping his strings and cutting his chords off as quickly as he plays them. It's playful, it's a little bit searing, it sets up the double-quick half-funk groove of the song. Millions of songs start with "a briskly repeated pattern of simple chords," many of them by Paul Simon.
What makes this article so frustrating is the knowledge that sometimes, The New Yorker seems to play at this game of assigning writers to subjects outside their traditional kens, and to get it right. Their profile of Radiohead was written by Alex Ross, their classical music critic, but it remains the best article on Radiohead ever written. But that wasn't really this; Ross, after all, knows his music. His distance from the usual cliches of rock writing gave him the freedom to connect their off-stage actions to their music, but his writing was built around the respect a musical critic offers to musicians.
But such is the exception, rather than the rule. If Paul Simon's music is a metaphor for the man, perhaps its polished but predictable quirks are also a metaphor for the magazine.
This observation raises a pressing stylistic point: is there a consistent policy about use of the first person plural here at the Lab? Strangely enough, I think there may well be. Although I haven't gone through my archives to check, I suspect that the editorial "we" is reserved for situations in which I'm writing about general editorial policy or technical matters. That is, "we" refers to everything that would be handled by the intern if we had an intern.
Anyway, since I do the programming for the scripts that make the site work (even though "we" run those scripts), it should be noted that I've been investigating the problem. It seems to be something ganging agley with the FTP put command, which reports success when, in actuality, it has failed utterly; graphical FTP clients aren't doing any better. For now, I'm hand-uploading files using a one-file-at-a-time SFTP client, but this method of doing things isn't tenable in the long run.
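For what it's worth, one workaround is to stop trusting the put command's return code and verify every upload yourself. Here's a sketch of that idea in Python's ftplib (the site's scripts are in Perl, but the same logic ports to Net::FTP; note that the SIZE command is optional and some servers refuse it):

```python
import os
from ftplib import FTP, error_perm

def verified_put(ftp, local_path, remote_name):
    """Upload a file, then confirm the remote copy's size matches the
    local one, since STOR can report success after a silent failure."""
    with open(local_path, "rb") as fh:
        ftp.storbinary(f"STOR {remote_name}", fh)
    try:
        # SIZE is an optional extension; servers without it raise an error
        remote_size = ftp.size(remote_name)
    except error_perm:
        return False  # can't confirm the upload, so treat it as unverified
    return remote_size == os.path.getsize(local_path)
```

A size check won't catch every corruption, of course, but it does catch the "success that actually failed utterly" case, and the caller can retry or fall back to the hand-upload routine when it returns False.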
I'm looking for a good Windows SFTP client that can be automated well enough to run interactively from Perl. If you know of one, please get in touch.
My new theory is that you become a lawyer on the day you can argue, with a straight face, that the owner of the nitroglycerine factory shouldn't be liable for the destruction of the building next door.
To be fair, I suppose he disappeared himself. His name was missing from his mailbox and the slot taped over. I had to check the facebook to figure out whose slot it was, but once I had a name and a face, it struck me that I hadn't seen anyone matching that description around in a while. If ever. Maybe he never registered in the first place . . . or maybe I've stumbled across the explanation for the email they sent out saying there was an open dorm room for the spring.
My overall feeling at this discovery is a sort of nervous apprehension, of the sort the characters in horror movies start to feel as they realize what sort of movie they're in. You know, after the first minor character has disappeared but before the body turns up.
I see the world almost exclusively in terms of metaphors, it should be noted. My technique of dealing with a tough concept or a new situation is to find some concept with a parallel internal structure and see if the analogy holds. The more outrageous the metaphor, the better. Harder to forget.
So, if law school is a horror movie, it's also a kung-fu movie, in which we're the bad guys. The Socratic method is a way for the professors to take us on one at a time. Our combined ideas would be enough to thoroughly deconstruct the professor's pet theory, but we never get the chance to pool them. Between cutting people off before they've completed their thoughts, mischaracterizing their positions, avoiding obvious connections between different people's ideas, and other tricks of equivocation, the bullies behind the lecterns wind up winning every time.
Well, okay. So it's the most enjoyable bullying to which I've ever been subjected. But not everyone reacts the same way to it. One of my classmates stormed out of the class at the break after a particularly egregious incident of being misquoted; I know everyone here feels a bit of that anger from time to time.
I wonder if maybe it was the manipulation-as-education that got to our former colleague.
Last night's strange half-dreams were par for the course. There was one about being part of some international society for dealing with illness among those presenting papers at conferences. It was very self-referential and only tangentially tied in to the fact that I was actually lying in bed with a cold.
And then there was the one in which someone was warning me not to step out of line, because first they send a policeman after you, and then they send a bunch of police, and then they bring in the police helicopter, and if that doesn't work, they bring in the ninja helicopter.
I realized yesterday that my experience of law is synaesthetic; I experience it as a jumble of shapes and colors. My torts book is blue; so are torts themselves. The exact shade varies with the kind; strict liability cases are a deep blue, almost navy, but contributory negligence is a pale sky blue.
Here in this hotbed of theory, it's common to talk about how the choice of legal rules can completely change the way we look at a problem. But I see these choices in vivid color: a wave of yellow on one side and a wave of green on the other, surging over each other and swirling about. Underneath it all runs a spider-web of grey-black procedural rules which occasionally flash bright red.
It's profoundly beautiful, but I'm not sure whether it signifies anything other than a lack of sleep.
A lot of people have been getting worked up about copyright. Getting worked up is good. There's plenty to get worked up about, every enlightenment has to begin somewhere, and it's not like copyright's abuses aren't an essential part of the Problem. But I can't escape the feeling that, politically, copyright may be a Hempfest issue.
We're going to be nostalgic for the days when bad copyright laws seemed like big problems. Oh, so nostalgic.
Dave Barry said it best. We are not making this up.
The piece you're looking for is here -- or just scroll down. It just hasn't been linked to using the standard Lab permalink format.
Of course, this is entirely my fault, since I never advertised that the Lab has permalinks. I'm still trying to figure out why you don't see a hand cursor when you mouse over the green boxes, but they do have the right URLs to link specific pieces.
Anyway, enjoy your visit, and please don't feed the animals.
I've had one serious omission pointed out to me already; I'm sure there are more on the way. But still, I like it. Writing it helped me figure things out -- which has always been one of the great underappreciated virtues of writing. You can think any damn fool thing you want, but if you want to explain it to others in complete sentences, you have to push through your thoughts at least far enough to organize them in a passable fashion. Writing helps me think through the implications.
In the present instance, the opinion itself is a distraction, a shell game if you will. But thinking about why the opinion seems so irrelevant turns out to be quite helpful in thinking about the computer industry and the future of consumer freedom in a Microsoft world. Or, at least, I think so.