Why Can’t the English Learn How to Speak?


And why can’t very smart legal academics get their Internet architecture arguments right? Solum and Chung’s generally quite good paper The Layers Principle: Internet Architecture and Law (draft online here; the published version is 79 Notre Dame L. Rev. 815 (2004)) is a cogent (albeit unnecessarily long) explanation of layering principles in the design of the Internet and of their implications for sensible Internet regulation and for law in general. Modularity is good, layering is a smart and general form of modularity, and they provide reasonable arguments for why law should try to cut with the grain and focus on one layer at a time, rather than against the grain by trying to affect one layer by tinkering with another. But in the process of trying to distinguish layering from the end-to-end (or “stupid network with smart endpoints”) principle, they write:

Conceptually, the layer separation does not follow from the end-to-end principle, because the end-to-end principle doesn’t tell us to separate the TCP, IP, and physical layers, whereas the end-to-end principle does follow from the separation of the application layer from the lower network layers.

Wrong. The core claim here is that you need a layering principle to distinguish TCP, IP, and physical layers, and that end-to-end just won’t suffice. Watch closely as I disprove the claim by counterexample:

The physical level—the actual wires and wireless links that connect computers in a network—is hardware; everything else is software. You have to distinguish the physical layer from everything else; they’re just made of different stuff. That’s something like a law of nature.

That brings us to the IP layer. Having one common packetized routing protocol is a fundamental design decision of the Internet. IP is what makes it the Inter-net; it’s the lingua franca that different networks speak to enable packets to flow freely among them. That decision doesn’t involve a layering principle so much as a universality principle. One could have networks with all sorts of different packet-routing protocols; the decision to standardize on IP involves a decision to make packet routing the same everywhere.
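
For the concretely minded: what “the same everywhere” means in practice is a fixed header format that every router on the path must be able to parse. Here’s a minimal Python sketch of the 20-byte IPv4 header from RFC 791 (the checksum is left unset for brevity; a real stack computes it over the header):

```python
import struct

def build_ipv4_header(src: bytes, dst: bytes, payload_len: int) -> bytes:
    """Pack the fixed 20-byte IPv4 header (RFC 791) that every router parses."""
    version_ihl = (4 << 4) | 5       # IPv4, header length = 5 x 32-bit words
    tos = 0                          # type of service
    total_length = 20 + payload_len  # header plus payload, in bytes
    identification = 0               # used when reassembling fragments
    flags_fragment = 0               # no fragmentation
    ttl = 64                         # decremented by every router along the path
    protocol = 17                    # what rides on top: 17 = UDP, 6 = TCP
    checksum = 0                     # left unset here; real stacks compute it
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, tos, total_length,
                       identification, flags_fragment,
                       ttl, protocol, checksum, src, dst)

header = build_ipv4_header(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", payload_len=0)
assert len(header) == 20
```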

IP is sufficient to get a lossy, non-ordered, unacknowledged stream of datagrams from point A to point B anywhere on a connected network, passing through points E, I, O, U, and sometimes Y along the way. Since many applications would like in-order delivery, guarantees that no datagrams have been dropped, and/or acknowledgment that the datagrams have gotten through, the question then arises of where to implement these “transport” functions.
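
UDP is little more than this raw datagram service with port numbers attached, so a few lines of Python make the contract vivid (the address and port here are made up):

```python
import socket

# Each sendto() hands one independent datagram to the stack and returns
# immediately. Nothing guarantees arrival, ordering, or uniqueness, and
# no acknowledgment ever comes back to the sender.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(5):
    sock.sendto(f"datagram {i}".encode(), ("192.0.2.7", 9999))
# At this point the sender has no idea which, if any, datagrams arrived.
```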

One could put those functions in the network. The routers in the middle could perform reordering and guarantee delivery at each hop. But the end-to-end principle says not to do this. The end-to-end principle says you should shove these transport functions into the endpoints: computers A and B, rather than the vowels in between. That way, if A wants guaranteed delivery, it can ask B for acknowledgments and retransmit if a packet never gets through. This is where TCP comes in. But note that since IP is “spoken” at every node in the network, whereas these transport features are only relevant to the endpoints, of course you need a new layer for them. The layering of TCP over IP therefore flows from the end-to-end principle every bit as much as vice-versa.
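
To see how little the network needs to know about this, here’s a toy stop-and-wait version of what TCP does, sketched in Python over UDP (the address, the four-byte sequence-number format, and the timeout are all invented for illustration). The routers between A and B forward ordinary IP packets and never see any of it:

```python
import socket

PEER = ("192.0.2.7", 9999)  # endpoint B; address is illustrative

def send_reliably(sock: socket.socket, seq: int, payload: bytes,
                  max_tries: int = 8) -> None:
    """Retransmit one datagram until B acknowledges its sequence number."""
    sock.settimeout(0.5)                      # retransmission timer
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(max_tries):
        sock.sendto(packet, PEER)             # fire the datagram at B
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return                        # B saw it; reliability achieved
        except socket.timeout:
            continue                          # packet or ack lost: resend
    raise TimeoutError("peer never acknowledged")
```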

The point here is that there is a fundamental distinction between layering in the set of protocols spoken within the network, and layering in the set of protocols spoken over the network by the endpoints. The latter are layered on top of the former in consequence of the end-to-end principle. Crossing between network-level and endpoint-level layers creates serious transparency problems and leads to the kind of regulatory trouble that Solum and Chung worry about. But layer-crossing purely within the network-level layers can be more readily justified by technical exigencies. And layering within the endpoints is often simply a matter of engineering convenience; A and B layer on top of TCP for most applications but are likely to switch to UDP for their gaming.
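
That last choice really is just a per-application toggle at the endpoints; to the routers in between, both alternatives are indistinguishable streams of IP packets:

```python
import socket

# The endpoint picks its transport per application; the network forwards
# IP packets either way and is none the wiser.
web_sock  = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: ordered, acknowledged bytes
game_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: low-latency fire-and-forget
```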

This point is made, in a somewhat different context, by Michael Walfish et al. in Middleboxes No Longer Considered Harmful:

Note that this last claim satisfies network-level layering but allows violations of higher-level equivalents, e.g., an explicitly addressed firewall that looks at application payloads upholds the rules just given but flouts application-level layering. In general, this paper claims that DOA improves on the status quo by restoring network-level layering but does not insist that intermediaries adhere to higher-level layering. Why not? Higher-level layers define how to organize host software, and one can imagine splitting host software among boxes using exotic decompositions. Defining both higher-level layering and an architecture that respects these higher layers is a problem that requires care and one we have left to future work. In the meantime, we believe that hosts invoking intermediaries should decide how best to split functions between them and their intermediaries.

Thus, Solum and Chung are correct that the separation of application from TCP (and thus from all lower-level protocols) is not required by end-to-end and must be explained as an instance of layering. But there is little regulatory significance in the distinction between application and TCP layers. I’m not aware of any government proposals to regulate the contents of the TCP implementation on anyone’s desktop or laptop computer. When they worry about layer-crossing, they’re really concerned with network intermediaries (who should only be speaking IP and lower-level protocols) sneaking a peek at the contents of the higher-level protocols (that should only be seen by the endpoints).

Their examples are good. Their general point is good. But in their zeal to emphasize the important work that layering does, they go a step too far. It’s not layering per se that matters for policy purposes; it’s the interaction of layering with ubiquitous intermediaries. It’s important and useful to recognize that intermediary censorship involves layer crossing. But that doesn’t mean that all forms of layer crossing raise the same issues. By emphasizing layering qua layering, Solum and Chung push aside the critical distinction between the layers that intermediaries are expected to work with and the layers that they should ordinarily leave alone.

This has been my rant for the day, although Aislinn informs me that it’s better to rant about many small things than about one big thing. She is willing to concede, however, that I may be a ranting specialist while she’s more of a ranting general practitioner.


The physical level—the actual wires and wireless links that connect computers in a network—is hardware; everything else is software. You have to distinguish the physical layer from everything else; they’re just made of different stuff. That’s something like a law of nature.

The physicist in me says you’re wrong here. It’s all hardware, all the time. “Software” is just a useful high-level abstraction for thinking of things as, say, an open TCP socket or an RSS reader, rather than just a particular useful arrangement of electrons or a certain pattern of magnetic fields on a platter.

Maybe this sounds like I’m just being pedantic, but I think this is actually a fairly important point. The different layers are not only not made of fundamentally different kinds of stuff, they’re not even made of different specific atoms. The very same set of atoms can be simultaneously an HTTP connection, a TCP socket, an IP packet, an Ethernet frame, and a piece of CAT5 cable. But nobody’s brain can think about HTTP connections in terms of the motion of electrons back and forth down a CAT5 cable. So layers are a useful abstraction for thinking about telecommunications—just as the species is a useful abstraction for thinking about organisms in biology, or as the organism itself is a useful higher-level construct of molecules. But software is still always made out of atoms, no matter how high-level, just as your favorite cat is still made out of molecules, no matter how cute and fuzzy the end result. :-) It’s hard to point at a molecule and say “that’s where the cute comes from”, just as it’s hard to point at patterns of electrons and talk about TCP connections, but the presence of these emergent properties doesn’t require invoking the presence of any fundamentally different stuff.


The philosopher in me agrees with the physicist in you that all of these abstractions are equally unreal. But both of them are wrong on a practical level until we get widespread functional nanotechnology. Until then, anything at the physical layer is much harder to work with than anything at any other layer.