How Law Responds to Complex Systems

Cross-posted from Concurring Opinions

In my first post on A Legal Theory for Autonomous Artificial Agents, I discussed some of the different kinds of complex systems law deals with. I’d like to continue by considering some of the different ways law deals with them.

Chopra and White focus on personhood: treating the entity as a single coherent “thing.” The success of this approach depends not just on the entity’s being amenable to reason, reward, and punishment, but also on it actually cohering as an entity. Officers’ control over corporations is directed to producing just such a coherence, which is a good reason that personhood seems to fit. But other complex systems aren’t so amenable to being treated as a single entity. You can’t punish the market as a whole; if a mob is a person, it’s not one you can reason with. In college, I made this mistake for a term project: we tried to “reward” programs that share resources nicely with each other by giving them more time to execute. Of course, the programs were blithely ignorant of how we were trying to motivate them: there was no feedback loop we could latch on to.
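To make the failure mode concrete, here is a minimal sketch (in Python, with invented names and numbers, not reconstructed from the actual project) of the kind of scheme we attempted: the scheduler hands out extra time to programs that yield resources, but because a program's behavior never consults that reward, the incentive has no one to persuade.

```python
class Program:
    """A toy program with fixed behavior that ignores any incentives."""

    def __init__(self, name, greedy):
        self.name = name
        self.greedy = greedy          # behavior is hard-coded
        self.time_received = 0

    def run(self, quantum):
        self.time_received += quantum
        # A greedy program hogs the shared resource; a nice one yields half.
        # Crucially, this choice never looks at time_received: the program
        # cannot perceive the scheduler's "reward," so no feedback loop forms.
        return 0 if self.greedy else quantum // 2   # time yielded back


def schedule(programs, rounds=100, quantum=10):
    """Reward sharing: whatever a program yields comes back as bonus time."""
    bonus = {p.name: 0 for p in programs}
    for _ in range(rounds):
        for p in programs:
            yielded = p.run(quantum + bonus[p.name])
            bonus[p.name] = yielded   # the "reward" for sharing nicely
    return {p.name: p.time_received for p in programs}


totals = schedule([Program("nice", greedy=False), Program("greedy", greedy=True)])
# The nice program accumulates more time, but the greedy program's behavior
# never changes: the incentive is real, yet nothing in the system responds to it.
```

The punishment and reward are perfectly well defined from the scheduler's point of view; the problem is that the regulated entities have no channel through which the incentive can alter their behavior.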

Another related strategy is to find the man behind the curtain. Even if we’re not willing to treat the entity itself as an artificial person, perhaps there’s a real person pulling the levers somewhere. Sometimes it’s plausible, as in the Sarbanes-Oxley requirement that CEOs certify corporate financial statements. Sometimes it’s wishful thinking, as in the belief that Baron Rothschild and the Bavarian Illuminati must be secretly controlling the market. This strategy only works to the extent that someone is or could be in charge: one of the things that often seems to baffle politicians about the Internet is that there isn’t anyone with power over the whole thing.

A subtle variation on the above is to take hostages. Even if the actual leader is impossible to find or control, just grab someone the entity appears to care about and threaten them unless the entity does what you want. This used to be a major technique of international relations: it was much easier to get your hands on a few French nobles and use them as leverage than to tell France or its king directly what to do. The advantage of this one is that it can work even when the entity isn’t under anyone’s control at all: as long as its constituent parts share the motivation of not letting the hostage come to harm, they may well end up acting coherently.

When that doesn’t work, law starts turning to strategies that fight the hypothetical. Disaggregation treats the entity as though it doesn’t exist — i.e., as though it has no collective properties. Instead, it identifies individual members and deals with their actions in isolation. This approach sounds myopic, but it’s frequently required by a legal system committed to something like methodological individualism. Rather than dealing with the mob as a whole, the police can simply arrest any person they see breaking a window. Rather than figuring out what Wikipedia is or how it works, copyright owners can simply sue anyone who uploads infringing material. Sometimes disaggregation even works.

Even more aggressively, law can try destroying the entity itself. Disperse the mob, cancel a company’s charter, or conquer a nation and dissolve its government while absorbing its people. These moves have in common their attempt to stamp out the complex dynamics that give rise to emergent behavior: smithereens can, after all, be much easier to deal with. Julian Assange’s political theory actually operates along these lines: by making it harder for governmental conspiracies to communicate in private, he hopes to keep them from developing entity-level capabilities. For computers, there’s a particularly easy entity-destroying step: the off switch. Destruction is recommended only for bathwater that does not contain babies.

When law is feeling especially ambitious, it sometimes tries dictating the internal rules that govern the entity’s behavior. Central planning is an attempt to take control of the capriciousness of the market by rewiring its feedback loops. (On this theme, I can’t recommend Spufford’s quasi-novel Red Plenty highly enough.) Behavior-modifying drugs take the complex system that is an individual and try to change how it works. Less directly, elections and constitutions try to give nations healthy internal mechanisms.

And finally, sometimes law simply gives up in despair. Consider the market, a system whose vindictive and self-destructive whims law frequently regards with a kind of miserable futility. Or consider the arguments sometimes made about search engine algorithms — that their emergent complexity passeth all understanding. Sometimes these claims are used to argue that government shouldn’t regulate search, and sometimes to explain why even Google’s own employees can’t fully say why the algorithm ranks certain sites the way it does.

My point in all of this is that personhood is hardly inevitable as an analytical or regulatory response to complex systems, even when they appear to function as coherent entities. For some purposes, it probably is worth thinking of a fire as a crafty malevolent person; for others, it makes more sense to dictate its internals by altering the supply of flammables in its path. (Taking hostages to sway a fire is not, however, a particularly wise response.) The most appropriate legal strategy for a complex system will depend on context-specific factors — and on a clear understanding of the nature of the beast.

UPDATE: Whoops. I mistakenly posted an earlier draft. The whole thing is here now.