I’m also guest-blogging this week at Concurring Opinions, as part of a symposium on Samir Chopra and Laurence White’s A Legal Theory for Autonomous Artificial Agents.
The basic question LTAAA asks—how law should deal with artificially intelligent computer systems (for different values of “intelligent”)—can be understood as an instance of a more general question: how should law deal with complex systems? Software is complex and hard to get right, often behaves in surprising ways, and is frequently valuable because of those surprises. It displays, in other words, emergent complexity. That suggests looking for analogies to other systems that also display emergent complexity, and Chopra and White unpack the parallel to corporate personhood at length.
One reason that this approach is especially fruitful, I think, is that an important first wave of cases about computer software involved its internal use by corporations. So, for example, there’s Pompeii Estates v. Consolidated Edison, which I use in my casebook for its invocation of a kind of “the computer did it” defense. Con Ed lost: it’s not a good argument that the negligent decision to turn off the plaintiff’s power came from a computer, any more than “Bob the lineman cut off your power, not Con Ed” would be. Asking why and when law will hold Con Ed as a whole liable requires a discussion about attributing particular qualities to it—philosophically, that discussion is a great bridge to asking when law will attribute the same qualities to Con Ed’s computer system.
But corporations are hardly the only kind of complex system law must grapple with. Another interesting analogy is nations. In one sense, they’re just collections of people whose exact composition changes over time. Like corporations, they have governance mechanisms that are supposed to determine who speaks for them and how, but those mechanisms are subject to far more play and ambiguity than their corporate counterparts. “Not in our name” is a compelling slogan because it captures the sense that the entity can be said to do things that aren’t done by its members and to believe things that they don’t.
Mobs display a similar kind of emergent purpose through even less explicit and well-understood coordination mechanisms. They’re concentrated in time and space, but it’s hard to pin down any other constitutive relations. The tipping points, when a mob decides to turn violent, or to turn tail, or to take some other seemingly coordinated action, need not emerge from any deliberative or authoritative process that can easily be identified.
In like fashion, Wikipedia is an immensely complicated scrum. Its relatively simple software combines with a baroque social complexity to produce a curious beast: slow and lumbering and oafish in some respects, but remarkably agile and intelligent in others. And while “the market” may be a social abstraction, it certainly does things. A few years ago, it decided, fairly quickly, that it didn’t like residential mortgages all that much—an awful lot of people were affected by that decision. The “invisible hand” metaphor personifies it, as does a lot of econ-speak: these are attempts to turn this complex system into a tractable entity that can be reasoned about, and reasoned with.
As a final example of complex systems that law chooses to reify, consider people. What is consciousness? No one knows, and it seems unlikely that anyone can know. Our thoughts, plans, and actions emerge from a complex neurological soup, and we interact with groups in complex social ways (see above). And yet law retains a near-absolute commitment to holding people accountable, rather than amygdalas. By taking an intentional stance towards agents, Chopra and White recognize that law sweeps all of these issues under the carpet, and they ask when it becomes plausible to do the same for artificial agents as well.