If you think that you can think about a thing, inextricably attached to something else, without thinking of the thing it is attached to, then you have a legal mind. -- Thomas Reed Powell

Something similar might be said about mathematicians. On one account, the art of mathematics consists in taking successive abstractions of reality until the abstractions themselves shine with an inner beauty.

Beauty aside, the precise relationship between the language of mathematics and the "objects" purportedly described by that language is hotly contested. Philosophers, linguists, and semioticians have produced numerous, mutually contradictory explications of the distinction between words and things, between signs and signifieds. Such debates have a labyrinthine quality: there exists no linguistic hair so finely split it cannot be split further.

Mathematics itself, though, has some interesting points to make about the limits of description. The basic question at stake here -- "what might we mean when we say 'foo'?" -- can be formalized, and mathematicians have extracted some interesting results by training their formal tools on themselves.

Let us consider a simple example, drawn from constructions ("universal algebra," for the curious) common in theoretical computer science. We would like to give a precise mathematical description of a light bulb. To the computer scientist, the crucial question to ask is which properties of the light bulb we care about. Very well: we can tell whether or not it's on, and there's a switch on the wall we can flip.

To be more precise, following the usual practice, we introduce a formal language for talking about light bulbs. This language has the following basic 'words':

- "on" -- this word represents a light bulb we know to be in the on position, shining brightly into the room.
- "off" -- this word represents a light bulb that's off, sitting forelornly in the dark.
- "flip(_)" -- this word represents the action of flipping the wall switch, changing the light bulb from off to on, or vice-versa.

From these words, we form new 'light bulb terms' by stringing together these words in a few precisely specified ways. We can either

- write down a primitive term -- either "on" or "off" -- or
- take an existing term L and form "flip(L)".

The idea is that "flip(_)" is actually a function: given a light bulb, flipping the switch yields something which is still a light bulb, albeit one in some (potentially) different state. The following are all 'light bulb terms' of this language:

"on" , "off" , "flip(on)" , "flip(flip(flip(off)))"

To complete the model, we add the following two rules, which build in our knowledge of how light bulbs actually work:

- "flip(on)" = "off"
- "flip(off)" = "on"

Thus, we should be able to derive such profound results as

- "flip(flip(off))" = "flip(on)" = "off"

so that, as we know, flipping the switch twice leaves a dark room dark.
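Mechanically, the derivation above is nothing but string replacement. A few lines of Python (a sketch, reusing the spellings from the term language) make that concrete:

```python
def simplify(term):
    """Apply the two rules -- flip(on) = off and flip(off) = on --
    as literal string rewrites, until neither applies."""
    while "flip(on)" in term or "flip(off)" in term:
        term = term.replace("flip(on)", "off")
        term = term.replace("flip(off)", "on")
    return term

print(simplify("flip(flip(off))"))  # off -- double-flipping is futile
```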

The key point here is that "=" is a purely FORMAL relation: the rules that let me conclude that double-flipping is futile are really rules about how to manipulate odd-looking strings of words. I'm allowed to erase "flip(on)" anywhere it appears and replace it with "off," WITHOUT thinking at all about what "flip," "on," and "off" mean.

The fun part comes when we start to ask about possible meanings. We say that something (possessed of a bunch of sub-things) is a "model" for our formal system if that something supplies us with:

- some sub-thing called OFF
- some sub-thing called ON
- an operation, called FLIP, which takes any given sub-thing to some sub-thing

where OFF, ON, and FLIP satisfy the above rules in the sense that:

- applying FLIP to the sub-thing ON gives us the sub-thing OFF
- applying FLIP to OFF gives us ON.

Note that "on" and ON aren't quite the same thing. The former is a symbol of our formal system. The latter is some particular 'real' light bulb that's on. There's an 'obvious' relationship between the two: the one is supposed to stand for the other, but the goal of all this formalization is to be precise about that 'obvious.' Every possible result in our formal system -- such as "flip(flip(off))" being the same as "off" -- turns into a corresponding result in the model: FLIP applied to the result of FLIP'ing OFF gives us back OFF again.

Why be so abstract about what a model is? You see, there are a
*lot* of different possible models for this system. The lamp by
my desk is one: ON is the state in which power is flowing to the
100-watt bulb, OFF is the state in which it isn't, and FLIP is
what happens when I turn a knob. But the traffic light out my
window is another model: ON is the state it's in when it's red,
and OFF is the state it's in when it's green, and FLIP is what
happens when it goes from red to green or from green to red.

Green to red? What happened to yellow? Oh, it's in there. Like I said, FLIP is what happens when it goes from green to red: I'm thinking of FLIP as the complete transition.

Objection from the floor: but what happens if we FLIP a yellow light? Oh, let's say we get a yellow light again. Or maybe we get a green light. From the perspective of the formal system, IT DOESN'T MATTER. The traffic light has a richer structure than we need for it to be a model of a light bulb. Once a model satisfies the basic requirements of your formalism, anything else it does is its own business: it can run a gambling joint on the side, fry eggs, daydream about paragliding, whatever.

In fact, it's pretty easy to produce an infinite number of distinct models for the above formal system. The "regular" light bulb has two states; the traffic light has three. Imagine a light bulb with four states: ON, OFF, Stan, and Ollie. Or one with five: ON, OFF, Groucho, Chico, and Harpo. Where FLIP of Groucho is Groucho, and FLIP of Chico is Harpo, and FLIP of Harpo is ON. And then there's some light bulb with six states, and on and on and on.
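The claim that anything satisfying the two rules counts as a model is easy to check mechanically. Here is a sketch (the state names are invented) that admits both the lamp and the traffic light, and rejects a bulb that doesn't obey the rules:

```python
def is_model(ON, OFF, FLIP):
    """A candidate interpretation is a model iff the two rules hold."""
    return FLIP(ON) == OFF and FLIP(OFF) == ON

# The desk lamp: two states.
lamp = {"lit": "dark", "dark": "lit"}
assert is_model("lit", "dark", lamp.get)

# The traffic light: three states.  What FLIP does to "yellow"
# is its own business -- the rules never mention it.
light = {"red": "green", "green": "red", "yellow": "yellow"}
assert is_model("red", "green", light.get)

# A bulb stuck off is NOT a model: FLIP of OFF fails to give ON.
broken = {"lit": "dark", "dark": "dark"}
assert not is_model("lit", "dark", broken.get)
```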

One way of looking at this is to say that mathematics is quite
explicit about the limitations of its own language. I can, in
some sense, actually quantify the uncertainty in the precision
of my words. When I say "ON," I could mean this, or that, or
this other thing. I am capable of reasoning about whatever it
is I'm describing, in the sense that I know that if I FLIP my ON
twice, I'll have the same thing. So I can draw meaningful
conclusions, without ever *really* pinning down the exact
reference. I'm NOT talking about a broken light bulb, in which
FLIP of OFF is OFF (and not ON). Math is good as far as it
goes, which is far enough to talk about how far it goes.

Now, that said, one amazingly cool construction, which comes up
all over the place in math, results if we go back and look
closely at the distinction between "on" and ON. What if we were
to let ON be the *string* "on"? Similarly, we'll let FLIP be
the act of taking some string S, writing "flip(" in front of it,
and then writing ")" after it. So, given, for example,
"flip(on)"
we can apply FLIP to it to get
"flip(flip(on))"

That is, we're using the 'light bulb terms' of our formal system
as a model for the system. It's a bit confusing, but worth
thinking through closely. We have a formal system, some set of
rules for manipulating strings. These strings can be
construed as representing various things, but one of the things
they can represent is *themselves*.
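As a sketch, the syntactic model takes the term strings themselves as its sub-things, with one wrinkle the prose glosses over: 'sameness' of sub-things here is the formal "=" itself, so we compare strings only after applying the rewrite rules.

```python
ON, OFF = "on", "off"          # the sub-things ARE the strings

def FLIP(term):
    """FLIP in the syntactic model: write flip( in front, ) after."""
    return f"flip({term})"

def simplify(term):
    """Equality of terms means equality up to the two rewrite rules."""
    while "flip(on)" in term or "flip(off)" in term:
        term = term.replace("flip(on)", "off").replace("flip(off)", "on")
    return term

# The model satisfies the rules: FLIP of ON is OFF, FLIP of OFF is ON.
assert simplify(FLIP(ON)) == OFF
assert simplify(FLIP(OFF)) == ON
```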

As a construction, it has two very nice properties. First, it
can be applied to *any* formal system constructed along the
basic lines used above. If you can write down your system
precisely, you can reason just as precisely about the writings.
And second, the model you get is in some sense 'natural,'
because you're guaranteed that nothing extraneous will show up
in it. It won't contain any yellow or Chico that you didn't
mean to be included. It's almost tautological, but profound:
this 'syntactic' model perfectly corresponds to the formal
system precisely because the formal system lacks the terminology
to describe the things it doesn't describe.

The two names most closely associated with this remarkable insight are Gödel and Herbrand. Gödel used it in his Completeness Theorem, raising hopes among other mathematicians about the power of mathematics -- hopes he cruelly smashed the next year with his Incompleteness Theorem. Herbrand used it to create the field of proof-by-computer, by showing that mathematical problems could be reduced to purely syntactic questions, to which mechanical proof techniques could be applied.

Of course, he did this ten years before there were computers, which really only adds to the accomplishment, when you think about it.