The principle of absolute freedom of speech has embedded in it a basic premise: that words do not inherently mean anything. This wasn’t obvious to me and felt surprising, in the this-must-be-wrong way, when I noticed it. But if we agree that there exist things so bad that they should be prohibited, like murder, and simultaneously hold that no sequence of words is so inherently bad that it should be prohibited, then we must subscribe to the view that words-in-themselves cannot, in the general case, map to anything imbued with inherent meaning.
But of course this isn’t completely true — blackmail, shouting fire in a crowded theater, and so on — and so we do accept some limits on speech to try to balance these equities. And things like the QAnon phenomenon are fascinating: for any observation you can make about a topic they care about, a Q adherent can make a corresponding one that ends up at a completely different conclusion. This is weird! That this is even possible points to deep problems in epistemology.
I’m not really interested in the gradations of freedom of speech, though; what I’m actually interested in is the more basic question of what meaning is in the first place. Like, what at the deepest level makes blackmail different from coordinating dinner plans?
A physical account of the origin of a sense of meaning has eluded us. Indeed, within the first two paragraphs of Shannon’s seminal 1948 information theory paper, he notes explicitly:
Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. (Shannon 1948)
The field that studies the emergence of meaning, loosely, is semiotics. Though — or maybe because — the idea of meaning seems so obvious and basic that it needs no further explanation, it has proven exceptionally slippery to pin down, despite a vast literature dating all the way back to John Locke. My opinion is that a satisfying understanding of meaning must be a physical theory, since in a physical universe it is distressing to see things appear that do not seem to have any basis in physics. (It isn’t a theory of everything if it doesn’t actually describe everything.) Semiotics has not, as yet, produced an explanation that uses equations to give a physical account of meaning, and so I see an unresolved problem.
It is helpful to reduce things to a minimal toy problem. Consider the two following statements:
My thinking on the emergence of meaning follows from a simple observation: meaning drags world lines. The first statement above is less likely to be associated with action than the second. It is important to understand that general relativity (GR) only gives an account of particles in free fall, and quantum field theory (QFT) only really describes unitary evolution from defined initial conditions. (Measurement in QFT is really only a way to sample an instantaneous state, from which unitary evolution continues undeterred, the quantum Zeno effect notwithstanding.) But in reality there is elaborate and rich emergent structure in our trajectories and interactions which, while it arises from the fundamental laws, is not found per se in their formulation.
Concretely, what I am proposing is that the difference between the unperturbed, free-fall trajectory (the “prior trajectory”) of an object through spacetime and its actual path (the “posterior trajectory”) is meaning.
The logical implication here is that meaning is a force. It’s not a mechanical force with units of newtons, but a different kind of force, maybe with units along the lines of $\left[\textrm{energy}\right]\cdot\left[\textrm{time}\right]\cdot\left[\textrm{information}\right]^{-1}$ (“action per bit”).
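To sketch what such a quantity could look like (the symbols here are my own, and nothing about this is settled): let $S[x]$ denote the action of a trajectory $x$, let $x_{\textrm{prior}}$ and $x_{\textrm{post}}$ be the prior and posterior trajectories defined above, and let $I$ be the amount of information conveyed, in bits. Then something like

$$\Delta S = S[x_{\textrm{post}}] - S[x_{\textrm{prior}}], \qquad M \sim \frac{\Delta S}{I}$$

carries exactly those units: action per bit.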
To be clear, in this interpretation, meaning is an entirely emergent phenomenon; I am not saying that there is some fundamental force that actually “drags world lines,” but instead this is just how it appears to us given coarse-graining over the microscopic reasons for the observed behavior. (There is a similarity here to how Rovelli describes the related but distinct concept of agency as “the disregarding of physical links.”) Importantly, due to this coarse-graining, this formulation of meaning provides something like an expected value rather than a deterministic, mechanical implication.
This understanding of meaning is not specific to speech and applies to any interaction between agents. For example, murdering someone has a very large impact on that person’s future action: it sets it to zero! A large delta in action caused by a small amount of communication (in the broad, physical sense) should intuitively imply a large amount of meaning. Conversely, the difference between letting a pen fall to the ground versus catching it probably doesn’t lead to an interesting change in the long-term trajectories of any agents.
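In the same sketch notation as above (still just an illustration, not a derivation): for the murder example, the victim’s posterior action over the remaining interval collapses to roughly zero, so $\left|\Delta S\right|$ is on the order of the entire action they would otherwise have accumulated, while the $I$ needed to bring it about can be tiny, making $\left|M\right| \sim \left|\Delta S\right|/I$ enormous. For the pen, $\left|\Delta S\right|$ between the caught and uncaught trajectories is small, and $\left|M\right|$ is negligible.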
Of course, given how chaotic the world is, the pen falling to the floor versus being caught might actually lead to significantly altered trajectories.
Imagine that for every event an agent can assign a valence, a measure of how good or bad that event is, with neutral at the zero point. The valence of something is not its meaning, but simply an agent-specific judgment about its desirability. We can integrate valence over a world line to calculate the net valence of a trajectory. Thus, if this theory ends up being useful, we should check that it distinguishes “chaotic impact” from “meaningful impact”: for the former, the valence path integral over the difference between the posterior and prior trajectories should be zero or vanishing; for the latter, it should be comparatively large and proportional to the conveyed meaning.
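As one concrete reading of that test (again in my own sketch notation): write $v$ for the agent’s valence and compare its integral along the two trajectories,

$$\Delta V = \int_{x_{\textrm{post}}} v \, d\tau \;-\; \int_{x_{\textrm{prior}}} v \, d\tau.$$

Chaotic impact would then show $\Delta V$ near zero, or vanishing in expectation under the coarse-graining, while meaningful impact would show $\left|\Delta V\right|$ comparatively large and roughly proportional to the conveyed meaning.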
Though this formulation of meaning gives an important physical grounding, it is still kind of unsatisfying, since it throws away most of the structure we think of when we say “meaning.” Shannon entropy does the same thing, of course, and it is still a very valuable concept, since you can often go surprisingly far with tools that only let you write down constraints like symmetries. But like entropy, it is probably a tool whose usefulness is limited to cases where you’re content to throw away most of your priors.
This interpretation of meaning is compatible with Rovelli’s “Meaning = Information + Evolution”, but it’s broader in scope and doesn’t require an appeal to Darwinian evolution in particular, which feels somewhat arbitrary. (Surely systems that were not evolved, but otherwise meet all relevant to-be-understood criteria for agency, should be capable of recognizing meaning, no? Also see this commentary by Krzanowski.)
Developing a physical theory of meaning is valuable both as a form of symmetry breaking over arguments that suffer from false equivalence but are otherwise hard to pin down, and, more essentially, because the slipperiness of “meaning” has allowed it to occupy a particularly insidious place in a lot of reasoning about intelligence, consciousness, and agency. By shining a light on meaning, we can evaporate the magical thinking and allow further progress on a range of derivative and related problems. There is a lot of work to do to develop such a theory, but it is valuable work, and hopefully some smart people will choose to pursue it.