About the whole inherent tradeoffs thing. This is a useful organizing principle that applies across a wide swath of domains. To show that I’m not just jawboning, let me offer some specific examples. And not just fancifully oblique little second-person missives, but familiar examples from mathematics and the natural sciences presented in a systematic way. Specifically, I claim that for every inherent tradeoff I can identify three properties: a domain in which it appears, a specific tradeoff which can always be expressed as one thing versus something else, and a deeper principle upon which the tradeoff hinges. These can be presented in handy tabular format after which a discussion may follow. For example:
|Domain|Photography|
|---|---|
|Tradeoff|Brightness vs. Sharpness|
|Deeper Principle|Both are functions of exposure time|
On an aesthetic level, an image’s brightness and sharpness are independent qualities. It may be desirable to have a bright and sharp image, or a dark and sharp one, or a bright and blurry one, or any degree and combination thereof. However, in the physical world these properties are not independent, because both are functions of how long you leave the shutter open. A longer exposure time means more light hitting the film, but it also means more opportunity for the camera to jitter. Aesthetics must bow to physics.
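To make the coupling concrete, here is a deliberately crude sketch. Every constant is invented; the point is only that both qualities hang off the one knob you actually control.

```python
# Toy model of the exposure coupling. LIGHT_RATE and JITTER_RATE are
# made-up constants, not measurements of any real camera.
LIGHT_RATE = 100.0   # "units of light" reaching the film per second
JITTER_RATE = 0.8    # pixels of hand-shake drift per second

def exposure(shutter_seconds):
    """Both qualities are functions of the single knob we control."""
    brightness = LIGHT_RATE * shutter_seconds
    blur = JITTER_RATE * shutter_seconds
    return brightness, blur

for t in (0.01, 0.1, 1.0):
    brightness, blur = exposure(t)
    print(f"shutter {t:5.2f}s -> brightness {brightness:6.1f}, blur {blur:5.3f}px")
```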
That’s the idea. Now, in no particular order, here are other examples.
|Domain|Information retrieval|
|---|---|
|Tradeoff|Precision vs. Recall|
|Deeper Principle|Finite resources for examining results|
Ideally you’d comb through every possibly relevant hit returned by a database query, but there are only so many hours in the day. So it’s up to you how to spend your time. Is accuracy more important, or coverage? Precision measures the first: of the results you retrieved, what fraction were actually relevant? Recall measures the second: of all the relevant results out there, what fraction did you retrieve? Unlike the brightness/sharpness tradeoff, whose non-orthogonality is an undesirable consequence of the physics of picture-taking, precision vs. recall is an invention of computer scientists, an attempt to make a finite-resources problem more manageable by dichotomizing it.
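A minimal sketch of the dichotomy, using an entirely invented result list: each document carries a made-up retrieval score and a ground-truth relevance flag, and sweeping the score threshold trades one measure against the other.

```python
# (doc_id, retrieval_score, actually_relevant?) -- all values invented.
results = [
    ("d1", 0.95, True), ("d2", 0.90, True), ("d3", 0.85, False),
    ("d4", 0.70, True), ("d5", 0.60, False), ("d6", 0.50, False),
    ("d7", 0.40, True), ("d8", 0.20, False),
]
total_relevant = sum(1 for _, _, relevant in results if relevant)

def precision_recall(threshold):
    """Retrieve everything scoring at or above the threshold, then score it."""
    retrieved = [r for r in results if r[1] >= threshold]
    hits = sum(1 for _, _, relevant in retrieved if relevant)
    precision = hits / len(retrieved) if retrieved else 1.0
    recall = hits / total_relevant
    return precision, recall

for threshold in (0.9, 0.6, 0.3):
    p, r = precision_recall(threshold)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Lowering the threshold retrieves more of the relevant documents (recall climbs toward 1.0) while admitting more junk (precision falls), which is the dichotomy in miniature.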
|Domain|Computation|
|---|---|
|Tradeoff|Speed vs. Space|
|Deeper Principle|Electrons are much smaller than molecules|
This comes up all the time. You have a choice between two algorithms that achieve the same result. One of them calculates intermediate results every time they are needed. The other calculates the same results once, then stores them in random access memory for quick subsequent retrieval. The weasel word there is the “quick” part. As soon as you overflow the available memory (and you always will) the intermediate results get written out to a more permanent form of storage. As of this writing, that more permanent form consists of magnetized sections of a spinning disk. Thirty years ago it was magnetized sections of reel-to-reel tape, and five years from now it may be some other physical medium. Crucially, though, the random access memory involves marshaling electrons, which can transmit information at the speed of light but are tiny and damnably hard to corral, while the persistent memory involves rearranging the properties of big, baby-Huey molecules, which have the stolidity and heft of Easter Island statues. A whispered aside or a message chiseled in stone: you decide, depending on what you need.
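A minimal sketch of the two-algorithm choice, using the stock Fibonacci example (not from the text above, just the usual illustration): one version recomputes every intermediate result, the other spends memory on a cache to buy speed.

```python
import time
from functools import lru_cache

def fib_recompute(n):
    """Recalculates every intermediate result on demand: nothing stored
    beyond the call stack, but the running time grows exponentially."""
    return n if n < 2 else fib_recompute(n - 1) + fib_recompute(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Calculates each intermediate result once and stores it: linear
    time, at the cost of a cache that grows with n."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

for fn in (fib_recompute, fib_cached):
    start = time.perf_counter()
    value = fn(30)  # both return 832040; only the elapsed time differs
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__}(30) = {value}  ({elapsed:.4f}s)")
```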
|Domain|Signal processing|
|---|---|
|Tradeoff|Temporal vs. Spectral resolution|
|Deeper Principle|Repetition takes time|
Given a signal, you can take its Fourier transform to determine what frequency components are contributing to it. What you’d like is to calculate the frequency components at each infinitesimal time instant, but that’s not possible. Instead you have to pick a time window and calculate the frequency components contributing to the signal within that window. The bigger the window, the more fine-grained the spectral representation, but the fuzzier your sense of when anything happened. Choose too short a window and it’s all just mush. This makes sense. If you want to know the contribution of a sine wave that repeats every five milliseconds, you’re going to have to wait for more than five milliseconds. Be patient.
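A sketch of the window-size effect, with invented numbers: two tones five hertz apart, observed through windows of different lengths. The discrete Fourier transform’s bin width is the sample rate divided by the number of samples, so a tenth-of-a-second window simply cannot tell 100 Hz from 105 Hz.

```python
import cmath
import math

SAMPLE_RATE = 500  # Hz; all numbers here are invented for illustration

def resolved_peaks(window_seconds, f1=100.0, f2=105.0):
    """Observe two summed sine tones through a window of the given length
    and count the distinct peaks in the magnitude spectrum."""
    n = int(window_seconds * SAMPLE_RATE)
    signal = [math.sin(2 * math.pi * f1 * i / SAMPLE_RATE) +
              math.sin(2 * math.pi * f2 * i / SAMPLE_RATE) for i in range(n)]
    # Plain DFT over the non-negative frequency bins; bin width is
    # SAMPLE_RATE / n, so longer windows mean finer frequency resolution.
    mag = [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                   for i, s in enumerate(signal)))
           for k in range(n // 2 + 1)]
    floor = max(mag) / 2  # only count peaks taller than half the tallest
    return sum(1 for i in range(1, len(mag) - 1)
               if mag[i] > mag[i - 1] and mag[i] > mag[i + 1] and mag[i] > floor)

print(resolved_peaks(1.0))  # 1 s window, 1 Hz bins: the two tones separate
print(resolved_peaks(0.1))  # 0.1 s window, 10 Hz bins: they smear together
```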
|Domain|The Heisenberg Uncertainty Principle|
|---|---|
|Tradeoff|Momentum vs. Position|
|Deeper Principle|Honestly, I don’t know|
First, a pet peeve: the Heisenberg Uncertainty Principle does not state that the act of observing an event can change it. Anyone who watches reality television can tell you that. What the uncertainty principle claims is that there are certain pairs of conjugate physical quantities, momentum and position being the classic example, that cannot both be measured for the same entity with arbitrary accuracy at the same time. You can know the position of an electron to any degree of precision you desire, but only at the cost of uncertainty about how fast it’s moving. And vice versa. It’s pretty much exactly the same situation as temporal vs. spectral resolution in a signal. The reason why is that in quantum mechanics, a particle’s momentum-space wavefunction is the Fourier transform of its position-space wavefunction. What this means on a deeper level, I have no idea.
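The Fourier connection can at least be checked numerically. The sketch below (grid sizes and units invented, ħ set to 1) builds a Gaussian wave packet, takes a discrete Fourier transform as a stand-in for the momentum-space wavefunction, and measures the spread of each: squeeze the packet in position and it broadens in momentum, with the product of the two spreads pinned near the quantum-mechanical floor of 1/2.

```python
import cmath
import math

def spread_product(sigma, n=256, box=40.0):
    """RMS spread of a Gaussian wave packet in position, times the RMS
    spread of its Fourier transform (the momentum-space picture, with
    hbar = 1). Grid size and box length are arbitrary choices."""
    dx = box / n
    xs = [(j - n // 2) * dx for j in range(n)]
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in xs]

    def rms(values, weights):
        total = sum(weights)
        mean = sum(v * w for v, w in zip(values, weights)) / total
        return math.sqrt(sum((v - mean) ** 2 * w
                             for v, w in zip(values, weights)) / total)

    # Discrete stand-in for the continuous Fourier transform.
    ks, power = [], []
    for m in range(-n // 2, n // 2):
        k = 2 * math.pi * m / box
        amp = sum(p * cmath.exp(-1j * k * x) for p, x in zip(psi, xs)) * dx
        ks.append(k)
        power.append(abs(amp) ** 2)

    position_spread = rms(xs, [p * p for p in psi])
    momentum_spread = rms(ks, power)
    return position_spread * momentum_spread

# Narrow, medium, and wide packets: the product barely moves.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma = {sigma:3.1f}: spread product = {spread_product(sigma):.3f}")
```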