Rick Durrett writes:
Picture this: it was the ides of March, a sunny day in Durham, the day before the most frequent Google search from the Duke campus was “Where is Lehigh University?” I was having lunch with a job candidate in computer science and explaining the difficulties of proving results about the evolving voter model (PNAS 2012, issue 10) when my young friend said, “Why go to all of the trouble to prove theorems?” To this I gave the lame reply, “In the math department I don’t get any points for doing things by simulation,” but later I realized he had a point. Why wrestle with the details of a proof of the Riemann Hypothesis when numerical results strongly suggest that it is correct? Once the first million zeros are in the right place, it is paranoia to think the result is wrong.
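For readers who want to see what this numerical evidence looks like, here is a toy sketch (my illustration, using the mpmath library): it locates the first few nontrivial zeros by sign changes of the Riemann–Siegel Z function along the critical line. A genuine verification pairs such a search with a zero-counting argument to show that no zeros lie off the line; the step size and cutoff below are arbitrary choices.

```python
# Toy search for zeros of the Riemann zeta function on the critical line,
# via sign changes of the Riemann-Siegel Z function Z(t); a zero of Z at t
# corresponds to a zero of zeta at 1/2 + it.
from mpmath import mp, siegelz, findroot

mp.dps = 25                              # 25 digits of working precision
t, prev, zeros = 1.0, siegelz(1.0), []
while len(zeros) < 10:                   # find the first ten zeros
    t_next = t + 0.1                     # step small enough not to skip zeros here
    cur = siegelz(t_next)
    if prev * cur < 0:                   # sign change => a zero in (t, t_next)
        zeros.append(findroot(siegelz, (t, t_next), solver='bisect'))
    t, prev = t_next, cur
print(zeros)                             # 14.134725..., 21.022039..., 25.010857..., ...
```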
In 1960 Harris proved that the critical point for bond percolation on the two-dimensional square lattice is ≥ ½. Physicists immediately turned this into an =, a fact that Kesten proved 20 years later. Meanwhile, physicists computed the values of a variety of critical exponents and gave their values as rational numbers. With the arrival of SLE, some of these exponents have been verified rigorously on the triangular lattice, a mere forty years later, though concluding that the exponents on the square and triangular lattices are the same still relies on the physics folklore of universality.
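It is easy to generate the kind of numerical evidence that convinced the physicists. The sketch below (a minimal Monte Carlo of my own devising; the lattice size, densities, and trial counts are arbitrary) estimates the probability of a left-to-right open crossing of an L × L box, which jumps from near 0 to near 1 as p passes ½.

```python
# Monte Carlo for bond percolation on a square grid: each bond is open with
# probability p, and we test for a left-to-right crossing using union-find.
import random

def crossing(L, p, rng):
    parent = list(range(L * L))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for i in range(L):
        for j in range(L):
            v = i * L + j
            if j + 1 < L and rng.random() < p:   # open bond to the right
                union(v, v + 1)
            if i + 1 < L and rng.random() < p:   # open bond downward
                union(v, v + L)
    left = {find(i * L) for i in range(L)}       # clusters touching the left edge
    return any(find(i * L + L - 1) in left for i in range(L))

rng = random.Random(0)
L, trials = 64, 200
for p in (0.40, 0.50, 0.60):
    hits = sum(crossing(L, p, rng) for _ in range(trials))
    print(p, hits / trials)   # low, about one half, high; sharpening as L grows
```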
The mention of SLE brings up another answer: sometimes the process of finding the proof leads to a very interesting and beautiful mathematical object. However, this is a question of taste, akin to asking: “Is Mussorgsky’s Pictures at an Exhibition better than Skid Row’s iconic album Slave to the Grind?”
One of the reasons for proving results is that physicists sometimes get the answer wrong, but such examples are few and far between. At one point, numerical experiments for bootstrap percolation in d=3 seemed to indicate that it had a positive critical value, because the wrong functional form was used to extrapolate to the limit. However, such situations in the physics literature usually correct themselves quickly. While I was at UCLA, Provost Orbach and a friend made the Alexander–Orbach conjecture, a new relationship between critical exponents. A short time later, four articles disproving the conjecture were published in Volume 30, issue 7 of Physical Review B.
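To see how a simulation can mislead here, consider a toy version in d=2, where Holroyd later proved the sharp threshold p_c(L) ≈ π²/(18 log L). At any size one can actually simulate, the apparent critical density sits comfortably above zero and creeps downward only logarithmically, so an extrapolation with the wrong functional form points to a positive limit. The grid sizes, densities, and trial counts below are arbitrary choices of mine.

```python
# Two-neighbor bootstrap percolation on an L x L grid: sites start occupied
# with probability p, and a vacant site becomes occupied once it has at
# least two occupied neighbors. We estimate the chance the whole grid fills.
import random
from collections import deque

def fills_up(L, p, rng):
    occ = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    nbrs = [[0] * L for _ in range(L)]       # occupied-neighbor counts
    q = deque((i, j) for i in range(L) for j in range(L) if occ[i][j])
    n_occ = len(q)
    while q:
        i, j = q.popleft()
        for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= a < L and 0 <= b < L and not occ[a][b]:
                nbrs[a][b] += 1
                if nbrs[a][b] >= 2:          # the bootstrap rule
                    occ[a][b] = True
                    n_occ += 1
                    q.append((a, b))
    return n_occ == L * L

rng = random.Random(1)
for L in (32, 64, 128):
    probs = [sum(fills_up(L, p, rng) for _ in range(50)) / 50
             for p in (0.04, 0.06, 0.08)]
    print(L, probs)   # the apparent threshold drifts down only very slowly in L
```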
While physicists may be reliable within their own field, their calculations are less trustworthy in the parts of biology that they have invaded. My favorite recent example is the work of Martens and Hallatschek published in Genetics 189, 1045–1060, and the companion paper with Kostadinov and Maley in the New Journal of Physics 13, paper 115014. This work attracted my attention because it studies the accumulation of mutations in a spatial Moran model. Their main result is that the rate of adaptation saturates when the linear habitat size exceeds a characteristic length scale Lc. However, much of the analysis in the paper is flawed. They compute the speed of advance of an advantageous mutation by using a result for Fisher’s partial differential equation, without realizing that this is terribly wrong in d=1 and is missing a factor of log(1/s) in d=2. They derive formulas for the “fixation time,” which is never defined precisely. On page 1056, they give a heuristic argument for why this fixation time is of order (L/Lc)^{3/2} when L is much larger than Lc. One might naively think that the boundaries between the clones perform coalescing random walks, but the random fitness advances destroy this picture. The two physicists explain the scaling with the remark that the boundaries are in the Kardar–Parisi–Zhang universality class. I guess the proof wouldn’t fit in the margin, but they could at least have followed in Elaine Benes’ footsteps and said yada yada polynuclear growth.

I won’t go on about the many errors in these papers: this is not the time or place. Besides, it was nice of them to give me something to work on. It is less nice of them to fail to give appropriate references to the literature. Now, I wouldn’t expect them to cite the work of Bramson and Griffeath from the early 1980s (which has been cited 57 times), but the paper by Williams and Bjerknes that inspired them has been cited 118 times. It is hardly excusable to leave this out and to cite unimportant papers from the physics literature rather than the more interesting work that Komarova did at about the same time.
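Returning to the model: for concreteness, here is a one-dimensional caricature of the kind of spatial Moran model at issue (the dynamics and parameter values are my choices, not the authors’). Individuals live on a ring of L sites; at each step a uniformly chosen individual dies and is replaced by a copy of a neighbor chosen with probability proportional to its fitness (1+s)^k, where k is its number of mutations, and new mutations arrive at rate μ per birth. The mean number of mutations fixed per generation is the “rate of adaptation” whose dependence on L is in question.

```python
# A 1-d biased-voter caricature of a spatial Moran model with accumulating
# advantageous mutations (a sketch, not Martens and Hallatschek's model).
import random

def rate_of_adaptation(L, s=0.05, mu=1e-3, gens=2000, seed=2):
    rng = random.Random(seed)
    k = [0] * L                              # mutation count at each site
    for _ in range(gens * L):                # one generation = L replacements
        i = rng.randrange(L)                 # site whose occupant dies
        kl, kr = k[(i - 1) % L], k[(i + 1) % L]
        wl, wr = (1 + s) ** kl, (1 + s) ** kr
        child = kl if rng.random() < wl / (wl + wr) else kr  # fitness-biased parent
        if rng.random() < mu:                # mutation at birth, advantage s
            child += 1
        k[i] = child
    return sum(k) / L / gens                 # mean mutations fixed per generation

for L in (50, 100, 200, 400):                # how does the rate scale with L?
    print(L, rate_of_adaptation(L))
```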
In the age of computers, the lack of scholarship just described is inexcusable: all you have to do is type something like “stochastic spatial cancer model” into Google to find the earlier references. Perhaps the authors were following Feynman’s motto: “If all of mathematics disappeared, it would set physics back one week.”
Returning to my theme: Why do we prove theorems? We do it to make sure the results are right, and in order to avoid polluting the literature with half-truths. In addition, the construction of a proof forces us to identify ALL of the mechanisms at work in the problem. IMHO that provides more insight than refining your heuristic arguments until they fail to be contradicted by the results of simulation.
Finally, while the questions of Fermat and Riemann are true–false, in some cases (e.g., a central limit theorem) results hold only under some conditions, which proofs identify. In the words of a colleague: “Some of the conditions have to do with the state of nature (e.g., do the data have heavy tails? are they dependent?), but some have to do with choices made in the analysis: an asymptotic result in nonparametric function estimation will only be true under some conditions on the bandwidth, which is the statistician’s choice, so the theorem guides the practitioner!”
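The bandwidth point is easy to see in a toy kernel density estimation experiment (the example is mine). Consistency of the estimate at a point requires the statistician to choose h so that h → 0 and nh → ∞; the classical choice h = n^(−1/5) works, while a fixed h leaves a persistent bias and h = 1/n never averages enough data.

```python
# Gaussian kernel density estimation at x = 0 for standard normal data,
# comparing three bandwidth choices as the sample size n grows.
import math
import random

def kde_at(x, data, h):
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (n * h * math.sqrt(2 * math.pi))

truth = 1 / math.sqrt(2 * math.pi)           # N(0,1) density at 0
rng = random.Random(3)
for n in (100, 1000, 10000):
    data = [rng.gauss(0, 1) for _ in range(n)]
    for name, h in (("h = n^(-1/5)", n ** -0.2), ("h = 1", 1.0), ("h = 1/n", 1.0 / n)):
        print(n, name, round(abs(kde_at(0.0, data, h) - truth), 4))
# only the shrinking-but-not-too-fast bandwidth drives the error to zero
```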