In the last issue, Terry Speed wrote about n vs n−1. So what links that topic to Walter Shewhart and control charts? 0.886!
Sometimes mysterious forces seem to enter my life. In the last issue I wrote about n vs n−1, and noted in passing that while using $s^2$ with divisor n−1 gives us an unbiased estimate of $σ^2$, its square root $s$ leads to a biased estimate of $σ$. I didn’t know then of the control chart users’ interest in this issue. Later, an email from a reader led me to think about Walter A. Shewhart (1891–1967), our institute’s second president, and a statistician I have long admired. His two books Economic Control of Quality of Manufactured Product (1931) and Statistical Method from the Viewpoint of Quality Control (1939) are arguably the most original and enthralling books about statistics ever written. Both remain readily available and I commend them to you.
Shewhart is of course famous for having invented control charts (for which the estimation of $σ$ is central), but reading these books will show you how much more he contributed. He is one of the few writers who pay very close attention to the gap between mathematical statistical theory and the real world. Also, he was a deep thinker. One of my favourite paragraphs in the 1931 book is this: “Perhaps of even greater interest, however, is the consideration of what we mean by judgment and common sense—two things which we find we must use so often in experimental work of all kinds.” As you read it, you can hear him thinking, “If only we could define and teach these qualities!”
While I was overcoming my inability to write this column, I began to read a paper discussing the reproducibility of sequence-based gene expression measurements from single cells. In the methods section I came across the following: “We calculated absolute difference in log10 expression values and s.d. by multiplying mean variation in a bin with 0.886.” This didn’t convey to me exactly what they had done to estimate $σ$, so I did what any modern person would do: I entered the terms “0.886 standard deviation” into Google and hit return. The top hit was an article about the use of control charts in the production of concrete, and in it I saw the usual (biased) definition of sample standard deviation $s$, followed by the statement, “Standard deviation = 0.886 × mean range of successive pairs of results.” There was no more explanation, so I went on to look at the second hit. There I saw a discussion of the control chart constant $c_4$, and the averaging of estimates of standard deviations based on $n=3$ observations. All of this looked very promising, as I could understand the reasoning, but there was just one catch: whereas my first hit used 0.886 to multiply something to get an estimate of $σ$, the second hit used the factor 0.886 to divide something to get an unbiased estimate of $σ$.
My conclusion so far: the topic of last issue’s column, my Shewhart prompt, and the paper I am reading on gene expression have all converged on 0.886—in more than one way.
The full story doesn’t take long to explain. Shewhart (who, along with Deming, always used the divisor $n$ in his $s^2$) introduced the numbers $c_2 = c_2(n)$ in his 1931 book to de-bias the estimates $s$ of $σ$ for samples of size $n$. Either Shewhart or one of his successors later introduced $c_4 = c_4(n)$ to do the same thing when the divisor in $s^2$ is $n-1$. The reason for their wanting to do so was simple and compelling: to use unweighted averages to combine independent estimates of $σ$ based on small-sample $s$’s, arguably one of the few contexts where unbiasedness really matters. And $c_4(3) = 0.886$.
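For readers who want to check the arithmetic (these are the standard expressions for the control chart constants, not anything taken from the papers or web pages above): if $s^2$ has divisor $n-1$ and the observations are normal, then $E(s) = c_4(n)\,\sigma$, where

$$ c_4(n) = \sqrt{\frac{2}{n-1}}\,\frac{\Gamma(n/2)}{\Gamma((n-1)/2)}, \qquad \text{so} \qquad c_4(3) = \frac{\Gamma(3/2)}{\Gamma(1)} = \tfrac{1}{2}\sqrt{\pi} \approx 0.886. $$

Shewhart’s $c_2(n)$, for the divisor-$n$ version of $s^2$, differs only in the leading factor: $c_2(n) = \sqrt{2/n}\,\Gamma(n/2)/\Gamma((n-1)/2)$. Dividing an average of small-sample $s$’s by the appropriate constant is precisely what my second Google hit was doing.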
Independently, during World War II, someone at the Marconi-Osram Valve Company in the UK (and probably elsewhere) devised the estimator of $σ$ presented in my first Google hit, based on averaging successive pair differences $|x_i - x_{i-1}|$. It was used in control chart work at that time, and remains important in XmR control charts. You can suppose that $x_1$ and $x_2$ are iid $N(μ, σ^2)$ to see where 0.886 comes from.
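Spelling that out (a one-line calculation; the details are mine, not the web page’s): if $x_1$ and $x_2$ are iid $N(\mu, \sigma^2)$, then $x_1 - x_2 \sim N(0, 2\sigma^2)$, and the mean absolute value of a $N(0, \tau^2)$ variable is $\tau\sqrt{2/\pi}$. Hence

$$ E\,|x_1 - x_2| = \sqrt{2}\,\sigma\sqrt{\frac{2}{\pi}} = \frac{2\sigma}{\sqrt{\pi}}, \qquad \text{that is,} \qquad \sigma = \tfrac{1}{2}\sqrt{\pi}\;E\,|x_1 - x_2| \approx 0.886 \times \text{mean successive difference}. $$

This is exactly the concrete producers’ rule from my first hit: standard deviation = 0.886 × mean range of successive pairs.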
Two decades after this, as well as a century earlier, people discussed an estimator of $σ$ based on the average of all (not just successive) pairwise differences $|x_i - x_j|$. This is usually called Gini’s mean difference. In the nineteenth and early twentieth centuries, it was probably just a curiosity, but in the 1960s it was seen as an estimator of $σ$ with only a small loss of efficiency under normality and some robustness against outliers.
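In symbols (my notation, not any particular author’s), Gini’s mean difference and the matching estimate of $\sigma$ are

$$ G = \binom{n}{2}^{-1} \sum_{i<j} |x_i - x_j|, \qquad \hat{\sigma} = \tfrac{1}{2}\sqrt{\pi}\,G, $$

since under normality each pair $(x_i, x_j)$ behaves like the $(x_1, x_2)$ above, so that $E(G) = 2\sigma/\sqrt{\pi}$: the same constant yet again.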
I like the way 0.886 links different aspects of our field, and that some biologists brought Gini’s mean difference to my attention as a robust estimator of $σ$. Long live $\tfrac{1}{2}\sqrt{\pi}$!
Oh dear. What would Walter Shewhart, our institute’s second president and the inventor of the control chart, think?