Following my introduction to statistics over fifty years ago, I noticed that from time to time my teachers seemed to lose it, and us, and “go off with the fairies”. Those who insist on clarifying the distinction in my title hit this very early on. They want to introduce the familiar $s^2$, and they want to do it right. If the price to pay for this is that we must leave the world of rational thought, so be it, they reason. In her lovely 1940 paper on degrees of freedom (d.f.), cited in the excellent Wikipedia article on the same topic, Helen M. Walker (1891–1983) wrote, “this concept often seems almost mystical, with no practical meaning.” Sadly familiar to so many of us.
Can we look to statistical theory to help in our explanation of the use of $n-1$? If what we want is unbiasedness—of our estimate of $\sigma^2$, though not of our estimate of $\sigma$—then we can justify the $n-1$. That’s not too hard to explain, but is it worth the effort? If we are willing to introduce maximum likelihood estimation (under normality), we can justify the $n$, but that’s even more effort, and, I think, beyond my reader’s grand-daughter. We can even justify $n+1$ if we seek a minimum mean square error estimate of $\sigma^2$ (within a certain class). My conclusion is that at best, invoking theory leads to a draw between $n$ and $n-1$. You pays yer money, and you takes yer choice.
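The three-way contest above is easy to see numerically. The sketch below (my own illustration, with an arbitrary choice of $n=5$ and $\sigma^2=1$) draws many normal samples and compares the divisors $n-1$, $n$, and $n+1$: the first gives an unbiased estimate of $\sigma^2$, the second is the (downward-biased) maximum likelihood estimate, and the third achieves the smallest mean square error of the three.

```python
# Monte Carlo comparison of divisors n-1, n, n+1 for estimating
# sigma^2 from normal samples. Illustrative sketch: n = 5 and
# sigma^2 = 1 are arbitrary choices, not from the original text.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 5, 1.0, 200_000

# reps samples of size n from N(0, 1)
x = rng.standard_normal((reps, n))

# sum of squared deviations about each sample mean
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for c in (n - 1, n, n + 1):
    est = ss / c
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(f"divisor {c}: bias {bias:+.3f}, MSE {mse:.3f}")
```

Dividing by $n-1$ gives bias near zero; dividing by $n$ or $n+1$ underestimates $\sigma^2$ on average, yet the mean square error shrinks as the divisor grows, with $n+1$ the winner. Under normality the theoretical MSEs here are $0.5$, $0.36$, and $1/3$ respectively, so the draw between unbiasedness and accuracy is genuine.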