Bulletin Editor Anirban DasGupta sets this problem. Student members of the IMS are invited to submit solutions (to bulletin@imstat.org with subject “Student Puzzle Corner”). The deadline is January 15, 2016.
It is the turn of a statistics problem this time. Abraham Wald opened up a major new framework for thinking about and doing statistical inference by proposing that methods (procedures) of inference be evaluated in terms of their risk functions, defined as the expected value of the incurred loss: the lower the risk, the better the procedure. Two difficulties many practicing statisticians face in following Wald's formulation are the need for a specific loss function, and the fact that the risk functions of intuitively reasonable procedures usually cross, so that universal risk optimality is not a practical possibility. The two most common optimality criteria for getting around this conundrum of crossing risk functions are the Bayes and the minimax criteria.

Although minimax procedures generally have Bayesian interpretations, the principle of minimaxity does not require the user to specify a prior distribution, only a loss function, and some (many?) find this attractive. That said, exact evaluation of a minimax procedure is usually difficult; there has to be a happy confluence of a friendly loss function and a great deal of mathematical structure in the probabilistic model for one to be able to find a minimax procedure exactly. Often the interest is more in the risk of the minimax procedure, as a guard against being too gutsy, than in the minimax procedure itself. And even when we can find the minimax procedure exactly, it may be sufficiently odd or strange that no one would actually want to use it: a famous example is the unique minimax estimator of the binomial parameter p under squared error loss, where most of us would rather use the MLE.
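For the record, that binomial example has a standard closed form. If $X \sim \mathrm{Bin}(n, p)$, the unique minimax estimator of $p$ under squared error loss is the Bayes estimator with respect to the $\mathrm{Beta}(\sqrt{n}/2,\, \sqrt{n}/2)$ prior,
$$\delta(X) = \frac{X + \sqrt{n}/2}{n + \sqrt{n}}, \qquad E_p\big[\delta(X) - p\big]^2 \equiv \frac{1}{4\,(1 + \sqrt{n})^2},$$
whose risk is constant in $p$; it shrinks so strongly toward $1/2$ that, for small and moderate $n$, most statisticians would indeed prefer the MLE $X/n$.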
Minimaxity is not just alive as a research theme; it is definitely kicking, and very well at that. It is here to stay as a trusted benchmark, a guardian of sense and sensibility. If you find minimaxity too timid as a criterion, it has been diluted in the form of penalized and restricted minimax procedures. But most of all, a lot of us do much of our research motivated by fun and curiosity, and minimaxity will give you a handful and more! We will consider a simply stated minimaxity problem in this issue. We could make it a lot harder, but that can wait for another day.
Before stating the problem, here are thirty references on minimaxity, arranged chronologically. If you do not like my choice of thirty references, I surely understand; I would expect disagreement on which thirty to cite: Hodges and Lehmann 1952, Efron and Morris 1976, Haff 1977, Bickel 1980, Pinsker 1980, Stein 1981, Ibragimov and Hasminskii 1981, Berger 1982, Assouad 1983, Speckman 1985, Rubin and DasGupta 1986, Ermakov 1990, Heckman and Woodroofe 1991, Lepskii 1991, Groeneboom and Wellner 1992, Fan 1993, Birgé and Massart 1995, Donoho and Johnstone 1996, Brown 1998, Cai 1999, Yang and Barron 1999, Devroye and Lugosi 2000, Nemirovski 2000, Strawderman 2000, Vidakovic 2000, Brown 2002, Kerkyacharian and Picard 2002, Tsybakov 2009, van de Geer 2009 (the paperback edition costs less), Korostelev and Korosteleva 2011.
And now, here is this issue’s exact problem:
Let $n \geq 1$ and $X_1, \ldots, X_n \stackrel{iid}{\sim} N(\mu, 1)$, where the mean $\mu$ is unknown and $-\infty < \mu < \infty$ is the parameter space. Suppose we wish to estimate $m = |\mu|$ under the squared error loss function $(a-m)^2$, with action space equal to the whole real line. Find explicitly a minimax estimator $\delta(X_1, \ldots, X_n)$ of $m$, its maximum risk $\sup_{\mu}\, E_\mu[\delta(X_1, \ldots, X_n) - m]^2$, and its minimum risk $\inf_{\mu}\, E_\mu[\delta(X_1, \ldots, X_n) - m]^2$. As a hint: to solve this problem, take seriously the estimator that makes intuitive sense.
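As a purely numerical companion to the problem (my own sketch, not part of the puzzle), here is a small Monte Carlo routine that estimates the risk $E_\mu[\delta - m]^2$ of any candidate estimator on a grid of $\mu$ values; the function name `mc_risk`, the candidate shown, and the grid are all my illustrative choices.

```python
import numpy as np

def mc_risk(delta, mu, n, reps=200_000, seed=0):
    """Monte Carlo estimate of E_mu[(delta(X_1,...,X_n) - |mu|)^2], X_i iid N(mu, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=mu, scale=1.0, size=(reps, n))   # reps independent samples of size n
    return float(np.mean((delta(x) - abs(mu)) ** 2))    # average squared error loss

n = 10
# Plug in any candidate estimator of m = |mu|; this one is built from the
# sample mean, in the spirit of the hint above.
candidate = lambda x: np.abs(x.mean(axis=1))
for mu in [0.0, 0.25, 0.5, 1.0, 2.0, 5.0]:
    print(f"mu = {mu:4.2f}   estimated risk = {mc_risk(candidate, mu, n):.4f}")
```

Scanning such output for the largest and smallest estimated risks over a wide grid of $\mu$ gives a numerical sanity check on one's conjectured $\sup_\mu$ and $\inf_\mu$ before attempting a proof.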