We introduce a new section in which IMS members are invited to propose new research ideas or directions. These need not be formally or provably new; rather, this is an opportunity to highlight an idea’s potential benefit to the research community. The purpose is twofold: to gauge the community’s interest before investing more time and effort in an idea, and to find collaborators when others become interested or have related ideas of their own. We encourage readers to respond with critical comments and suggestions, and to write in and share their own ideas.

Starting the series is Alexander Y. Mitrophanov. Alex is a Senior Statistician at the Frederick National Laboratory for Cancer Research, National Institutes of Health, USA. His email is alex.mitrophanov@nih.gov. 

 

Quantitative Perturbation Theory for Stochastic Processes

Perturbation theory for Markov chains, centered largely on the analysis of perturbation bounds and asymptotic expansions, is a rapidly developing research direction in probability theory and its applications [1-3]. While the conceptual foundation for this line of research was laid in the 1960s, only in the last two decades has it attained real visibility and recognition in the broader community of quantitative researchers. Many of the cutting-edge theoretical developments in this area are motivated by the need to construct and assess approximation-based approaches to Markov chain Monte Carlo (MCMC) computation, an essential tool in Bayesian statistics. Beyond computational statistics, Markov-chain perturbation results and their generalizations are applied in fields as diverse as machine learning, queueing theory, stochastic chemical kinetics, genetics, quantum physics, and climate science.

When I started doing research in this area some 20 years ago (e.g., [4]), I could not imagine this diversity of possible applications, and my knowledge of MCMC was at the level of a self-taught graduate student. I knew something about stochastic chemical kinetics, but most of my inspiration came from general ideas about the development of mathematics. Having been exposed to a wide variety of mathematical subjects (and being preoccupied with choosing a topic for my PhD thesis), I formulated—just for my own self-guidance—a completely non-rigorous, naïve, yet fruitful concept: “Every type of mathematical object needs a quantitative perturbation theory. If that theory does not exist, it should be developed. If it does exist, it should be improved.” I was particularly fond of the notion that a perturbation bound allows one to quantify the change in an equation’s solution given the magnitude of perturbations in the equation’s parameters, and that such bounds might even tell us something valuable about the properties of the equation itself. I found powerful examples of this inequality-based approach in numerical linear algebra—the widely used matrix condition numbers and all that. In fact, perturbation bounds lie at the very foundation of modern mathematics: the ubiquitous Lipschitz and Hölder continuity conditions can in principle be viewed as a kind of uniform perturbation bound for the functions studied in real analysis and approximation theory.
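
As a small illustration of this inequality-based viewpoint, the sketch below checks the classical condition-number bound for a linear system Ax = b: if the right-hand side is perturbed to b + δb, the relative change in the solution is at most κ(A) ||δb|| / ||b||. The matrix and perturbation are hypothetical, chosen only for demonstration, and Python/NumPy is used purely as a convenient calculator.

```python
import numpy as np

# A deliberately ill-conditioned 2x2 system (hypothetical, for illustration only).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)            # unperturbed solution
kappa = np.linalg.cond(A)            # condition number kappa(A) in the 2-norm

rng = np.random.default_rng(0)
db = 1e-6 * rng.standard_normal(2)   # small perturbation of the data b
x_pert = np.linalg.solve(A, b + db)  # perturbed solution

rel_change = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
bound = kappa * np.linalg.norm(db) / np.linalg.norm(b)

# The classical bound guarantees rel_change <= bound.
print(f"kappa(A)           = {kappa:.3e}")
print(f"relative change    = {rel_change:.3e}")
print(f"perturbation bound = {bound:.3e}")
```

In the same spirit, a Markov-chain perturbation bound can be read as a “condition number” statement for the chain’s stationary distribution.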

Returning to the world of probability, we note that many of its results exist in the form of limit theorems. Such theorems can be given a precise quantitative meaning if we use perturbation theory to replace the limit statement (or an asymptotic rate-of-convergence formula) with a tight—and, preferably, explicitly computable—nearness bound. This use of inequality-based perturbation theory complements its more obvious use in uncertainty quantification (which, taken broadly, is a vast research field of its own that has exploded over the past decade).
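
To make the “limit theorem to nearness bound” idea concrete, here is a minimal simulation sketch built around one classical example: the Berry-Esseen refinement of the central limit theorem, which bounds the Kolmogorov distance between a standardized sum of n i.i.d. variables and the standard normal law by a constant times n^{-1/2}. The particular choices below (exponential summands, the sample sizes, the Monte Carlo settings) are hypothetical and serve only to exhibit the n^{-1/2} decay; this is an illustration of the nearness-bound idea, not a perturbation-theoretic derivation.

```python
import numpy as np
from scipy import stats

# Empirical Kolmogorov distance between the standardized sum of n i.i.d.
# Exponential(1) variables and N(0, 1). A Berry-Esseen-type bound says this
# distance is at most C / sqrt(n) for an absolute constant C, turning the
# central limit theorem into an explicit, computable nearness bound.
rng = np.random.default_rng(1)

def ks_distance_to_normal(n, n_rep=20_000):
    sums = rng.exponential(scale=1.0, size=(n_rep, n)).sum(axis=1)
    z = (sums - n) / np.sqrt(n)          # Exp(1) sums have mean n, variance n
    return stats.kstest(z, "norm").statistic

for n in (4, 16, 64, 256):
    d = ks_distance_to_normal(n)
    # sqrt(n) * distance should stay roughly constant if the rate is n^{-1/2}
    print(f"n = {n:4d}   KS distance = {d:.4f}   sqrt(n) * distance = {np.sqrt(n) * d:.3f}")
```

The empirical distance is of course itself a Monte Carlo estimate; the point is only that explicit, non-asymptotic bounds of this type exist and can be checked numerically.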

Going beyond Markov chains, a natural next step is to extend these perturbation results to other classes of stochastic processes. One remarkable finding is the quantitative connection between the robustness of a Markov chain under perturbations and its rate of convergence to steady state. (A brief survey of related stability results, originally targeted at mathematical physicists, is available online: http://alexmitr.com/talk_DDE2018_Mitrophanov_FIN_post_sm.pdf)
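
The following minimal sketch illustrates this connection numerically rather than stating any of the formal bounds: two hypothetical two-state chains share the same stationary distribution but mix at very different speeds, and the same small perturbation of the transition matrix moves the slowly mixing chain’s stationary distribution far more. All numbers are chosen purely for demonstration.

```python
import numpy as np

# Two 2-state chains with the same stationary distribution (1/2, 1/2) but
# different mixing speeds, subjected to the same perturbation E of the
# transition matrix. The slowly mixing chain is far less robust.

def stationary(P):
    """Stationary distribution of a finite transition matrix P."""
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return v / v.sum()

def two_state_chain(a):
    """Symmetric 2-state chain that switches state with probability a."""
    return np.array([[1 - a, a],
                     [a, 1 - a]])

eps = 0.01
E = np.array([[eps, -eps],    # rows of E sum to zero, and entries of P + E
              [0.0,  0.0]])   # stay in [0, 1] for the chains used below

for a, label in ((0.45, "fast mixing"), (0.02, "slow mixing")):
    P = two_state_chain(a)
    pi = stationary(P)
    pi_pert = stationary(P + E)
    tv = 0.5 * np.abs(pi - pi_pert).sum()   # total variation distance
    slem = np.abs(1 - 2 * a)                # second-largest eigenvalue modulus
    print(f"{label}: SLEM = {slem:.2f}, TV(pi, pi_perturbed) = {tv:.4f}")
```

Perturbation bounds of the kind surveyed in the talk linked above make this trade-off precise by relating the worst-case shift of the stationary distribution to the size of the perturbation and the chain’s convergence rate.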

It would be interesting to find out to what extent this type of connection, or a similar one, holds for other classes of ergodic stochastic processes. Naturally, for different classes of non-Markov processes, we are also interested in perturbation results unique to each specific class.

At a “more quantitative” level, the race is always on to improve the tightness of perturbation bounds (even in the now-classic domain of Markov chains, with a particular emphasis on infinite state spaces). Furthermore, for a complete localization of perturbed solutions, we need both upper and lower bounds on the perturbation magnitude. However, all the Markov-chain perturbation bounds I know of are in fact upper bounds, and obtaining informative lower bounds might require completely new ideas and techniques. Finally, we can expect that new and exciting applications will continue to drive theoretical developments in perturbation theory and the related fields of uncertainty quantification and sensitivity analysis. At the same time, it might also happen that, if we take care of the theory, applications will just take care of themselves.

 

References

1. Negrea J., Rosenthal J. S. (2021) Approximations of geometrically ergodic reversible Markov chains. Adv Appl Probab 53: 981-1022.

2. Levi E., Craiu R. V. (2022) Finding our way in the dark: approximate MCMC for approximate Bayesian methods. Bayesian Analysis 17: 193-221.

3. Medina-Aguayo F., Rudolf D., Schweizer N. (2020) Perturbation bounds for Monte Carlo within Metropolis via restricted approximations. Stoch Proc Appl 130: 2200-2227.

4. Mitrophanov A. Y. (2003) Stability and exponential convergence of continuous-time Markov chains. J Appl Probab 40: 970-979.

 

 

Do you have constructively critical comments or suggestions about this research area? Respond to Alex directly (alex.mitrophanov@nih.gov) or write via the Bulletin (bulletin@imstat.org).

If you’ve been wondering, “Is anyone else interested in finding out…?” this is your chance to ask! Consider this YOUR Invitation to Research.