Regina Liu is a Distinguished Professor in the Department of Statistics at Rutgers, The State University of New Jersey. She received her PhD in statistics from Columbia University. Regina has made seminal contributions to several research areas. For example, she is credited with developing the critical foundation for the now-vibrant field of data depth. She has also contributed several fundamental works to resampling, including the now widely used moving-blocks bootstrap and moving-blocks jackknife methods for resampling dependent data, as well as other bootstrap methods for non-i.i.d. models. More recently, with her colleague Minge Xie and other collaborators, she has been pursuing a research program of distributional inference using the concept of “confidence distributions,” showing the confidence distribution to be a powerful, all-purpose analysis tool for quantifying uncertainty, for combining inferences from different paradigms or from diverse data sources, and, potentially, for providing a unifying framework that directly connects and compares existing statistical paradigms. Regina is an elected fellow of the ASA and the IMS, and she was President of the IMS in 2020–21. Among other distinctions, she received the 2011 Stieltjes Professorship from the Thomas Stieltjes Institute for Mathematics in The Netherlands and the 2021 ASA Noether Distinguished Scholar Award, and she gave the 2024 COPSS E.L. Scott Award Lecture at last year’s JSM.

This 2025 IMS Neyman Lecture will be given at JSM Nashville (ww2.amstat.org/meetings/jsm/2025/index.cfm) on Monday, August 4, at 2:00pm.

Fusion Learning: Combining Complex Inferences from Diverse Data Sources

Modern data acquisition technology has greatly increased the accessibility of complex inferences, based on summary statistics or sample data, from diverse data sources. Fusion learning refers to combining complex inferences from multiple sources or studies to make a more effective overall inference for the target parameters. We focus on three tasks: (1) whether, and when, to combine inferences; (2) how to combine inferences efficiently; and (3) how to combine inferences to enhance an individual study, an approach termed i-Fusion.
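To fix ideas on task (2), one widely used general recipe from the confidence-distribution literature (Singh, Xie and Strawderman, 2005) combines the individual confidence distributions H_1(θ), …, H_K(θ) from K independent studies through a monotone combining function:

\[
H^{(c)}(\theta) \;=\; G_c\big\{\, g_c\big(H_1(\theta), \ldots, H_K(\theta)\big) \,\big\},
\qquad
G_c(t) \;=\; \Pr\big\{\, g_c(U_1, \ldots, U_K) \le t \,\big\},
\]

where U_1, …, U_K are independent Uniform(0,1) random variables; for instance, taking g_c(u_1, …, u_K) = Σ_k Φ^{-1}(u_k) gives a normal-scores combination. This recipe is shown only for orientation; the lecture’s depth-CD framework extends such combinations to multi-parameter, nonparametric settings.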

We present a general framework for nonparametric and efficient fusion learning for inference on multiple parameters, which may be correlated. The main tool underlying this framework is the new notion of depth confidence distribution (depth-CD), developed by combining data depth, the bootstrap, and confidence distributions. We show that a depth-CD is an omnibus form of confidence region whose level-set contours shrink toward the true parameter value, and thus is an all-encompassing inferential tool.
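To make the construction concrete, the following is a minimal sketch, in Python, of how depth-based confidence regions can be read off from bootstrap replicates of a parameter estimate. It uses the sample mean and Mahalanobis depth purely as simple stand-ins for the estimator and depth notion; all names and settings here are illustrative assumptions, not the implementation behind the lecture.

# Illustrative sketch (not the lecturer's implementation): depth-based
# confidence regions from bootstrap replicates of a bivariate estimate,
# with Mahalanobis depth standing in for more robust depth notions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n bivariate observations; the target is the mean vector.
n = 200
data = rng.multivariate_normal([1.0, 2.0], [[1.0, 0.5], [0.5, 2.0]], size=n)

# Step 1: bootstrap the multi-parameter estimate (here the sample mean).
B = 2000
boot = np.array([data[rng.integers(0, n, n)].mean(axis=0) for _ in range(B)])

# Step 2: data depth of each replicate within the bootstrap cloud.
# Mahalanobis depth: D(x) = 1 / (1 + (x - mu)' S^{-1} (x - mu)).
mu = boot.mean(axis=0)
S_inv = np.linalg.inv(np.cov(boot, rowvar=False))
dev = boot - mu
depth = 1.0 / (1.0 + np.einsum("ij,jk,ik->i", dev, S_inv, dev))

# Step 3: depth level sets give nested confidence regions: the (1 - alpha)
# region keeps replicates whose depth exceeds the alpha-quantile of depths.
alpha = 0.05
cutoff = np.quantile(depth, alpha)
region = boot[depth >= cutoff]  # replicates inside the 95% depth contour
print(f"95% region keeps {region.shape[0]} of {B} bootstrap replicates")

The nested level sets {depth ≥ cutoff}, indexed over alpha, are the kind of family of confidence regions that a depth-CD packages into a single inferential object.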

The fusion approach is shown to be efficient, general, and robust. It readily applies to heterogeneous studies across a broad range of complex and irregular settings. This generality also enables the approach to draw on indirect evidence from incomplete studies to gain hidden efficiency for the overall inference.

The approach is demonstrated with simulation studies and real applications in tracking aircraft landing performance and in zero-event studies in clinical trials with non-estimable parameters.