The laureates of the first Rousseeuw Prize for Statistics received their awards in a ceremony at the University of Leuven, Belgium, on October 12th: you can see the photos, videos and slides at https://rousseeuwprize.org/ceremony. Our columnist **David J. Hand**, Imperial College London, chaired the international selection committee, and gave the following speech at the ceremony, before the prizes were presented by His Majesty King Philippe of Belgium. He writes:

*The biennial Rousseeuw Prize for Statistics, first awarded in 2022, is aimed at recognising pioneering work in statistical methodology, focusing on the innovation rather than the individual and taking note of its impact on statistical research and practice, and on society. Hopefully this award, worth 1 million euros, will achieve in statistics the status that the Nobel Prize has in other disciplines. Our heartfelt thanks are due to Peter Rousseeuw for sponsoring the prize. Peter put in a huge amount of work to establish it and, as he is himself a major innovator in the development of the modern discipline of statistics, it is entirely appropriate that the prize should be named after him.*

*I had the honour of chairing the International Selection Committee, and I described the prize-winning work and introduced the winners at the award ceremony on 12 October, where King Philippe of Belgium awarded the prize. The following is a reduced version of my presentation, which can be viewed at* https://rousseeuwprize.org/static/video/20221012_RPrize_Hand.mp4

To introduce the award-winning work, it is necessary for me to describe some background and context.

We begin with the observation that causes produce effects. But working out just *what causes what* is often far from straightforward, and often the superficially obvious is misleading. The proportion of people with lung cancer is higher amongst smokers than non-smokers, so it looks as if smoking causes cancer. But could it be that those people who are likely to get cancer are also those who are more likely to smoke? This was a key question in the mid-twentieth century when smoking was widespread. Just because two things are correlated does not mean that one causes the other. As we statisticians say: correlation does not imply causation.

And what about the factory workers who died young? Was it smoking that killed them, or the toxic chemicals they were using? How can we attribute cause correctly? The factory owners need to know.

If people who were made ill by a medicine stopped taking it and did not bother to return for follow-up, then only those who improved would be recorded in the data. A simple analysis would then show that apparently most people benefitted. This could be the opposite of the truth. If a vaccine is effective, there will be little of the disease in the community, so people will not appreciate how important it is to get vaccinated, and the vaccination rate will fall, leading to an increase in illness. Regrettably, we see this feedback mechanism in real life.

How can we disentangle such causal networks? How can we decide how we should intervene to produce a desired effect? Clearly this is not simply a medical question. Industrialists want to know how to increase their company’s profits; athletes need to know what will lead to improved performance, and so on.

Answering such questions involves exploring what would happen *if* we did something, or *if* the circumstances were such-and-such. The result of “What happens *if*” is a *potential outcome*. So explicating causal models involves estimating and comparing potential outcomes: what would happen if we did A versus what would happen if we did B. Or, if we actually had done A, we’d like to know what would have happened had we done B. The outcome under B is then called a *counterfactual*. Since we did A, it’s *contrary* to the fact.
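The idea can be made concrete with a toy table of potential outcomes (all numbers here are invented for illustration): every unit has an outcome under A and an outcome under B, but in reality only one of the two is ever observed.

```python
# Toy potential-outcomes table (hypothetical numbers): for each unit,
# its outcome had it received A and its outcome had it received B.
units = [
    # (outcome_under_A, outcome_under_B)
    (1, 0),
    (1, 1),
    (0, 0),
    (1, 0),
]

# The average causal effect compares both potential outcomes per unit.
# It is computable here only because this toy table lists both.
ate = sum(y_a - y_b for y_a, y_b in units) / len(units)
print(ate)  # 0.5

# In reality each unit receives just one treatment, so half the table
# is counterfactual and never observed:
received_A = [True, True, False, False]
observed = [y_a if a else y_b for (y_a, y_b), a in zip(units, received_A)]
print(observed)  # [1, 1, 0, 0] -- the unchosen outcomes are missing
```

The unchosen column is exactly what must be estimated rather than observed, which is why causal inference is hard.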

To begin to tease out causal paths, we start with a comparative trial: we give treatment A to one group of patients and treatment B to another group and we see which group does better. But there’s a problem. Perhaps the group receiving A are sicker or younger than the group receiving B. Perhaps the difference in outcome we observe between the two groups is not due to the treatment, but to how ill they were to start with or how old they were. Degree of illness and age here are called *confounding* variables. They induce a correlation between the treatment and outcome which is not due to the first causing the second.

To overcome that, the *randomised* controlled trial [RCT] is used. In these, the patients are *randomly* allocated to the two treatments. The randomness means that, on average, any differences between the outcomes of the two groups must be due to the treatment difference and nothing else.
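A small simulation can illustrate the point. This is only a sketch with invented effect sizes, not data from any real trial: when sicker patients are more likely to receive treatment A, the naive comparison is confounded and can even flip sign, while coin-flip allocation recovers the true effect.

```python
# A simulation sketch with invented effect sizes (not real trial data).
import random

random.seed(0)

def simulate(randomise, n=100_000):
    """Difference in recovery rates between patients given A and the rest."""
    treated, control = [], []
    for _ in range(n):
        sick = random.random() < 0.5              # confounder: baseline illness
        if randomise:
            gets_a = random.random() < 0.5        # coin-flip allocation
        else:
            # Observational world: sicker patients are more likely to get A.
            gets_a = random.random() < (0.8 if sick else 0.2)
        # Recovery probability: A truly adds 0.1; illness subtracts 0.3.
        p_recover = 0.6 + (0.1 if gets_a else 0.0) - (0.3 if sick else 0.0)
        recovered = random.random() < p_recover
        (treated if gets_a else control).append(recovered)
    return sum(treated) / len(treated) - sum(control) / len(control)

# Confounded comparison: A looks harmful even though it truly helps.
print(simulate(randomise=False))  # negative
# Randomised comparison: close to the true effect of +0.1.
print(simulate(randomise=True))
```

In the observational arm the treated group is dominated by the sicker patients, so the illness effect swamps the treatment benefit; randomisation breaks that link between illness and allocation.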

The *randomised* controlled trial is a key tool in exploring causal relationships. It’s been described as the gold standard, and as one of the greatest advances in medical research.

Unfortunately, however, too often it’s not possible to make random assignments. Ethical considerations often intrude. We can hardly randomly allocate people to smoking and non-smoking groups to see if smoking causes cancer.

And things are yet further complicated by the fact that in clinical practice people are often given several treatments. You start with a small dose of a drug. It works for some patients, but not for others. So some are given an increased dose. Some of those drop out because of side effects. Some others are cured. For those who are not, you try switching to a different drug. And so on. And perhaps the effectiveness of the drug depends on other factors which themselves are influenced by earlier treatments. And, to cap it all, different clinicians make different decisions. Data arising informally like this, and not from some carefully controlled experimental manipulation like a randomised controlled trial, are called *observational* data.

It’s clear that, if different treatments are offered at different times, and if the choice of treatment is influenced by how well patients responded to previous treatments, and if different clinicians might take different patient characteristics into account, things rapidly get quite complicated, and simple analyses can be very misleading.

Fortunately for all of us, the Rousseeuw Prize winners have developed statistical theory and methods for disentangling cause-and-effect from observational data.

In particular, in 1986 **James (Jamie) Robins** wrote a 121-page paper describing an approach to estimating the causal effects of treatments with time-varying regimes, direct and indirect effects, and feedback of one cause on another, from observational data.

This work was seminal, and Robins’s so-called “g-formula” turned out to be a key for tackling such tangled causal webs. The “g” refers to *generalised* treatment regimes—including dynamic regimes, in which a later treatment depends on the response to an earlier treatment.
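The full g-formula handles whole sequences of treatments; its simplest special case, for a single time point, is standardisation over the confounder distribution. A minimal sketch with made-up numbers:

```python
# Single-time-point g-formula (standardisation), with made-up numbers:
#   E[Y under treatment a] = sum_l E[Y | A=a, L=l] * P(L=l)

mean_y = {  # (a, l) -> hypothetical conditional mean E[Y | A=a, L=l]
    (1, 0): 0.7, (1, 1): 0.4,
    (0, 0): 0.6, (0, 1): 0.3,
}
p_l = {0: 0.5, 1: 0.5}  # distribution of the confounder L

def g_formula(a):
    """Standardised mean outcome had everyone received treatment a."""
    return sum(mean_y[(a, l)] * p_l[l] for l in p_l)

effect = g_formula(1) - g_formula(0)
print(effect)  # about 0.1 with these numbers
```

Robins’s generalisation replaces the single confounder sum with a sum (or integral) over entire treatment-and-covariate histories, which is what makes dynamic, time-varying regimes tractable.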

But that was just the beginning. It prompted several decades of intensive and focused research. Later papers by Robins and the other laureates honoured today built on the ideas described in the 1986 paper, developing methods for more general situations, methods to reduce or eliminate bias, and methods for devising optimal treatment regimes, among other advances.

Since Jamie Robins played the key role in initiating these advances, 50% of the prize is awarded to him, with the other 50% divided equally between the other laureates.

Professor James Robins is Mitchell L. and Robin LaFoley Dong Professor of Epidemiology at the Harvard T.H. Chan School of Public Health. He is an ISI Highly Cited researcher, his work having been cited over 80,000 times. He attended Harvard College, focusing on studies in mathematics and philosophy, and moved to Washington University in St Louis to study medicine, graduating in 1976. He practised occupational medicine at Yale for some years, where he co-founded an occupational health clinic. There he found himself regularly being asked whether it was “more probable than not that a worker’s death or illness was *caused* by exposure to chemicals in the workplace.” To find a way to answer this question, he began to study biostatistics and epidemiology. But he found that the only tool those disciplines offered for answering such questions was the RCT. Since, as we have seen, one can hardly randomly assign people to a toxic chemical, the statistical tools of the time were unable to provide an answer.

So Robins turned his attention to finding ways to tackle this fundamental question. He observed that epidemiologists had informal rules for handling such things as confounding and bias. And he translated them into formal statistical structures. “Mathematicising” ideas means they can be clearly manipulated and any assumptions laid bare. And in 1986 he published the paper I have already mentioned, a paper that has been described as “revolutionary”.

Of course, that initial 1986 paper did not contain all the answers. Robins continued his work, sometimes having to battle to get it published. Deeper investigation led to more sophisticated variants of the ideas, each designed to answer questions that earlier methods could not resolve. These deeper studies have been carried out in collaboration with co-investigators, central amongst whom are the other four laureates who are being honoured. These are Miguel Hernán, Thomas Richardson, Andrea Rotnitzky, and Eric Tchetgen Tchetgen. I’m going to introduce them and say a little about their work to give the flavour of it. However, as will be obvious, I cannot really do justice to the extent of their contribution in a few words.

**Miguel Hernán** is Kolokotrones Professor of Biostatistics and Epidemiology at the Harvard T.H. Chan School of Public Health. He graduated in medicine from the Autonomous University of Madrid, and received Master’s degrees in quantitative methods and in biostatistics and his PhD from Harvard University.

Hernán has developed a perspective which sees observational studies of a time-varying treatment as a nested sequence of individual RCTs run by nature. His work with James Robins has been applied in deciding when to initiate combined antiretroviral therapy to reduce mortality and AIDS-defining illness in HIV-positive people; it’s been applied to explore the effectiveness of Covid-19 vaccines over time, and in subpopulations; and it’s been applied in many other areas. But I hope those examples will drive home the truth that the work is not simply of theoretical academic interest: it has major practical consequences in terms of improving people’s lives, and even saving lives. Hernán co-authored, with James Robins, the book “*Causal Inference: What If*”, which I thoroughly recommend.

**Thomas Richardson** is Professor in the Department of Statistics at the University of Washington in Seattle. He received his BA in Mathematics and Philosophy from Oxford University and his MS and PhD in Logic, Computation, and Methodology from Carnegie Mellon.

Given the fundamental importance of causal modelling, of determining what causes what, and what you have to change to produce a desired effect, it is perhaps not altogether surprising that there is more than one way of looking at things. The ideas developed by James Robins and his collaborators are widely used in statistics, biostatistics, epidemiology, and economics. But in computer science, sociology, and philosophy a different conceptual perspective has been adopted. This is based on causal graphs—so-called “directed acyclic graphs” or DAGs. Since these two perspectives describe the same world, it ought to be possible to map one to the other. And there are potentially great advantages in doing so. In general in science, if you can look at things in different ways it can lead to greater insights, greater understanding, and, as in the present case, greater potential for intervening for good. And Thomas Richardson and James Robins solved this translation problem through the development of their “single world intervention graphs” or SWIGs. Incidentally, DAGs and SWIGs are just the start—there are a lot of acronyms in this world, describing highly sophisticated statistical ideas.

**Andrea Rotnitzky** is Professor of Statistics at the Universidad Torcuato Di Tella in Buenos Aires. She obtained her Licentiate in Mathematics from the University of Buenos Aires, and her PhD in Statistics from the University of California at Berkeley.

With James Robins, she developed so-called “doubly-protected” or “doubly-robust” estimators, which are widely used by epidemiologists, economists, and computer scientists, as well as by data-driven companies such as Google, Amazon, and Facebook. Other problems she has studied illustrate the complexity and sophistication of work in this area. They include exploring verification bias, tackling intermittent non-response, coping with responses exhibiting non-compliance, and with missing data. In fact, one way of looking at causal modelling is that it is the flip side of missing data models: we would like to compare the effect of the treatment we gave with what would have been the effect of the treatment we didn’t give—but by definition, the effect of the treatment we didn’t give is unobserved: it’s missing data.
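The “double protection” can be sketched on synthetic data. Everything below is invented for illustration, and is far simpler than the estimators’ general theory: the augmented inverse-probability-weighted (AIPW) form combines an outcome model with a treatment-assignment (propensity) model, and remains approximately correct if either of the two is right.

```python
# Doubly robust (AIPW) sketch on synthetic data; all numbers are invented.
import random

random.seed(1)

# Generate data with a confounder L; the true treatment effect on Y is 1.0.
data = []
for _ in range(50_000):
    l = random.random()
    a = random.random() < 0.25 + 0.5 * l     # higher-L units treated more often
    y = 2.0 * l + (1.0 if a else 0.0) + random.gauss(0, 1)
    data.append((l, a, y))

def aipw(outcome_model, propensity_model):
    """Augmented IPW estimate of the average treatment effect."""
    total = 0.0
    for l, a, y in data:
        e = propensity_model(l)               # modelled P(treated | L)
        m1 = outcome_model(l, True)           # modelled E[Y | treated, L]
        m0 = outcome_model(l, False)          # modelled E[Y | untreated, L]
        # Outcome-model prediction, corrected by a weighted residual:
        est1 = (m1 + (y - m1) / e) if a else m1
        est0 = (m0 + (y - m0) / (1 - e)) if not a else m0
        total += est1 - est0
    return total / len(data)

true_outcome = lambda l, treated: 2.0 * l + (1.0 if treated else 0.0)
true_propensity = lambda l: 0.25 + 0.5 * l
wrong_outcome = lambda l, treated: 0.0       # badly misspecified
wrong_propensity = lambda l: 0.5             # ignores the confounder

# Near the true effect of 1.0 whenever at least one model is correct:
print(aipw(true_outcome, wrong_propensity))  # about 1.0
print(aipw(wrong_outcome, true_propensity))  # about 1.0
```

With both models wrong, the estimate is biased; the point of the construction is that only one of the two nuisance models needs to be trusted.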

**Eric Tchetgen Tchetgen** is the Luddy Family President’s Distinguished Professor and Professor of Statistics and Data Science at the Wharton School of the University of Pennsylvania. He received his BS from Yale University and his PhD from Harvard.

Unfortunately, under some circumstances, even doubly-protected estimators may have a relatively large bias—meaning that, on average, they might yield estimates which depart from the truth. Eric Tchetgen Tchetgen and James Robins extended things yet further, developing theory based on so-called “U-statistics”. Furthermore, most work on causal modelling assumes that the outcome for one person depends only on the treatment that person received, and not also on the treatments others received. This is called “non-interference”. Sometimes, however, that assumption is unrealistic and cannot be made. For example, if a vaccinated person cannot infect others, then that person’s treatment influences the outcome for others. In such situations, causal inference is particularly complicated. Tchetgen Tchetgen has explored this complication and developed methods for coping with it.

I’d like to conclude with the observation that James Robins and his co-workers have elevated our understanding of causal modelling to new levels. By providing us with tools to understand causal relationships they have materially enhanced the human condition: in medicine, in science, in economics, in business and industry, in government—in fact, in all domains in which causal questions arise. Which is just about everywhere.

Details of the Rousseeuw Prize, with videos of the award ceremony on October 12, can be seen at https://www.rousseeuwprize.org/.