During her 2019 term as ASA President, Karen Kafadar (who is now Editor-in-Chief of the Annals of Applied Statistics) convened a Task Force to address issues surrounding the use of p-values and statistical significance, as well as their connection to replicability. The report from the task force and Karen's Editorial will be published in the September 2021 issue of the Annals of Applied Statistics: see the first two items currently listed at https://imstat.org/journals-and-publications/annals-of-applied-statistics/annals-of-applied-statistics-next-issues/. Karen explains the background to these articles:


The debate about the value of hypothesis testing, and the over-reliance on p-values as a cornerstone of statistical methodology, started well over a century ago, and it continues today. Statisticians and researchers have commented on their use… and their abuse. In March 2019, The American Statistician devoted a special issue to this topic; the opening Editorial stated, “It is time to stop using the term ‘statistically significant’ entirely. Nor should variants such as ‘significantly different,’ ‘p < 0.05,’ and ‘nonsignificant’ survive.” The Editorial was co-authored by the Executive Director of the American Statistical Association (ASA) without a disclaimer and was distributed widely to other journals; its authors subsequently made numerous presentations, and many presumed the Editorial represented ASA policy. The reactions ranged from surprise to confusion to complete distrust of all statistical methods: if hypothesis tests are not valid, why should anyone trust statisticians?

Nowhere does the 19-page Editorial imply that hypothesis tests and p-values are invalid. But the perception has persisted: recently, a legal scholar from the Federal Judicial Center asked three statisticians, “Can you tell us what to do now that ASA has stated that we’re not supposed to rely on significance tests and p-values to evaluate scientific evidence in legal cases?”

The reputation of our profession, and decades of dedicated efforts to ensure mathematically sound principles of statistical practice, now seemed at stake. As ASA President in 2019, I did not know how to respond to these researchers, except to remind them that the ASA does not endorse any article, written by any author, in any journal—even those that the ASA itself publishes. (Nor does the IMS, or any other professional society.) As the questions kept coming, I decided that the best response could come from luminaries in our profession whose credentials are above criticism. And so the Task Force was formed.

Its co-chairs (Xuming He, Linda J. Young) and I (ex officio) were joined by twelve highly respected statisticians with much experience in both theoretical foundations and applications across multiple fields. The Task Force included former, current, or future Presidents of the IMS (three), ISI (two), and ASA (three); nine former or current journal editors; and 15 former or current Associate Editors of statistical and scientific journals. With the express intention of keeping the Task Force Statement short (so people would read it), members worked diligently and carefully considered every single word in it—often dismissing multiple synonyms that, in their collective view, did not express exactly what the statement should say. It is hard for me to overstate my heartfelt thanks to all of them, for the hours of video conference calls and hundreds of emails that we exchanged. (One member counted 45 messages about one word alone.) All recognized the importance of valid statistical methods in scientific research and the role that such a statement could play in assuring its place in science.

Our profession experienced an unexpected vote of confidence during the pandemic year of 2020, as researchers turned to us for modeling the spread of Covid-19, for legitimately questioning so-called "research" on the use of hydroxychloroquine to suppress the SARS-CoV-2 virus, for the design of clinical trials to test the efficacies of vaccines, and for the evaluation of vaccine safety before release. Speaking about the latter on the PBS NewsHour on 23 November 2020, Dr. Anthony Fauci assured the public:

“The decision of whether or not a vaccine is safe and effective, that is made by a completely independent group, not by the federal government, not by the company. It’s made by an independent group of scientists, vaccinologists, ethicists, statisticians. They do that independently.”
(My emphasis: see https://www.pbs.org/newshour/show/fauci-thanksgiving-gatherings-will-put-families-at-risk)

The Task Force Statement does not aim to discourage the development of new methodologies, but we hope that our well-researched and theoretically sound statistical methodology is neither abused nor dismissed categorically. I hope that the Task Force Statement will successfully communicate the importance of statistical inference, and the proper interpretation of p-values, to our scientific partners and science journal editors in a way they will understand and appreciate, and can use with confidence and comfort. My thanks to the IMS for publicizing the Task Force Statement, and especially to the Task Force members. I learned much from all of them during our meetings, and I can only hope that they enjoyed the experience of preparing the Statement as much as I did.