Regina Nuzzo, one of our new Contributing Editors, has a PhD in statistics and is also a graduate of the Science Communication program at University of California–Santa Cruz. Her work as a part-time freelancer over the past 12 years has appeared in Nature, New Scientist, Scientific American, Reader’s Digest, the New York Times and the Los Angeles Times, among others. In 2014 she received the American Statistical Association Excellence in Statistical Reporting Award for her Nature feature on p-values.

Her geeky Latin friends tell her that a rough translation of the name of this column, Regina Explicat, is, “The queen disentangles.” She explains why below.

A couple of years ago, at a conference at Stanford, I spotted a fellow science journalist—Christie Aschwanden, a writer for the digital magazine FiveThirtyEight—pulling attendees aside one by one into the courtyard, where she would flip on her video camera and fire a single question at them.

The interviewees—all gathered for the inaugural METRICS conference on “meta-science” to improve biomedical research—had different reactions. Some squirmed uncomfortably at the question, some gamely gave their best answer, and others (like me) dodged it entirely. Aschwanden eventually put them together for a short FiveThirtyEight article—one that, to be honest, was not entirely flattering to the field of statistics.

The hard-hitting question posed to the attendees: “What is a p-value?”

Aschwanden apparently interpreted our fumbling discomfort to mean that no one could say what this statistic really is—not even statisticians themselves. “I figured that if anyone could explain p-values in plain English, these folks could,” she wrote. “I was wrong.” And the subtext, perhaps, was that if experts couldn’t communicate it, then journalists and other non-statisticians shouldn’t feel too bad if they didn’t understand it either.

I think she was missing the point.

To be fair, Aschwanden was in a tough spot as a journalist. She’d recently had to print a correction to an article she’d written for FiveThirtyEight on cloud seeding, in which a p-value of 0.28 had been miscommunicated: “An earlier version of this article misstated the chance that cloud seeding produced a 3 percent increase in precipitation. There was a 28 percent probability that the result was at least that extreme if cloud seeding doesn’t actually work, not a 28 percent chance that the research could have happened by chance.”

Aschwanden’s original language, in turn, had been pulled from the official executive summary for the study her article was based on, the Wyoming Weather Modification Pilot Program: “The primary statistical analysis yielded a RRR [root regression ratio] of 1.03 and a p-value of 0.28. These results imply a 3% increase in precipitation with a 28% probability that the result occurred by chance.” (Ah, yes, the flipped conditional probability.)
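The corrected wording can be made concrete with a small simulation. The sketch below is purely illustrative—the precipitation numbers are made up, not the Wyoming study’s data—and uses a simple permutation test to show what a p-value actually measures: how often chance alone, under the assumption that seeding does nothing, produces a result at least as extreme as the one observed.

```python
import random

random.seed(42)

# Hypothetical precipitation measurements (inches) -- illustrative only,
# not the Wyoming study's data.
seeded   = [1.10, 0.95, 1.30, 1.05, 0.88, 1.20, 1.02, 0.97]
unseeded = [1.00, 0.90, 1.15, 1.01, 0.85, 1.10, 0.99, 0.93]

observed_diff = sum(seeded) / len(seeded) - sum(unseeded) / len(unseeded)

# Null hypothesis: seeding does nothing, so the "seeded" labels are
# arbitrary. Shuffle the labels many times and count how often chance
# alone yields a difference at least as large as the observed one.
pooled = seeded + unseeded
n, n_sims, extreme = len(seeded), 100_000, 0
for _ in range(n_sims):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / n_sims
# p_value estimates P(difference >= observed | null is true):
# the chance of data at least this extreme IF seeding does nothing --
# not "the probability that the result occurred by chance."
```

Note the direction of the conditional in the final comment: that is exactly the flip the FiveThirtyEight correction had to untangle.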

So you can see why journalists might be frustrated. How can they convey to their readers the implication of p = 0.28 if they don’t know how to communicate it well themselves, and neither do the expert scientists?

This issue is not limited to p-values, of course. We could be talking instead about confidence intervals, odds ratios, nonparametric methods, Bayesian networks, logistic regression. This is about statistics communication—or, more broadly, “quantitative communication.”

And that leads to why I think Aschwanden’s bit of mathematical “gotcha” journalism ignored the bigger issues at hand, but at the same time pointed to interesting opportunities for the statistical community.

First of all, statisticians are already quite good at communication, by and large, even if it’s not yet a formal part of our training. And what good communicators intuitively know is that audience and purpose are everything.

So I suspect the experts Aschwanden interviewed were uncomfortable with her question not because they didn’t know how to explain a p-value well, but simply because the question itself was devoid of context. It would have been fair to ask her, “Who is the audience? Why do they need to know this? Are you asking what this statistic is, or are you asking how it’s used? Do you have time for a concrete example, or is this just a sound bite?”

There’s no one-size-fits-all explanation for statistical ideas.

Yet while we may already be decent communicators, we can do better still. A good start would be conversations about best practices for explaining our work in different contexts, recognizing that we may get only five minutes in a courtyard instead of a semester in the classroom.

Aschwanden wrote that her favorite p-value explanation invoked a coin-flip experiment. We could ask: Do examples like these strike the right balance of accuracy, simplicity, and brevity for this audience? Or should we focus on what p-values mean for researchers’ behavior, an idea discussed a few years ago on Andrew Gelman’s blog, rather than the number itself?
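A coin-flip version of the idea can be computed exactly rather than simulated. Suppose (hypothetically) we flip a coin 100 times and see 60 heads; the p-value is the probability that a fair coin would produce a result at least that lopsided:

```python
from math import comb

# Hypothetical coin-flip example: 100 flips, 60 heads observed.
n, k = 100, 60

# Under the null (fair coin), P(exactly i heads) = C(n, i) / 2^n.
# One-sided p-value: probability of 60 or more heads from a fair coin.
p_one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
# roughly 0.028 -- "surprising if the coin is fair," which is all it says

# Two-sided: results at least as far from 50 heads in either direction.
p_two_sided = min(1.0, 2 * p_one_sided)
```

Whether a worked example like this is the right tool depends, again, on audience and purpose: it is accurate and brief, but it answers “what is this statistic?” rather than “what should a researcher do with it?”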

In these pages over the next year I plan to explore what good quantitative communication looks like, what we can learn from the scientists who have found ways to engage better with lay audiences, and also what’s unique about our own communications niche.

Despite the above example, this will not in fact be a column about p-values. Nor will this be a column about grammar, or even writing. Communication is much more than that. Hence the column’s name: explicare is to explain, unfold, disentangle, which feels like the perfect physical manifestation of communicating statistics.

I’d love to hear people’s ideas on this topic, so feel free to drop me a line: