Layla Parast writes:
A few years ago, my dad had a heart attack. (He continues to claim it was just a “discomfort.”) When he was in the emergency room, his doctor told him he needed heart surgery right away. My dad had never had a major surgery and had no intention of starting then, so he said no and went home. He got a second and a third opinion, both confirming that he needed surgery. He then asked me if he should get the surgery. While I go by “Dr. Parast,” I am admittedly not the useful kind of doctor. I can’t save anyone from dying, but I can give them a pretty good probability that they are going to die—most definitely not the same thing. As my dad’s only child, I knew that he would listen to whatever I said. I was the only one in the world that he would listen to. But if I told him to get the surgery, and he died in surgery, I would never forgive myself.
I did what I do best and dove into the peer-reviewed literature. What I was looking for was simple: the estimated probability (with a confidence interval) that he would survive the next 20 years if he had the surgery versus if he did not. Not having the surgery didn’t mean doing nothing—he went vegan, he started going to spin class every day (though I think he mostly chatted with the other retired men there), and he was taking care of himself. Risk prediction is what I do; I know it very well. I rolled up my sleeves and thought, “I’ve got this.” But I could not find the probabilities I was looking for. I didn’t want them for a generic patient. I didn’t want them for a “typical” 65-year-old man without diabetes (which described my dad). I wanted them for my dad, my dad with all of his individual characteristics, his habits, his idiosyncrasies. Of course, this is supposedly the future of personalized medicine and a major promise of artificial intelligence (AI) in healthcare—but the reality falls short of the promise.
What did I end up doing? I talked to every doctor I could find. Over the years I have collaborated with many physicians whom I can text on the weekend with questions about my children (What is this rash? Should I go to the emergency room? Here’s a picture!), myself, and now, my dad. And I listened to them. (Admittedly, they generally said: if three cardiologists have reviewed your dad’s file and said he needs surgery, then he should get the surgery.) My dad got the surgery, and he is fine—no longer vegan, but still going to spin classes (and Zumba!).
I chose to study math in college because, like so many of us, I was drawn to its inherent certainty. In math, there typically exists a definitive answer: the path to that answer may vary through different proofs and approaches, but the outcome is singularly certain. It’s somewhat paradoxical, then, that for graduate school I found myself gravitating toward statistics—a field that embodies the antithesis of certainty. In statistics, the truth is unknown, and every conclusion we draw comes with uncertainty. In fact, if you already know the truth, then you probably don’t need a statistician.
One of my favorite moments in teaching introductory statistics comes on Day One, when I give an example described in Nate Silver’s The Signal and the Noise. Suppose you were living in Grand Forks, ND, in April 1997, when weather officials were anticipating high river levels from snow runoff. The town’s levees were 51 feet tall. The National Weather Service reported that the river was predicted to crest at 49 feet. The question to the class is: would you evacuate your home? One particularly memorable answer came from a student who said: “It doesn’t matter what they predict. A man always goes down with his ship.” Then I tell the class what the news report left out: namely, that the margin of error was ±9 feet. There’s an audible gasp in the class. And that is where statistics shines. Most of these students have never taken a statistics class, and likely don’t even know what exactly “margin of error” means, but they intuit that plus or minus nine feet is a critical piece of information, one that dramatically changes their decision about whether to evacuate. (In case you haven’t read the book, the river crested at 54 feet and caused significant damage to the city.)
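A quick back-of-the-envelope sketch makes the gasp concrete. Suppose we model the forecast crest as roughly normal around 49 feet and read the ±9-foot margin of error as spanning about two standard deviations (an assumption; the report did not say how the margin was defined). Then the chance of the river topping the 51-foot levees comes out to roughly one in three:

```python
from statistics import NormalDist

# Back-of-the-envelope only: the news report did not define the margin of
# error, so we assume it spans about two standard deviations of a normal
# forecast distribution centered at the predicted crest.
forecast_mean_ft = 49.0   # predicted crest
margin_ft = 9.0           # reported margin of error
sigma_ft = margin_ft / 2  # assumption: margin ~ 2 standard deviations
levee_ft = 51.0           # height of the levees

p_overtop = 1 - NormalDist(mu=forecast_mean_ft, sigma=sigma_ft).cdf(levee_ft)
print(f"P(crest > {levee_ft} ft) ~ {p_overtop:.0%}")  # prints roughly 33%
```

A one-in-three chance of flooding reads very differently from a bare point forecast of 49 feet against 51-foot levees.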
What does the decision of whether to evacuate in Grand Forks have to do with my dad’s decision about heart surgery? Both stories center on the profound human element inherent in decision-making under uncertainty. As humans, we make decisions based not just on data but also on our emotions. Even if I had been given the most advanced AI-generated probabilities for my dad’s situation, I am sure I would have done the same thing and talked to other humans. Similarly, my student who would “always go down with his ship” didn’t care what the National Weather Service predicted; he was going to ride it out at home, come hell or high water (literally).
As statisticians, we attempt to quantify uncertainty, even though, no matter what our models say, many of our decisions will ultimately be guided by emotion. But in the meantime, we pursue ever-better models that provide increasingly accurate quantifications of uncertainty—a pursuit that fortuitously gives me some job security, at least for the near future.