“When you have eliminated the impossible, whatever remains, however improbable, must be the truth”–surely a familiar quote to anyone who has spent time with Sherlock Holmes. It is a reassuring sentiment, the idea that there is only one possible answer and that a methodical process of elimination will lead you to it. And indeed, Holmes’ world operates that way (unless you believe Pierre Bayard, who claims Holmes and even Arthur Conan Doyle were wrong about The Hound of the Baskervilles). Such certainty is more elusive in our world, where simply ruling out a few scenarios is rarely sufficient to establish a remaining explanation as definitive. What about all the explanations you haven’t even thought of?
One thing Holmes (and Doyle) is right about: probability and plausibility are not always reliable indicators of what is true. Recently, physicist Sarah Salviander summed it up thusly:
I got many responses like this to my "extraordinary claims" tweet that just prove my point. People will believe things w/o evidence based on tangential knowledge, like odds. But tangential knowledge is not evidence. It's not about what's BELIEVABLE, but what's EVIDENTIALLY TRUE. https://t.co/cNLF1ocQV9
— Sarah Salviander (@sarahsalviander) February 23, 2019
She was speaking most directly about the evidence for Christianity and its historical particulars, referencing the “extraordinary claims require extraordinary evidence” critique. Of course the claim that Jesus rose from the dead is unlikely, incredible even. Nevertheless, evidence supports that claim. That evidence is the reason Christians believe, not a willful ignorance of the implausibility. But Salviander’s comments also apply just as aptly to science. Plenty of scientific inferences are met first with incredulity, such as Einstein’s famous “God does not play dice” critique of the indeterminacy implied by quantum mechanics. I’ve thought before about how intuition and plausibility influence reaction to topics like evolutionary biology, but Salviander’s observation summed up the issue succinctly.
Now, in the spirit of her quote, I didn’t want to just spin you a plausible story. There are examples of odds-based reasoning in the thread Salviander quotes, so that tells us it can happen, but not how often or why. My first attempts at searching more deeply on the topics of evidence, plausibility and probability turned up a range of literature from law journals (e.g. Trial by Traditional Probability, Relative Plausibility, or Belief Function?). It certainly makes sense that legal scholars would be concerned with standards for evidence evaluation and evidence-based decision making, but I am also certain I am not a legal scholar so I don’t have any real context to interpret these papers.
Then I found a paper a little closer to territory I’m familiar with–Embracing Uncertainty: The Interface of Bayesian Statistics and Cognitive Psychology by Judith Anderson, which is actually from an ecology journal (Conservation Ecology) but has a helpful review of cognitive science literature on how humans process probabilistic concepts and questions. There’s a lot going on in that paper, but I primarily want to call your attention to the section “Adapting Bayesian Analysis to the Human Mind: Guidelines from Cognitive Science.” First, near and dear to my interest in language and overloaded words is a discussion of different subjective understandings of the concept of probability. In the language of Reverend Bayes’ theorem (from which we get Bayesian analysis), some of these concepts relate more strongly to prior belief, as in prior to evidence, and some relate to posterior probability, as in after consideration of evidence. Using the same word for both could blur the distinction between what is believable (i.e. higher prior belief) and what is evidentially true (i.e. higher posterior probability). In other words, it is not sufficient to cite a probability estimate to dispute a claim without discussing what kind of probability it is.
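To make the prior/posterior distinction concrete, here is a minimal sketch of Bayes’ theorem with made-up numbers purely for illustration: a claim can have a low prior probability (it is not very “believable”) and still end up with a much higher posterior probability once evidence is taken into account, provided the evidence is far more likely if the claim is true than if it is false.

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' theorem: P(claim | evidence)."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: a prior belief of 1%, with evidence 50 times more
# likely under the claim than under its negation.
p = posterior(prior=0.01, p_evidence_given_true=0.50, p_evidence_given_false=0.01)
print(round(p, 3))  # prints 0.336 -- far above the 1% prior
```

The point is not these particular numbers, which are invented, but that citing only the 1% figure conflates the prior with the posterior.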
Also helpful is the differentiation of frequencies and single-event probabilities. Frequencies are counts of a specific event out of all possible related events within a well-defined class. A classic example would be the count of “ones” out of all rolls of a single die, say three out of twenty. A single-event probability quantifies the likelihood of, well, a single event. We routinely use frequencies to estimate probabilities. But once you get beyond simple scenarios like games of chance, identifying the relevant frequencies to use for an estimate is challenging. As Anderson puts it, “the single population in question might be a member of infinitely many reference classes”–classes we might use to tabulate frequencies. Or returning to Doyle and Holmes (paraphrasing actual philosopher William Winwood Reade and laying a foundation for Isaac Asimov’s fictional psychohistory): “while the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty.”
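The die example above can be sketched in a few lines: when the reference class is well defined (rolls of a fair die), counting occurrences gives a frequency that estimates the single-event probability. The seed and roll count here are arbitrary choices for the illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simulate a well-defined reference class: 6,000 rolls of a fair die.
rolls = [random.randint(1, 6) for _ in range(6000)]

# The frequency of "ones" estimates the single-event probability
# of rolling a one, whose true value is 1/6 (about 0.167).
estimate = rolls.count(1) / len(rolls)
```

The hard part in real questions, as Anderson notes, is not the counting but deciding which reference class the counting should happen in.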
Not only are single events difficult to predict, but Anderson references cognitive science research showing that humans do not reason well about single-event probabilities. Reframing questions in terms of frequencies and clearly delineated classes seems to improve results, perhaps by aligning better with our cognitive skills or the way our brains are used to processing information. So we have at least some evidence that probabilities are not always the most appropriate tool. And that’s in situations where quantitative analysis is warranted. As Salviander points out, not all questions are even a matter of probabilities or frequencies. Hopefully this will be helpful the next time you have a “what are the odds?” question or someone poses one to you, about the history of the Gospels or science or any other topic. Consider whether the question can be reframed in a way that leads more clearly to an answer, and whether there is evidence that goes beyond matters of plausibility.