Some people want everyone to accept what “science” says, even when they cannot really justify that from the actual evidence and facts.

For instance, Donald Prothero in *Reality Check* (Indiana University Press, 2013) spends countless words saying things like “*nothing in real science is 100% proven*” (italic emphasis in original) mixed in with “if something has a 99% likelihood of occurring, or being true, then this level of confidence is so overwhelming that it would be foolish to ignore it” (p. 32). He illustrates this by the high likelihood of injury or death if one jumps off a building.

Then comes a typical piece of misdirection about the likelihood of getting cancer if one smokes, because “the link between cancer and smoking is about 99%”.

In the first place, the evidence about jumping off a building and the evidence that smoking causes cancer are of entirely different orders. In the second place, no source is given for the claim of “about 99%” for the cancer-smoking link.

The observable evidence about jumping off buildings is quite direct; no inferences are needed. The link between cancer and smoking, on the other hand, rests on inferences from probabilistic data: analyzing records from people who have smoked varying amounts for varying lengths of time and applying statistical tests of significance.

But most subtly misleading or deceitful is that “about 99%” assertion. A similar point crops up in a number of quite different matters. Probabilities cannot be turned around; one might say they are not “commutative”. (A + B is commutative because it equals B + A. There are many operations in mathematics that are not commutative.)

If someone dies of lung cancer, there is a high likelihood that smoking may have been a causative factor; but that is not the same as saying that smoking is highly likely to cause death by lung cancer, and the second statement does not follow from the first. The commutated probability that a smoker will die of lung cancer is not very high:

“Smoking accounts for 30 percent of all cancer deaths and 87 percent of lung cancer deaths” but “fewer than 10 percent of lifelong smokers will get lung cancer”

(Christopher Wanjek, “Smoking’s many myths examined”).
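The asymmetry between those two quoted figures can be checked with a toy calculation. The population numbers below are illustrative assumptions (the smoking rate and death counts are invented), chosen only so that the two conditional probabilities land near the quoted figures; they are not epidemiological data. The point is purely structural: the same counts give a high probability in one direction and a low one in the other.

```python
# Illustrative population -- assumed numbers, not real epidemiology.
population = 100_000
smokers = 20_000                      # assume 20% of the population smokes
nonsmokers = population - smokers

lc_deaths_smokers = 1_600             # assumed lung-cancer deaths among smokers
lc_deaths_nonsmokers = 240            # assumed lung-cancer deaths among nonsmokers
lc_deaths = lc_deaths_smokers + lc_deaths_nonsmokers

# P(smoker | died of lung cancer): the high, headline-friendly direction
p_smoker_given_lc = lc_deaths_smokers / lc_deaths

# P(died of lung cancer | smoker): the "commutated" direction, much lower
p_lc_given_smoker = lc_deaths_smokers / smokers

print(f"P(smoker | lung-cancer death) = {p_smoker_given_lc:.0%}")  # 87%
print(f"P(lung-cancer death | smoker) = {p_lc_given_smoker:.0%}")  # 8%
```

With these assumed counts, 87% of lung-cancer deaths are smokers even though only 8% of smokers die of lung cancer, mirroring the Wanjek quotation.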

I. J. Good discussed this general issue in relation to the trial of O. J. Simpson for the murder of his wife, given the acknowledged circumstance that Simpson was an habitual wife-batterer. Alan Dershowitz, assisting the defense, had pointed out that only about 0.1% of wife-batterers go on to actually kill their wives. But this was misleading. The pertinent probability must be calculated as follows: *Given that* a wife is murdered, and *given that* the husband is an habitual wife-batterer, what is the probability that the husband did it? Good showed that it was greater than about 1 in 3 (*Nature* 375 [1995] 541). In a later piece, Good reported that Dershowitz’s 0.1% was itself misleading, and the correction raised the pertinent probability from >1/3 to about 90% (*Nature* 381 [1996] 481).

The probability that the murdered wife of a battering husband was killed by the husband is high. The commutated probability that a wife-batterer will actually kill his wife is very small.
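Good's reasoning can be sketched as a simple odds comparison. The rates below are placeholders, not Good's actual inputs from the *Nature* letters; what matters is the form of the calculation: given that the wife was murdered, compare the rate at which battering husbands kill their wives against the rate at which battered wives are murdered by anyone else.

```python
def p_husband_given_murder(p_killed_by_batterer, p_killed_by_other):
    """P(husband did it | the wife of a batterer was murdered).

    Both arguments are rates for a battered wife over the same period:
    being killed by the battering husband vs. being killed by anyone else.
    """
    return p_killed_by_batterer / (p_killed_by_batterer + p_killed_by_other)

# Dershowitz-style figure: ~0.1% of batterers kill their wives.
# Paired with assumed rates of murder by others (placeholders), the
# conditional probability comes out large, not negligible:
print(p_husband_given_murder(0.001, 0.002))   # 1/3 with these assumed rates
print(p_husband_given_murder(0.001, 0.0001))  # ~91% with a smaller "other" rate
```

The tiny 0.1% figure is the commutated probability; once we condition on the murder having happened, the husband dominates the alternatives.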

It is quite damaging to public and personal health that such basic issues concerning probabilities are so little understood among doctors. For example, what is the probability that a woman between 40 and 50 years of age, with no manifest symptoms or family history of breast cancer, actually has breast cancer if her mammogram is “positive”? A survey of doctors yielded estimated probabilities of >50%, many of them at about 90%; but the actual probability is only 9% (Steven Strogatz, “Chances are”).

A fundamental point is that no test is 100% specific and 100% sensitive. All tests have some probability, even if only a small one, of yielding a false positive. If a particular condition is rare, then the likelihood that a positive test is false can be quite high: in low-risk populations, a high proportion of “positives” are actually *false* positives (Jane M. Orient, *Art & Science of Bedside Diagnosis*, 2005).

The probability that a woman with breast cancer will have a positive mammogram is very high. The commutated probability that a woman with a positive mammogram has breast cancer is *not* high.
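The 9% figure can be reproduced with Bayes' theorem. The prevalence, sensitivity, and false-positive rate below are illustrative assumptions of the kind used in such screening examples, not numbers quoted from Strogatz's column; any combination with a low prevalence and a nonzero false-positive rate shows the same effect.

```python
def posterior_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity                    # sick and detected
    false_pos = (1 - prevalence) * false_positive_rate     # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Assumed figures for low-risk women in their forties:
# prevalence 0.8%, sensitivity 90%, false-positive rate 7%.
p = posterior_positive(0.008, 0.90, 0.07)
print(f"{p:.0%}")  # about 9%, despite the 90% sensitivity
```

Because healthy women vastly outnumber sick ones, even a 7% false-positive rate generates far more false alarms than the true positives among the rare actual cases.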

This sort of issue is very damaging when it comes to diagnosing mental illness, discussed at length in *Saving Normal* by Allen Frances and *The Book of Woe* by Gary Greenberg (both 2013; see my essay review in *Journal of Scientific Exploration*, 29 [2015] 142-8). The critical problem is that there exists no objective diagnostic test for a mental illness; diagnosis has to be gauged on the basis of observable symptoms.

One classic procedure for diagnosing depression is the Hamilton Depression Rating Scale (HAM-D). It was evolved in the 1950s by British doctor Max Hamilton, who was seeking a way to measure the efficacy of anti-depressants, using his depressed patients as guinea pigs (see for example Gary Greenberg, *The Noble Lie*, 2008, p. 55 ff.). Hamilton came up with 17 items (for instance insomnia, feelings of guilt, sleep, appetite) rated on scales of 0 to 4 or 0 to 2, with a possible maximum total of 52. There is nothing objective here, since the assigned points depend on what the patient says and what the tester concludes; and the diagnosis also uses arbitrary cut-off points: 0-7 = normal, 8-13 = mild depression, 14-18 = moderate depression, 19-22 = severe depression, ≥23 = very severe depression.

But the point here is not the subjectivity or arbitrariness of the diagnosis; it is the fact that HAM-D was evolved by looking at patients who had already been diagnosed as depressed severely enough to require treatment, even hospitalization. The fact that depressed patients frequently accumulate high scores on this questionnaire does not entail the commutated reverse: that anyone who scores more than 7 is to some extent “depressed”, or at ≥19 severely depressed.
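The cut-off bands quoted above amount to a trivial lookup. This sketch simply encodes those bands (it is not a clinical tool), which makes their arbitrariness plain: a single point moves a patient from one diagnostic category to the next.

```python
def hamd_category(total_score):
    """Map a HAM-D total (0-52) to the cut-off bands quoted in the text."""
    if not 0 <= total_score <= 52:
        raise ValueError("HAM-D totals range from 0 to 52")
    if total_score <= 7:
        return "normal"
    if total_score <= 13:
        return "mild depression"
    if total_score <= 18:
        return "moderate depression"
    if total_score <= 22:
        return "severe depression"
    return "very severe depression"

print(hamd_category(7))   # normal
print(hamd_category(8))   # mild depression -- one point away
```

Nothing about the underlying condition changes between a score of 7 and a score of 8; only the label does.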

Confusion about what statistics and probability mean, about interpreting such data with their seemingly accurate numbers, is a hazard in public discourse on a host of matters in science and in medicine. Misinterpretation is common and damaging.