Statistics Sense, Part 3: Comparisons and Transparency
written by: Robyn Broyles•edited by: Leigh A. Zaykoski•updated: 12/19/2008
How should statistical data be compared? Learn how statistics should be presented for maximum clarity.
Comparative statistics present myriad opportunities for confusion. Data sets should consist of the same kind of values, because comparing dissimilar values produces nonsense. Nor do all values provide equally useful information; mortality rates, for example, are more informative than survival rates. The most useful statistical values are called "transparent" statistics because they clarify results rather than obscure them.
Comparing Apples and Oranges
When statistics for different populations are compared, the same type of measure must be used for both; comparing different measures is useless. For example, if one is measuring the prevalence of a disease in two populations, the criteria for determining who has the disease must be the same in both. If a researcher uses mammogram screening results (which have been shown to have an 8.1% false positive rate) to measure breast cancer prevalence in the first population, the researcher must use the same method in the second population. If the researcher instead measures prevalence a different way, such as by positive biopsy results, the two data sets cannot be usefully compared.
Survival Rates and Mortality Rates
5-year survival is a common measure of the effectiveness of cancer screening and treatment. Yet two screening programs with very different 5-year survival rates may have the same mortality. 5-year survival applies only to a limited population, those people who have been diagnosed with a specific cancer, while mortality measures deaths from that cancer across the general population, not just those diagnosed. Earlier diagnosis can raise the 5-year survival rate simply because it adds years between diagnosis and death, even if it does nothing to reduce mortality from that cancer.
For example, in the United States a blood test called the PSA test is routinely used to screen for prostate cancer starting at age 60. This leads to an earlier average age at diagnosis for men with prostate cancer than in the United Kingdom, where it is usually not diagnosed until symptom onset. As a result, the 5-year survival rate for prostate cancer is almost twice as high in the U.S. as in the U.K. (82% vs. 44%). Yet the mortality from this cancer, which reflects the overall likelihood that any man will die of it in a given year, is almost identical for both countries: 26 in 100,000 for the U.S. and 27 in 100,000 in the U.K.
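This effect, often called lead-time bias, can be made concrete with a toy calculation. The sketch below uses an invented cohort (not data from the article) in which every patient dies of the cancer at age 70, so mortality is identical under both scenarios; only the age at diagnosis differs.

```python
def five_year_survival(ages_at_diagnosis, ages_at_death):
    """Fraction of diagnosed patients still alive 5 years after diagnosis."""
    alive = sum(1 for dx, death in zip(ages_at_diagnosis, ages_at_death)
                if death - dx > 5)
    return alive / len(ages_at_death)

# Hypothetical cohort: every patient dies of the cancer at age 70.
ages_at_death = [70] * 1000

# Diagnosis at symptom onset (age 67): no one lives 5 years past diagnosis.
print(five_year_survival([67] * 1000, ages_at_death))  # 0.0

# Screening moves diagnosis to age 60: everyone lives 5 years past
# diagnosis, yet every death still occurs at age 70 -- mortality unchanged.
print(five_year_survival([60] * 1000, ages_at_death))  # 1.0
```

The 5-year survival rate jumps from 0% to 100% even though not a single death was prevented, which is exactly why survival rates alone cannot show whether screening saves lives.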
Using Transparent Statistics
Gigerenzer et al. recommend that researchers, health care providers, and health and science journalists strive to use transparent statistics, that is, ones that are least likely to confuse or mislead, even unintentionally. Transparent statistics include absolute risk rather than relative risk, natural frequencies rather than conditional probabilities, and mortality rates rather than survival rates. Choosing transparent statistics in reports and patient education materials helps the public draw accurate conclusions and make better-informed health decisions.
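The advantage of natural frequencies over conditional probabilities can be shown with a short calculation. The numbers below are illustrative assumptions, not figures from the article: a screening test with 1% disease prevalence, 90% sensitivity, and a 9% false-positive rate. Restating these as counts out of 1,000 people makes the answer to "what does a positive test mean?" easy to see.

```python
# Illustrative (assumed) figures, not from the article.
population = 1000
prevalence = 0.01        # 1% of people have the disease
sensitivity = 0.90       # 90% of those with the disease test positive
false_pos_rate = 0.09    # 9% of those without the disease test positive

# Natural frequencies: express every probability as a count out of 1,000.
with_disease = int(population * prevalence)                # 10 people
true_positives = int(with_disease * sensitivity)           # 9 test positive
without_disease = population - with_disease                # 990 people
false_positives = round(without_disease * false_pos_rate)  # 89 test positive

# Of everyone who tests positive, how many actually have the disease?
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"{true_positives} of {true_positives + false_positives} positive "
      f"results are true positives ({p_disease_given_positive:.0%})")
```

Stated as a conditional probability ("the test is 90% sensitive"), a positive result sounds alarming; stated as natural frequencies, it is plain that only about 9 of 98 positives are true positives, roughly a 9% chance.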
What do the numbers and percentages in a health report mean? How can they mislead you and cause you to draw false conclusions? This must-read series summarizes a landmark report on the use and misuse of statistics in medical journalism and health education.