Confusing and Misleading Statistics


Several specific uses of statistics are particularly misleading. These include reporting relative risk instead of absolute risk and conditional probabilities instead of natural frequencies.

An Example of Misleading Statistics: Relative Risk versus Absolute Risk

In 1995, the Committee on Safety of Medicines, the U.K.'s equivalent of the FDA, issued a health warning about the risk of thrombosis (blood clots) in women taking third-generation contraceptives, which carry double the risk of the second-generation pills. This was correctly but misleadingly described as a 100% increase in the risk of thrombosis. A risk increase of 100% sounds ominous, and to the statistically illiterate it might even suggest that one has a 100% chance of suffering a thrombosis if one takes third-generation instead of second-generation contraceptives.

The actual risk of thrombosis, however, is 1 in 7000 for second-generation and 1 in 3500 for third-generation pills. The 100% figure describes the relative risk between the two pills, but the value that matters from a clinical perspective is the absolute risk, which remains very low even for the third-generation pills. Publicizing the relative risk instead of the absolute risk led many women to lose confidence in the pill and refuse to use it at all.
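The difference between the two framings can be made concrete with a few lines of arithmetic. The sketch below (not from the report; the variable names are illustrative) computes both figures from the risks quoted above:

```python
# Absolute risks of thrombosis quoted in the article
second_gen_risk = 1 / 7000   # second-generation pill
third_gen_risk = 1 / 3500    # third-generation pill

# Relative risk increase: (new - old) / old
relative_increase = (third_gen_risk - second_gen_risk) / second_gen_risk

# Absolute risk increase: new - old
absolute_increase = third_gen_risk - second_gen_risk

print(f"Relative increase: {relative_increase:.0%}")   # the alarming "100%"
print(f"Absolute increase: {absolute_increase:.5f}")   # one extra case per 7000 women
```

The same underlying numbers yield "100% more risk" on one framing and "one additional blood clot per 7000 users" on the other, which is why the report recommends communicating absolute risks.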

Statistics, Screening, and False Alarms

The prevalence of breast cancer in women is about 1%. 90% of these women will have a positive result on a mammogram, a breast cancer screening test. Of the women without breast cancer, about 9% will nevertheless have a positive mammogram. This number sounds small, and to the statistically illiterate it may suggest that a positive screening carries only a 9% chance of being a false positive.

A way to avoid making mistakes when determining the probability of a false alarm is to use natural frequencies instead of conditional probabilities. Conditional probabilities are expressed as percentages of percentages and quickly become confusing. Using natural frequencies means translating those percentages into counts of actual people. Given that 1% of all women have breast cancer, 90% of women with cancer will screen positive, and 9% of women without cancer will also screen positive, a natural frequencies approach gives the following data for a hypothetical sample of 1000 women:

10 of the 1000 have cancer, and 990 of them do not. 9 of the 10 with cancer will screen positive and 1 will not. Of the 990 women without cancer, 9% — about 89 — will also screen positive. In total, then, about 98 women screen positive, and 89 of them (roughly 91%) do not have cancer. Put another way, nearly 9% of all women screened for breast cancer with a mammogram will experience a false positive result!


Gigerenzer, G.; Gaissmaier, W.; Kurz-Milcke, E.; Schwartz, L. M.; and Woloshin, S. 2008. "Helping Doctors and Patients Make Sense of Health Statistics." Psychological Science in the Public Interest 8:2 (53–96).

This post is part of the series: Statistics Sense

What do the numbers and percentages in a health report mean? How can they mislead you and cause you to draw false conclusions? This must-read series summarizes a landmark report in the use and misuse of statistics in medical journalism and health education.

  1. Statistics Sense, Part 1: Making Sense of Medical Statistics
  2. Statistics Sense, Part 2: Confusing and Misleading Statistics
  3. Statistics Sense, Part 3: Comparisons and Transparency