How to Avoid Being Duped by Shoddy Research
The Centers for Disease Control and Prevention (CDC) recently published a guide, “How Do You Know Which Health Care Effectiveness Research You Can Trust? A Guide to Study Design for the Perplexed.” The guide is designed to help policymakers, journalists, and the general public distinguish among the various types of health research and understand which study designs are the most trustworthy.
The CDC emphasizes:
Many studies of health care effectiveness do not show the cause-and-effect relationships that they claim. They have faulty research designs. Mistaken conclusions later reported in the news media can lead to wrong-headed policies and confusion among policy makers, scientists, and the public. Unfortunately, little guidance exists to help distinguish good study designs from bad ones, the central goal of this article.
Among the research flaws the CDC identifies:
- Healthy User Bias—Is a study comparing a healthier population with a not-so-healthy one? The CDC highlights studies claiming that the flu vaccine reduced pneumonia-related hospital admissions and deaths among the elderly. Those studies failed to account for the fact that the elderly subjects who received the flu vaccine were already healthier, more active adults than those who did not receive it. The vaccinated group was less likely to be hospitalized for pneumonia simply because its members were less likely to develop pneumonia in the first place, not necessarily because of the vaccine.
- Social Desirability Bias—Are subjects likely to inaccurately self-report health behaviors in order to conform with socially desirable behavior? The CDC uses the example of childhood obesity studies in which mothers reported that their children watched less television than they actually did. If research participants believe certain answers are more socially desirable (their children eat less fatty food and exercise regularly), then they're more likely to give researchers those answers. This is a common problem in nutrition and obesity studies: participants report eating less and exercising more than they actually did, making it difficult for researchers to draw accurate conclusions.
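The healthy-user problem above is a classic confounding pattern, and it can be demonstrated with a toy simulation. This is a sketch with made-up probabilities, not data from any real vaccine study: here the vaccine has no effect at all on pneumonia risk, yet because healthier people are more likely to get vaccinated, a naive comparison makes it look protective. Comparing within health strata removes the illusion.

```python
import random

random.seed(0)

# Toy simulation of healthy-user bias. All parameters are invented
# for illustration; the vaccine has NO causal effect in this model.
N = 100_000
naive_vax, naive_unvax = [], []
strata = {(h, v): [] for h in (0, 1) for v in (0, 1)}

for _ in range(N):
    healthy = random.random() < 0.5                       # underlying health status
    # Healthier people are more likely to get vaccinated (the confounder).
    vaccinated = random.random() < (0.8 if healthy else 0.3)
    # Pneumonia risk depends ONLY on health, not on vaccination.
    pneumonia = random.random() < (0.05 if healthy else 0.20)
    (naive_vax if vaccinated else naive_unvax).append(pneumonia)
    strata[(int(healthy), int(vaccinated))].append(pneumonia)

def rate(xs):
    return sum(xs) / len(xs)

# Naive comparison: vaccinated look far less likely to get pneumonia.
print(f"naive: vaccinated {rate(naive_vax):.3f} vs unvaccinated {rate(naive_unvax):.3f}")

# Stratified comparison: within each health group, rates are nearly identical.
for h in (0, 1):
    print(f"healthy={h}: vaccinated {rate(strata[(h, 1)]):.3f} "
          f"vs unvaccinated {rate(strata[(h, 0)]):.3f}")
```

In the naive comparison the vaccinated group's pneumonia rate comes out well below the unvaccinated group's, purely because it contains more healthy people; within each health stratum the two rates match. This is the adjustment the CDC says the flu-vaccine studies failed to make adequately.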