Understanding the interpretation of study results

By Community

January 17, 2019 - 9:42 AM

Dear Dr. Roach: In a recent column discussing the use of aspirin, you made reference to the results of a study that were neither “statistically significant” nor “clinically meaningful.” While I am quite familiar with the concept of statistical significance and believe it is very helpful in interpreting the results of a study, I am not familiar with the latter concept. Are you referring to a poorly designed study or one with too small of a sample? I am wondering whether “clinically meaningful” adds any benefit to the scientifically accepted concept of statistical significance. — P.J.B.

Answer: Statistical significance is a concept central to understanding medical and other scientific studies. Often, one group (the experimental group, which may get a new treatment, for example) is compared with another group (the control group, which gets some other treatment, usually the standard treatment or, if there is no standard treatment, a placebo). The outcomes of the two groups are then compared.

A statistician employs one of several methods to calculate the likelihood that the difference between the two groups could have happened by chance; this is called the p-value. If the p-value is less than 5 percent (0.05), the result is usually considered statistically significant. The lower the p-value, the less likely it is that the observed difference between the two groups could have happened by chance if the two treatments were identically effective.
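As a rough sketch of how such a calculation can work (the trial numbers below are made up, and the function is my own illustration, not a method named in the column), here is a normal-approximation z-test comparing two proportions in Python:

```python
from statistics import NormalDist

def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value for the difference between two proportions,
    using the pooled normal-approximation z-test."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(z))

# Hypothetical trial: 60 of 100 patients improve on the new treatment,
# 45 of 100 improve on the standard treatment.
p = two_proportion_p_value(60, 100, 45, 100)
print(f"p-value = {p:.4f}")  # below the usual 0.05 threshold
```

With these invented counts the p-value comes out a little above 0.03, so the difference would conventionally be called statistically significant.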

Clinical significance, or clinical meaningfulness, refers to the real-world usefulness of the intervention. While “clinical significance” is a subjective judgment, the term is nevertheless useful. For example, with a very large trial, a small difference in effectiveness between the two treatments could have a very small p-value, as low as 0.0001, meaning only a 1 in 10,000 chance of seeing a difference that large if the two treatments were truly equivalent. However, the effectiveness may be 50.1 percent in one group and 49.9 percent in the other. Although statistically significant, such a difference is of marginal clinical significance.
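A sketch of the arithmetic behind this kind of example, using a normal-approximation z-test for two proportions (the figure of 2,000,000 patients per arm is my own invented number, chosen so that a 0.2-percentage-point difference produces a tiny p-value):

```python
from statistics import NormalDist

def two_proportion_p_value(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value via the pooled two-proportion z-test."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(z))

# Hypothetical mega-trial: 2,000,000 patients per arm,
# with 50.1 percent responding in one group vs. 49.9 percent in the other.
n = 2_000_000
p_value = two_proportion_p_value(1_002_000, n, 998_000, n)
risk_difference = 1_002_000 / n - 998_000 / n

print(f"p-value = {p_value:.6f}")                   # far below 0.05
print(f"absolute benefit = {risk_difference:.3f}")  # only 0.002, i.e. 0.2 points
```

The p-value is well under 0.0001 (statistically significant), yet the absolute benefit is two-tenths of a percentage point, which is what the column means by marginal clinical significance.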

If a result is not statistically significant, it cannot be considered clinically meaningful, even if the difference is large, because there is not enough evidence to reject the hypothesis that the difference could have occurred by chance.

It is logically impossible to prove the absence of a difference between two treatments. All we can do is show that it is unlikely that the two treatments are identically effective. Unfortunately, this means that concepts long regarded as true sometimes can be upended by new information. Medicine is not mathematics: Mathematicians can be sure that once a concept is proved, it will stay proved. That is not the case for clinical trials in medicine. 


Dear Dr. Roach: I am an 89-year-old widow trying to stay in my home with the help of my daughter. Six months ago, I had a bad fall and went to the hospital, where I got all kinds of tests and lots of information from several doctors. A neurologist said I had two or three aneurysms in my head but treatment might cause more harm than good. Now that I’m home, my primary doctor is trying to help with my anxiety. He gave me Zoloft, but I got depressed. He wants to try Effexor, but the instruction sheet mentioned that bleeding might occur. Should I avoid this medicine? — J.M.G.

Answer: Venlafaxine (Effexor) has been shown to increase the risk of bleeding, though the risk is slight for most people. However, bleeding from a brain aneurysm is very dangerous, so I think I would avoid that particular drug. There are alternatives that don’t carry that risk. Even if the risk is slight, worrying about a drug’s possible side effects isn’t going to make your anxiety any better.
