By Walter Wiggins
A recent article published in JAMA Internal Medicine has garnered significant attention in the news over the past several days. The study authors posed the following question to med students, residents, and attendings at a Boston-area teaching hospital affiliated with HMS and BU (presumably the Boston VA):
“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?”
Alarmingly, approximately 75% of respondents answered the question incorrectly, with 27 of 61 responding that the chance of having the disease was “95%.” The authors reported the correct answer to be “2%,” although, in reality, a more precise answer would be “no more than 2%”…but more on that in a minute.
At first glance, it is genuinely worrisome that a majority of respondents – all of whom have completed at least two years of medical education/training and some of whom are in independent practice – couldn’t answer the question. However, when you dig a little deeper into the phrasing of the question and how a similar question actually arises in clinical practice, the root of the problem becomes apparent, and its practical impact seems a little less worrisome.
If someone were to ask the same group the following question, I strongly believe (and certainly hope) the majority of respondents would be able to arrive at the appropriate conclusion:
“You screen a patient for disease X with screening test Y. The test comes back positive. Should you (a) begin treatment for disease X or (b) pursue further diagnostic evaluation of the patient for disease X prior to making any treatment recommendations?”
The above phrasing embodies the spirit of the question posed by the study’s authors, but in a context more familiar to clinicians at all stages of training, who encounter the root problem in clinical practice in exactly this form. Unfortunately, while this may make us feel a little better about the results of the study (and the ability of our colleagues in Boston to practice evidence-based medicine, or EBM), it understates the root problem astutely uncovered by the study: today’s physicians aren’t adequately educated in the fundamentals of EBM, such as biostatistics, population health, and epidemiology.
Looking back to the original question, what the study authors were evaluating was the ability of medical students and clinicians to calculate the positive predictive value (or PPV) of a screening test. To do this, they presented the study participants with the prevalence of the tested-for disease in the population and the false-positive rate of the test, which is (1 − specificity). However, one had to assume a sensitivity of 100%, since no false-negative rate or sensitivity was stated in the question; this is why the more precise answer is “no more than 2%” – any sensitivity below 100% would only lower the PPV. Using the sensitivity and specificity along with the prevalence of the disease in the general population, one can calculate the number of true positives (TP) and false positives (FP) in a representative population (say, 10,000 people, to keep the math simple), then calculate PPV = TP/(TP + FP).
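To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using the numbers from the study question and the 10,000-person representative population suggested above. The 100% sensitivity is an assumption the question forces on us, since no false-negative rate was given.

```python
# PPV for the study question: prevalence 1/1000, false-positive rate 5%,
# sensitivity assumed to be 100% (no false-negative rate was stated).
population = 10_000
prevalence = 1 / 1000           # 1 in 1,000 people have the disease
false_positive_rate = 0.05      # 5% of healthy people test positive
sensitivity = 1.0               # assumed; any lower value reduces the PPV

diseased = population * prevalence                  # 10 people
healthy = population - diseased                     # 9,990 people

true_positives = diseased * sensitivity             # 10
false_positives = healthy * false_positive_rate     # 499.5

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")  # prints "PPV = 2.0%"
```

Note that of roughly 510 positive results in this population, only 10 are true positives – which is why “95%” is so far off the mark, and why the correct answer is “no more than 2%.”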
When I was coming through the preclinical years of med school, Pop-Epi – as we called the course alleged by the curriculum admins to teach the above concepts – was a total joke to most of my classmates. I believe the average attendance at these classes was on the order of 15% of our class. On the one hand, people just didn’t think it was important enough to take seriously; but, to be fair to my classmates, that importance wasn’t appropriately emphasized by the curriculum admins, and the course itself simply wasn’t very engaging.
The only time this material became important to us was when we entered our dedicated study period for the USMLE Step 1 and realized there was a nontrivial portion of the exam dedicated to these basic principles. Even then, once we cleared the hurdle that is Step 1, our attention was not drawn back to this subject until the next Step came along.
Clearly, at my institution, these topics were underserved by our curriculum. To give my school credit, I will say that abstractions of these concepts were hammered home on almost a daily basis, particularly during the clinical years. For example, my rephrased version of the question from the study was posed in various forms throughout my med school experience.
To wrap it up, it seems evident to me that med schools still have a long way to go in emphasizing the basic principles of EBM in their curricula if they hope to close “the Stats Gap” and properly train their students in the practice of EBM. In the coming days, I’ll post a follow-up article covering the basic biostats of screening and diagnostic tests as you may encounter them on the USMLE.
1 thought on “The Stats Gap: Identifying Underserved Topics in Medical Education”
… hence, what is the solution? Thanks. Hans