Special Article
Academic Calculations versus Clinical Judgments: Practicing Physicians’ Use of Quantitative Measures of Test Accuracy 1

https://doi.org/10.1016/S0002-9343(98)00054-0

Abstract

Purpose: To determine how often practicing physicians use the customarily recommended quantitative methods that include sensitivity, specificity, and likelihood ratio indexes; receiver operator characteristic (ROC) curves; and Bayesian diagnostic calculations.

Participants and Methods: A random sample of 300 practicing physicians (stratified by specialty to include family physicians, general internists, general surgeons, pediatricians, obstetrician/gynecologists, and internal medicine subspecialists) was briefly interviewed in a telephone survey. The physicians were asked how often they used the formal methods, their reasons for non-use, and whether they employed alternative strategies when appraising tests’ diagnostic accuracy.

Results: Of the 300 surveyed physicians, 8 (3%) used the recommended formal Bayesian calculations, 3 used ROC curves, and 2 used likelihood ratios. The main reasons cited for non-use were the impracticality of the Bayesian method (74%) and unfamiliarity with ROC curves and likelihood ratios (97%). Of the 174 physicians who said they used sensitivity and specificity indexes, 165 (95%) did not do so in the recommended formal manner. Instead, the physicians directly estimated tests’ diagnostic accuracy by determining how often the test results were correct in groups of patients later found to have, or to be free of, the selected disease.

Conclusions: The results indicate that most practicing physicians do not use the formal recommended quantitative methods to appraise tests’ diagnostic accuracy, and instead report using an alternative direct approach. Although additional training might make physicians use the formal methods more often, the physicians’ direct method merits further evaluation as a potentially pragmatic tool for the determination of tests’ diagnostic accuracy in clinical practice.
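The "formal Bayesian calculations" the survey asked about amount to converting a pretest probability of disease into a posttest probability using a likelihood ratio. The sketch below illustrates that transformation in Python; the sensitivity, specificity, and pretest probability are illustrative values chosen for the example, not figures from the study.

```python
# Bayes' theorem in odds form: posttest odds = pretest odds x likelihood ratio.
# This is the "transformational" calculation the surveyed physicians rarely used.

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability via a likelihood ratio."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical test: sensitivity 0.90, specificity 0.80.
# LR+ = sensitivity / (1 - specificity) = 4.5.
lr_positive = 0.90 / (1.0 - 0.80)

# A positive result moves a 20% pretest probability to about 53%.
print(round(posttest_probability(0.20, lr_positive), 3))  # 0.529
```

The odds form makes the arithmetic short, but it still requires a pretest probability estimate and a likelihood ratio, which is the practical barrier the surveyed physicians cited.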

Section snippets

Definitions for Diagnostic and Test Accuracy

In this report, diagnostic accuracy is defined as the ability of a test to correctly predict the presence of a particular disease among patients with positive test results, and to indicate absence of the disease among patients with negative results. Test accuracy, in contrast, is defined by other quantitative expressions, which include sensitivity (ie, proportion of diseased patients with positive test results), specificity (proportion of nondiseased patients with negative test results), and
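The distinction drawn above can be made concrete with a conventional 2x2 table. In the sketch below, sensitivity and specificity are the "test accuracy" indexes, while the predictive values correspond to the report's definition of "diagnostic accuracy"; the counts are hypothetical and are not taken from the article.

```python
# Hypothetical 2x2 table of test results against final disease status.
tp, fp = 90, 20   # positive test: diseased / nondiseased
fn, tn = 10, 80   # negative test: diseased / nondiseased

# "Test accuracy" indexes, as defined in the report:
sensitivity = tp / (tp + fn)   # proportion of diseased patients with a positive test
specificity = tn / (tn + fp)   # proportion of nondiseased patients with a negative test

# "Diagnostic accuracy" as defined in the report (predictive values):
ppv = tp / (tp + fp)           # probability of disease given a positive result
npv = tn / (tn + fn)           # probability of no disease given a negative result

print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")
# sens=0.90 spec=0.80 ppv=0.82 npv=0.89
```

Note that sensitivity and specificity read the table column-wise (conditioning on disease status), whereas the predictive values read it row-wise (conditioning on the test result); conflating the two directions is a common source of error.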

Results

Of the 333 physicians contacted by telephone, 302 (91%) agreed to participate. Two physicians were interviewed but promptly excluded because they reported spending too little time (<40%) in direct patient care. The 31 physicians who declined to participate said they were too busy (17), were not interested (9), or did not do telephone surveys (5). Participation rates for the six medical specialties ranged from 89% (SIM) to 96% (GIM).

The 300 participating physicians had a median age of 46, were predominantly male

Discussion

The results of this survey indicate that practicing physicians seldom use the recommended formal methods for diagnostic evaluations. Fewer than 25% of the surveyed physicians considered sensitivity and specificity values before ordering tests in clinical practice; and the recommended transformational methods, which require formal calculations to estimate the probability of disease (or nondisease), were almost never used. The Bayesian transformation approach, which is the most widely advocated


Cited by (110)

  • Response

    2020, Chest
  • Physicians’ understanding of CT probabilities in ED patients with acute abdominal pain

    2018, American Journal of Emergency Medicine
Citation Excerpt:

Furthermore, when physicians are asked to calculate posttest probability, the results are usually incorrect. Steurer et al. have previously described how a large group of general practitioners incorrectly calculated the posttest probability of a hypothetical case scenario, even though most of the surveyed physicians understood the concepts of sensitivity and positive predictive value [14]. To our knowledge, our study is the first to evaluate how physicians estimate posttest probability in real-time practice.

  • Preoperative MRI evaluation of lesion–nipple distance in breast cancer patients: thresholds for predicting occult nipple–areola complex involvement

    2018, Clinical Radiology
Citation Excerpt:

Binary logistic regression was used to measure the relationship between NAC tumoural involvement (NAC+) and the various continuous and categorical dependent variables. The frequency distributions of the LND for the NAC+ cases and cases with no tumoural NAC involvement (NAC–) were used to compute the stratum- (or level-) specific likelihood ratios (SSLR), defined as the ratio of the two frequencies at different levels.15–17 The frequency distributions were also the basis for computing the receiver operating characteristic (ROC) curve for measuring the discriminating ability of LND through the area under the curve (AUC).

  • Reply

    2015, Clinical Gastroenterology and Hepatology
  • Role of laboratory tests in rheumatic disorders

    2015, Rheumatology: Sixth Edition
  • How to investigate new-onset polyarthritis

    2014, Best Practice and Research: Clinical Rheumatology
1

This work was done when Drs. Reid and Lane were fellows in the Robert Wood Johnson Scholars Program at Yale.

2

Dr. Lane’s current address is Department of Primary Care, Navy Medical Center, San Diego, California.
