Author
Listed:
- Tanner J. Caverly
- Allan V. Prochazka
- Brandon P. Combs
- Brian P. Lucas
- Shane R. Mueller
- Jean S. Kutner
- Ingrid Binswanger
- Angela Fagerlin
- Jacqueline McCormick
- Shirley Pfister
- Daniel D. Matlock
Abstract
Background. Risk interpretation affects decision making, yet there is no valid assessment of how clinicians interpret the risk data they commonly encounter.
Objective. To establish the reliability and validity of a 20-item test of clinicians’ risk interpretation.
Methods. The Critical Risk Interpretation Test (CRIT) measures clinicians’ abilities to 1) modify an interpretation based on meaningful differences in the outcome (e.g., disease-specific v. all-cause mortality) and time period (e.g., lifetime v. 10-year mortality), 2) maintain a stable interpretation across different risk framings (e.g., relative v. absolute risk), and 3) correctly interpret how diagnostic testing modifies risk. A total of 658 clinicians and medical trainees participated: 116 nurse practitioners (NPs) at a national conference, 273 medical students at 1 institution, 148 internal medicine residents at 2 institutions, and 121 internists at 1 institution. Participants completed a self-administered paper test during educational conferences. Seventeen evidence-based medicine experts took the test online and formally assessed content validity. Eighteen second-year medical students took the test and a retest 3 weeks later to explore test-retest correlation.
Results. Expert review supported test clarity and content validity. Factor analysis supported that the CRIT identifies at least 3 separable areas of clinician knowledge. Test-retest correlation was fair (intraclass correlation coefficient = 0.65; standard error = 0.15). Scores on the test correlated with other tests of related abilities. Mean test scores varied among groups with differences in prior evidence-based medicine training and experience (93 for NPs, 101 for medical students, 101 for residents, 103 for academic internists, and 110 for physician experts; P
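The abstract refers to two calculations the CRIT asks clinicians to reason about: distinguishing relative from absolute risk reduction, and updating a risk estimate after a diagnostic test result. As a rough illustration only, not material from the article, the short Python sketch below works through both with hypothetical numbers.

# Illustrative only: hypothetical numbers and function names, not data from the article.

def absolute_and_relative_risk_reduction(risk_control, risk_treated):
    """Return (ARR, RRR) for two event risks expressed as proportions."""
    arr = risk_control - risk_treated          # absolute risk reduction
    rrr = arr / risk_control                   # relative risk reduction
    return arr, rrr

def post_test_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Update disease probability after a test result using Bayes' theorem (odds form)."""
    if positive:
        lr = sensitivity / (1 - specificity)   # likelihood ratio of a positive test
    else:
        lr = (1 - sensitivity) / specificity   # likelihood ratio of a negative test
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# A 2% v. 1% event rate is a 50% relative reduction but only a 1-point absolute reduction.
arr, rrr = absolute_and_relative_risk_reduction(0.02, 0.01)
print(f"ARR = {arr:.3f} ({arr * 100:.1f} percentage points), RRR = {rrr:.0%}")

# A positive result from a 90%-sensitive, 80%-specific test raises a 10% pretest
# probability to about 33%.
print(f"Post-test probability = {post_test_probability(0.10, 0.90, 0.80):.2f}")

With these made-up inputs, the same treatment effect looks very different when framed relatively versus absolutely, which is the kind of framing stability the CRIT is described as measuring.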
Suggested Citation
Tanner J. Caverly & Allan V. Prochazka & Brandon P. Combs & Brian P. Lucas & Shane R. Mueller & Jean S. Kutner & Ingrid Binswanger & Angela Fagerlin & Jacqueline McCormick & Shirley Pfister & Daniel D. Matlock, 2015.
"Doctors and Numbers,"
Medical Decision Making, vol. 35(4), pages 512-524, May.
Handle:
RePEc:sae:medema:v:35:y:2015:i:4:p:512-524
DOI: 10.1177/0272989X14558423