Author
Listed:
- Hector Rodriguez
- Ted Glahn
- Angela Li
- William Rogers
- Dana Safran
Abstract
Background: The use of item screeners is viewed as an essential feature of quality survey design because only respondents who are ‘qualified’ to answer questions that apply to a subset of the sample are directed to answer them. However, empirical evidence supporting this view is scant. Objective: This study compares the data quality resulting from administering ambulatory care experience measures that use item screeners versus tailored ‘not applicable’ options in response scales. Methods: Patients from the practices of 367 primary care physicians in 65 medical groups were randomly assigned to receive one of two versions of a well-validated ambulatory care experience survey. Respondents (n=2240) represent random samples of active established patients from participating physicians’ panels. The ‘screener’ survey version included item screeners for five test items, and the ‘no screener’ version included tailored ‘not applicable’ options in the response scales instead of screeners. The main outcome measures were indicators of data quality for the two item versions, including mean item scores, the level of missing values, the outgoing patient sample sizes needed to achieve adequate medical group-level reliability, and the relative ranking of medical groups. Results: Mean survey item scores generally did not differ by version. There were consistently fewer respondents to the ‘screener’ versions than to the ‘no screener’ versions. However, because the ‘screener’ versions improved measurement precision, smaller outgoing patient samples were needed to achieve adequate medical group-level reliability for four of the five items than with the ‘no screener’ version. The relative ranking of medical groups did not differ by item version. Conclusion: Screeners appear to reduce noise by ensuring that respondents who are not ‘qualified’ to answer a question are screened out rather than providing unreliable responses. The increased precision resulting from the ‘screener’ versions appears to more than offset their higher item non-response rates compared with the ‘no screener’ versions. Copyright Adis Data Information BV 2009
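The abstract's link between measurement precision and the outgoing sample size needed for adequate group-level reliability can be illustrated with the standard Spearman-Brown prophecy relation. The formula below is a generic sketch, not quoted from the article, and the symbols (assumed single-respondent reliability r_1 of an item at the medical-group level, target group-level reliability R, respondents per group n) are assumptions introduced only for illustration.
% Illustrative only: the standard Spearman-Brown prophecy relation, not a formula
% taken from the article. r_1 = assumed single-respondent reliability, R = target
% group-level reliability, n = respondents per medical group needed to reach R.
\[
  R = \frac{n\, r_1}{1 + (n - 1)\, r_1}
  \qquad\Longrightarrow\qquad
  n = \frac{R\,(1 - r_1)}{r_1\,(1 - R)}
\]
% Higher per-respondent precision (larger r_1) lowers the n required for a fixed R,
% which is the mechanism the abstract invokes for the 'screener' item versions.
Under this relation, even a modest gain in per-respondent precision can noticeably shrink the required sample, which is consistent with the finding that screener items offset their higher item non-response.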
Suggested Citation
Hector Rodriguez & Ted Glahn & Angela Li & William Rogers & Dana Safran, 2009.
"The Effect of Item Screeners on the Quality of Patient Survey Data,"
The Patient: Patient-Centered Outcomes Research, Springer; International Academy of Health Preference Research, vol. 2(2), pages 135-141, June.
Handle:
RePEc:spr:patien:v:2:y:2009:i:2:p:135-141
DOI: 10.2165/01312067-200902020-00009
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.