Title: What counts? An ethnographic study of infection data reported to a patient safety program.
Authors: Dixon-Woods, M; Leslie, M; Bion, J; Tarrant, C
First Published: Sep-2012
Citation: Milbank Quarterly, 2012, 90(3), pp. 548-591
Abstract:
Context: Performance measures are increasingly widely used in health care and have an important role in quality improvement. However, field studies of what organizations are doing when they collect and report performance measures are rare. An opportunity for such a study was presented by a patient safety program requiring intensive care units (ICUs) in England to submit monthly data on central venous catheter bloodstream infections (CVC-BSIs).
Methods: We conducted an ethnographic study involving approximately 855 hours of observational fieldwork and 93 interviews in 17 ICUs, plus 29 telephone interviews.
Findings: Variability was evident within and between ICUs in how they applied the program's inclusion and exclusion criteria, the data collection systems they established, practices in sending blood samples for analysis, microbiological support and laboratory techniques, and procedures for collecting and compiling data on possible infections. Those making decisions about what to report were not making decisions about the same things, nor were they making decisions in the same way. Rather than providing objective and clear criteria, the definitions used for classifying infections were seen as subjective, messy, and admitting the possibility of unfairness. Reported infection rates reflected localized interpretations rather than a standardized dataset across all ICUs. Variability arose not because wily workers were deliberately concealing, obscuring, or deceiving, but because counting was as much a social practice as a technical one.
Conclusions: Rather than being objective measures of incidence, differences in reported infection rates may reflect, at least to some extent, underlying social practices in data collection and reporting and variations in clinical practice. The variability we identified was largely artless rather than artful: currently dominant assumptions of gaming as a response to performance measures do not properly account for how categories and classifications operate in the pragmatic conduct of health care. These findings have important implications for assumptions about what can be achieved in infection reduction and quality improvement strategies.
DOI Link: 10.1111/j.1468-0009.2012.00674.x
eISSN: 1468-0009
Type: Journal Article
Appears in Collections: Published Articles, Dept. of Health Sciences
