Making progress in patient safety
Peter Pronovost, MD, PhD
The field of patient safety is maturing rapidly. Students are being trained, clinicians and researchers are designing interventions, health systems are implementing programmes, regulators are evaluating practices and governments are launching national agencies. Globally, even in developing countries, WHO is implementing safety projects with its international partners. Collectively, this demonstrates a significant amount of effort. Yet the evidence that this effort has improved patient outcomes is limited. In the US, for example, the Centers for Disease Control and Prevention reported a 58% reduction in bloodstream infections in intensive care units between 2001 and 2009. Yet after years of effort, this remains one of the few stories of national improvement in outcomes.
With a relative paucity of hard outcomes data, we can still say that we have learned much about the nature of safety in health care over the last decade. We learned that there are no quick fixes to the problem of patient safety. To improve, health care will need to keep score with valid measures, be guided by science and commit to working together. We learned that measures must be meaningful and valid to the clinicians who ultimately have to use them to improve safety. We learned that we need science about evidence-based practices and measures, science about interventions to implement those practices and how context informs implementation, and science about how to evaluate interventions so that we can ensure patients really are safer. We learned that no one discipline or single theory alone will be sufficient. To make progress, safety will need to draw deeply from human factors and systems engineering, sociology, psychology, anthropology, health services research and clinical medicine. To date, safety efforts have largely drawn relatively superficial lessons from aviation and other industries without fully embracing these disciplines or bringing the approaches that made those industries safe into health care. For example, health care is likely the only discipline in which operators (doctors, nurses, administrators), rather than trained technical experts in risk, investigate their mistakes. If health care is to improve safety, it will need to keep score, embrace science and collaborate.
To improve safety, health care will likely need to identify and implement multi-component interventions, based on theory or logic models about how to improve safety. These interventions could include activities for health system leaders, clinicians and patients. They could include efforts to persuade clinicians of the benefits of the behavior change, efforts to change social norms, economic incentives and regulatory efforts. They could also include efforts to change the system to make it easy for clinicians to perform the desired behaviors. Indeed, when we are aligned by a common goal and measure, we should use as many interventions and incentives as possible to improve safety.
Health care will also need to evaluate robustly the impact of patient safety efforts, identifying both whether patient safety has improved and why. The evaluation of patient safety efforts could be significantly improved. Many patient safety efforts fail to state their "theory of change", ie how the intervention will make an impact. Many efforts use a bias-prone pre-post evaluation design (ie one time period before an intervention and one time period after), and the data quality control commonly used in all other types of research is often missing from patient safety efforts. To be accountable to patients, funding agencies and governmental leaders, those seeking to improve patient safety need to ensure that they evaluate whether the intervention worked. The field is rapidly learning how to evaluate.
Luckily, there is new literature to guide us, literature describing the role of theory, providing publication guidelines, describing study design and research methods, and evaluating the importance of context. From this literature and our own experience at Johns Hopkins School of Medicine, where we house the WHO Patient Safety Office and its work on evaluation, some key points emerge.
Firstly, explicit decisions need to be made about the importance of evaluation in a patient safety effort. An evaluation could be either part of, or independent of, a patient safety intervention. Some patient safety efforts involve the collection and feedback of data as an integral component of the intervention. For example, efforts to reduce bloodstream infections commonly monitor and report infection rates. In these efforts, it is important to recognize the need for standardized data collection with robust quality control. If a safety intervention is implemented across a number of sites, it is essential that all sites collect standardized data; if they do not, it is difficult to make inferences about the impact of the intervention across all participating sites. Yet for many other safety efforts, measurement is much more difficult and it may not be feasible to monitor performance. Sometimes it may be wiser not to collect data than to collect significantly biased data.
Secondly, study design is important. Unfortunately, the field seems tangled in a lop-sided debate over the benefits of randomized versus non-randomized study designs. This debate is often unproductive. An oft-quoted saying in research circles is that you do not need a randomized trial to know whether parachutes work. This is true. Yet it is important to think more deeply and understand why it is true. The intervention (the use of parachutes) is standardized and insensitive to context (ie type of plane, elevation, location), the outcome is unambiguous and immediate, the causal pathway is short and direct, and the association is supported by strong theory. Few interventions in health care have these properties. Still, the field would be significantly improved if evaluations moved away from single-period pre-post designs toward multiple time series, in which data are collected at multiple pre-specified time periods before and after the intervention, ideally with a concurrent control whenever possible.
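The contrast between a single pre-post comparison and a multiple time series can be made concrete with a small simulation. The numbers below are invented purely for illustration: quarterly infection rates that decline steadily on their own (a secular trend), with no true intervention effect. A naive pre-post comparison still shows an apparent improvement, while comparing the pre- and post-intervention trends reveals that nothing changed:

```python
# Hypothetical data: quarterly catheter-related infection rates that were
# already declining on their own (a secular trend), with NO real
# intervention effect. A (hypothetical) intervention starts at quarter 8.
rates = [5.0 - 0.2 * q for q in range(16)]  # per 1,000 catheter-days

# Bias-prone pre-post design: one period before, one period after.
pre_post_drop = rates[7] - rates[8]
print(round(pre_post_drop, 2))  # 0.2 -- looks like the intervention "worked"

def slope(ys):
    """Least-squares slope of ys against time points 0..len(ys)-1."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    return sxy / sxx

# Multiple time series view: compare the trend before and after.
trend_change = slope(rates[8:]) - slope(rates[:8])
print(abs(trend_change) < 1e-9)  # True -- the trend did not change at all
```

A concurrent control group strengthens this further: if control sites show the same downward trend, the secular explanation becomes even harder to dismiss.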
Thirdly, adjusting methods to fit the situation is vital. It is important to recognize that no single theory or model fits all situations, that diversity and debate advance a field, that questioning methods and results is not blasphemy but rather the surest way to advance the field, and that at the end of the day, results, not just efforts, are what patients care about. Patient safety needs to create forums for open, respectful dialogue about ways to advance the field rather than assuming we have the answers.
Finally, document the context for the intervention as well as the results. The science of how to measure context is underdeveloped. It is important to show how and why the intervention changed over time, what barriers were faced and how they were overcome, and the relevant contextual factors, all in sufficient detail to allow replication. It is important to use qualitative and quantitative methods to understand whether and why an approach worked. If health care accomplishes even just this last guiding principle, learning will accelerate exponentially.
The evaluation of the patient safety initiative in the United Kingdom (UK) provides a model. Benning and colleagues' British Medical Journal article1 evaluated a large patient safety effort in the UK; although the effort itself did not include a control group, the evaluators compared participating hospitals with hospitals not receiving the intervention. While on the surface the intervention appeared to be successful, their evaluation demonstrated that hospitals not receiving the intervention improved to the same extent as those receiving it; in other words, the intervention provided no additional benefit over historical trends. The authors evaluated both processes and outcomes, and used qualitative and quantitative methods, providing rich insights into how the programme was received by clinicians and, importantly, how the results could inform future quality improvement efforts. This type of robust evaluation, followed by dialogue between implementers and researchers, should enhance learning.
Over the last decade, we learned that improving patient safety is an arduous task, yet it is vital and we have seen success stories. The burden of preventable harm around the globe is substantial, larger than previously thought, and blind to country and patient population boundaries. We are in this together and we must learn and improve together. To improve, we need to more fully embrace the interdisciplinary science of patient safety, we need to measure our results, and we need to hold ourselves accountable and learn from both successes and failures. It is time to roll up our sleeves, partner with other disciplines, welcome dissenting views and constructive dialogues, advance our theories and methods, design, implement and evaluate interventions, and ultimately reduce preventable harm.
1 Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, Franklin BD, Girling A, Hemming K, Carmalt M, Rudge G, Naicker T, Nwulu U, Choudhury S, Lilford R. Large scale organizational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ 2011;342:d195.