Measuring the unmeasurable
In 2000, whilst working as Chief Medical Officer in England’s Department of Health, I produced a report that was one of the first of its kind. The report, An Organization with a Memory, documented, for the first time in the UK and elsewhere, the tremendous gap between the safest possible health care and the care that the National Health Service was actually providing. A similar publication in the US that same year from the Institute of Medicine, To Err Is Human, estimated that between 44 000 and 98 000 deaths occur annually as a result of medical error. More people die in a given year as a result of medical errors than from motor vehicle accidents (43 458), breast cancer (42 297), or AIDS (16 516) (US Institute of Medicine, 2000).
Just over ten years later, we as a global community are still struggling to explain to our patients and their families whether health care is safer. This is in part because improving safety is hard work. It involves changing ingrained behaviours, redesigning expensive medical technology and realigning our thinking to focus on systems. But this is only part of the reason. Our inability to show the world what the dashboard of safety and quality is telling us is also because we have not sufficiently advanced the science of safety measurement.
The measurement conundrum
While frustrating and difficult, the measurement conundrum in patient safety improvement is very specific. There are two primary gaps in our approach to measurement in patient safety that need to be addressed. The first relates to how we deliver health care in today's technology-dependent hospitals and health centres around the world. The second relates to the nature of patient safety itself.
Health-care delivery in any setting, low or high resource, is complex. Where resources are available, health-care providers must keep up with a constant stream of new medicines, technologies and procedures. At WHO Patient Safety, we have given our practitioners simple tools to try to reduce the primary causes of harm in complicated procedures such as surgery. Our work with the Harvard School of Public Health in developing and launching the WHO Surgical Safety Checklist has had a major impact on how surgical teams around the world ensure their operating rooms are safe. It takes the amazingly complex world of surgery and distils it into a simple, low-cost checklist. The results, reported in the New England Journal of Medicine and since backed up by other studies, show as much as a 40% reduction in morbidity and mortality when the Checklist is used.
Let me illustrate, though, how complex these findings - and any findings of safety improvement - can be. One of WHO Patient Safety's complementary programmes, the High 5s Project, is taking the issue of safe surgery and the lessons of this checklist and focusing on avoiding wrong-site surgery through the application of a standard protocol. The project involves the leading safety and quality agencies in nine countries around the world, and at its April 2011 Steering Group meeting in Berlin, one of these agencies presented on why counting success in surgery is so hard. The presenter showed a video of an orthopaedic surgical team preparing for an operation. The team had clearly marked the correct site, as demanded by the protocol and by the Surgical Safety Checklist. However, in sterilizing the site (a step also indicated in the protocol and the Checklist), the team washed it four separate times, and the indelible marking on the knee was washed off. In a separate example, of a cranial surgery, the site was marked just lateral to the ear. Yet, in draping the patient, the site marking was covered up. The question the presenter posed was: do these examples count as "correct" or as "incorrect"? For us as practitioners, these are just examples of how day-to-day realities may not match our simplified counting methodology for evaluation.
The second measurement issue we face in improving safety is the very nature of the problem itself. We are trying to avoid errors, to avoid incidents which result in pain, fear, disability and, sometimes, death for patients. Improvement means fewer such incidents. What we are really trying to measure is something that is not happening. Health-care systems around the world have tried to institute patient safety reporting and learning systems to help track and assess trends in these "adverse events" or patient safety incidents. We have made some progress - but only a little - in coming to a common understanding of what terminology to use in measuring these patient safety incidents around the world. WHO Patient Safety's work towards an International Classification for Patient Safety is groundbreaking, though we still have a long way to go. WHO has also developed its Global Reporting and Learning Systems Community of Practice, where organizations managing these large reporting systems can develop common approaches to improving reporting and learning in patient safety. Yet the experience of the National Reporting and Learning System (NRLS) in the UK demonstrates how difficult this task can be. The NRLS currently holds five million incidents in its database, making it one of the largest - if not the largest - patient safety reporting and learning systems in the world. However, because reporting to the database is voluntary, there are enormous difficulties in describing what is not reported. What types of cases are not reported? Which practitioners do not report? What types of patients tend not to be reported on? Until we can answer the question "what do we not know?", it is difficult to be firm about exactly what we do know.
Enormous strides have been made in patient safety in the ten years since the release of An Organization with a Memory and To Err Is Human. We have developed an array of tools and approaches that are helping to keep patients safer all over the world. What we need now is a commensurate effort to systematically measure what works and how it works before we can confidently tell the next generation of patients that they are safer.