Decide monitoring strategies before setting targets
Results-oriented approaches not only make good sense for organizations pursuing accountability; they are essential. Yet they raise problems for the global health community for several reasons. First, a commitment to results may be less binding than a commitment to actions, because the latter obliges those making the commitment to act whether or not the results are achieved.1 Second, countries that commit to results-based approaches may have limited knowledge of the actions needed to achieve those results. Third, if results are not achieved, managers in the public sector or in international agencies are rarely held accountable, unlike their counterparts in the private sector.
Although these observations are not new, the past decade has seen greater emphasis placed on quantifiable targets. This has led to a proliferation of indicators and other ways of measuring whether targets are reached. Consumers would not trust companies whose sales reports were based on managers’ opinions, or on sales of other products. Such reports should be based on sales data for the product in question, and on whether sales rise or fall over a given period of time. In other words, results-based commitments require a relevant baseline indicator and direct measurement of subsequent changes in that indicator.
However, quantifiable targets are not necessarily easy to measure on a regular basis. Data availability is the key to monitoring progress towards targets such as the Millennium Development Goals (MDGs). Many users of statistics overlook this fact because the numbers continue to be published annually and are assumed to represent meaningful data.
The MDG indicators show this is not always the case. There is a stark contrast between the data available on under-five mortality, the indicator for MDG 4, and those on the maternal mortality ratio, against which MDG 5 is monitored. Under-five mortality rates are derived from civil registration systems, censuses and household surveys. In most countries, data points are available over time, and these are analysed to obtain the best current estimate. Measuring maternal mortality ratios has been a greater challenge because, compared with deaths among children, maternal deaths are rare events. In countries without a complete civil registration system and medical certification, large-scale household surveys or censuses using verbal autopsy techniques provide estimates of the ratio, since facility-based statistics are inherently biased. Even then, much uncertainty remains. As a consequence, the global maternal mortality ratio estimate is published only every five years, and in 2000, 40% of countries’ estimates were based on figures predicted by regression.2 Reliable assessment of maternal mortality trends is therefore limited.
For monitoring, it is important to distinguish between corrected and predicted statistics.3 Corrected statistics are observed values adjusted for known biases. Predicted statistics rely on a set of assumptions about the association between other factors and the quantity of interest, such as maternal mortality, to fill gaps in the data over time or space (from one population with data to another with limited or no data). Predicted statistics are useful for planning, decision-making, and advocacy for funds and for research and development investments when corrected statistics are not available. But they are not suitable for monitoring progress on what works and what does not.3
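The distinction can be sketched in a few lines of code. This is a minimal illustration with hypothetical figures and a deliberately crude linear model; it is not the method used by WHO or its partner agencies:

```python
# Illustrative sketch of corrected vs predicted statistics.
# All numbers and model coefficients below are hypothetical.

def corrected_estimate(observed_deaths, completeness):
    """Corrected statistic: an observed count, adjusted for a known
    bias (here, incomplete death registration)."""
    return observed_deaths / completeness

def predicted_estimate(coefficients, covariates):
    """Predicted statistic: no direct measurement at all; the figure
    is inferred from covariates via an assumed linear model."""
    return sum(b * x for b, x in zip(coefficients, covariates))

# Country A has data: 400 maternal deaths registered, and registration
# is believed to capture only half of all deaths.
print(corrected_estimate(400, 0.5))   # -> 800.0

# Country B has no data: the figure comes entirely from a hypothetical
# regression (intercept, plus two made-up covariates).
print(predicted_estimate([100.0, 0.5, 50.0], [1.0, 200.0, 4.0]))  # -> 400.0
```

The point of the sketch is that Country A's figure still rests on a measurement, whereas Country B's figure can change only if the model or its covariates change, which is why predicted statistics cannot reveal whether an intervention worked.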
Unfortunately, the MDG monitoring process relies heavily on predicted statistics.
The same applies to monitoring progress in major disease interventions. For example, the assessment of the recent change in measles mortality attributable to vaccination is based largely on statistics predicted from a set of covariates such as the number of live births, vaccine coverage, vaccine effectiveness and case-fatality ratios.4 Estimating causes of death over time is understandably difficult. However, that difficulty is no reason to rely on prediction when the quantity of interest can be measured directly;5 otherwise the global health community will continue to monitor progress on a spreadsheet with a limited empirical basis. This is simply not acceptable.
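The spreadsheet-like character of such estimates can be caricatured as follows. This is a simplified sketch of a covariate-driven natural-history calculation, with all parameter values hypothetical; the actual model of Wolfson et al. is considerably more elaborate:

```python
# Caricature of a natural-history model: measles deaths are not
# counted but predicted from covariates. Every input is an assumption.

def predicted_measles_deaths(live_births, coverage, effectiveness,
                             case_fatality, attack_rate_unprotected=1.0):
    """Deaths fall out of the arithmetic, not out of surveillance:
    change any assumed parameter and the 'trend' changes with it."""
    unprotected = live_births * (1.0 - coverage * effectiveness)
    cases = unprotected * attack_rate_unprotected
    return cases * case_fatality

# Hypothetical birth cohort: 1 000 000 births, 80% vaccine coverage,
# 85% vaccine effectiveness, 3% case-fatality ratio.
print(predicted_measles_deaths(1_000_000, 0.80, 0.85, 0.03))  # about 9 600
```

Note that an apparent decline in such predicted deaths may reflect nothing more than a revised coverage or case-fatality assumption, which is exactly why predicted statistics cannot substitute for direct measurement when monitoring progress.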
This mismatch arises partly from users’ demand for more timely statistics (i.e. on an annual basis) and partly from a lack of data and of effective measurement strategies among statistics producers. Users must be realistic: annual data on representative cause-specific mortality are difficult to obtain without complete civil registration or sample registration systems.
If such data are needed, the global health community must seek indicators that are valid, reliable and comparable, and must invest in data collection (e.g. adjusting facility-based data by using other representative data sources). Regardless of new disease-specific initiatives or the broader WHO Strategic Objectives, the key is to focus on a small set of relevant indicators for which well-defined strategies for monitoring progress are available. Only by doing so will the global health community be able to show what works and what fails. ■
- Schelling TC. Strategies of commitment and other essays. Cambridge: Harvard University Press; 2006.
- Maternal mortality in 2000: estimates developed by WHO, UNICEF and UNFPA. Geneva: WHO; 2004.
- Murray CJ. Towards good practice for health statistics: lessons from the Millennium Development Goal health indicators. Lancet 2007; 369: 862-73.
- Wolfson LJ, Strebel PM, Gacic-Dobo M, Hoekstra EJ, McFarland JW, Hersh BS. Has the 2005 measles mortality reduction goal been achieved? A natural history modelling study. Lancet 2007; 369: 191-200.
- Adjuik M, Smith T, Clark S, Todd J, Garrib A, Kinfu Y, et al. Cause-specific mortality rates in sub-Saharan Africa and Bangladesh. Bull World Health Organ 2006; 84: 181-8.
- Measurement and Health Information Systems, World Health Organization, 20 avenue Appia, 1211 Geneva 27, Switzerland.