Use and abuse of rapid monitoring to assess coverage during mass vaccination campaigns
Elizabeth T Luman a, K Lisa Cairns a, Robert Perry a, Vance Dietz a, David Gittelman a
In recent years, vaccination campaigns targeting a wide age range of children have been part of global strategies to eradicate polio1 and reduce measles mortality.2 Achieving uniformly high coverage in the target area is critical to reaching herd immunity and disease-control goals, and real-time monitoring allows rapid targeting of additional activities to areas with inadequate coverage. Yet monitoring coverage during a campaign using administrative data (i.e. dividing the number of vaccine doses administered by the size of the target population) is notoriously unreliable, with estimates commonly exceeding 100%.3,4 Incomplete tallying or reporting of the number of doses administered can bias the results, as can poorly documented population shifts, reliance on outdated census data and vaccination of individuals outside the targeted age group or geographic area. Probability-based surveys avoid these problems; however, they are often expensive and time-consuming, require statistical expertise to plan and do not provide immediate results, precluding corrective action during the campaign.
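The denominator problem described above can be made concrete with a minimal sketch. All figures here are hypothetical and serve only to show how an outdated or understated target population inflates an administrative coverage estimate beyond 100%.

```python
# Sketch of administrative coverage estimation, illustrating how a biased
# denominator inflates the result. All figures are hypothetical.

def administrative_coverage(doses_administered: int, target_population: int) -> float:
    """Coverage as doses administered divided by the target population, in percent."""
    return 100.0 * doses_administered / target_population

# Suppose 11 500 doses are tallied. With an outdated census figure of
# 10 000 children, apparent coverage exceeds 100% even though children
# may have been missed; the true target may be 13 000 after in-migration.
print(administrative_coverage(11_500, 10_000))  # 115.0 -> implausible
print(administrative_coverage(11_500, 13_000))  # ~88.5 -> the plausible true value
```

The same arithmetic biases the estimate downward when the census overstates the target population, which is why an administrative figure alone cannot confirm that coverage goals were met.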
In the late 1990s, the Pan American Health Organization (PAHO) developed a supervisory assessment tool for use during and after mass measles vaccination campaigns, and for monitoring routine services.5 The Rapid Coverage Monitoring strategy was designed to provide local authorities with “a quick impression of the completeness of vaccination”.6 While formalized during the measles initiative, the concept of rapid monitoring had been used in the Americas in the mid-to-late 1980s during the regional polio eradication effort (Jean Marc Olive, PAHO, personal communication). During campaigns, supervisors collaborate with local workers to identify neighbourhoods at greatest risk of poor coverage and conduct convenience household surveys in each to identify 20 children in the target age group. If more than one unvaccinated child is found, teams revisit the area and vaccinate the children who were missed. The results from multiple exercises can be compiled, and the percentage of areas that required revaccination can give an indication of the campaign’s success and the quality of social mobilization. However, rapid monitoring does not use a probability sample of the population and thus will not produce statistically valid estimates of vaccination coverage. Accordingly, PAHO recommends that it be used only as an “efficient method for validating coverage”.7
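The decision rule described above — sample 20 children in each high-risk area, revisit the area if more than one is unvaccinated, and compile the share of areas flagged — can be sketched as follows. The data and function names are illustrative only; the output is an operational indicator, not a coverage estimate.

```python
# Sketch of the PAHO-style rapid-monitoring decision rule: in each
# high-risk area, a convenience sample of 20 children is checked, and
# the area is revisited if more than one child is unvaccinated.
# Findings are hypothetical; this yields no statistically valid estimate.

REVISIT_THRESHOLD = 1  # more than one unvaccinated child triggers a revisit

def area_needs_revisit(unvaccinated_found: int) -> bool:
    """Flag an area for revaccination activities."""
    return unvaccinated_found > REVISIT_THRESHOLD

def share_of_areas_revisited(findings: list[int]) -> float:
    """Percentage of monitored areas that required a revisit -- an
    operational indicator of campaign quality, not vaccination coverage."""
    flagged = sum(area_needs_revisit(n) for n in findings)
    return 100.0 * flagged / len(findings)

# Unvaccinated children found among the 20 sampled in each of six areas:
findings = [0, 1, 3, 0, 2, 0]
print(share_of_areas_revisited(findings))  # two of six areas flagged
```

Because the areas are deliberately chosen as those at greatest risk of poor coverage, pooling such findings into a single "coverage" figure would be biased by design, which is the crux of the article's argument.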
In the past five years, the use of national and sub-national vaccination campaigns in sub-Saharan Africa and south-east Asia,1,2,8 accompanied by PAHO-style rapid monitoring, has expanded. WHO’s Regional Office for Africa encourages the use of rapid monitoring during vaccination campaigns as a “programmatic tool for local managers to identify poorly performing areas for immediate action”.9 However, we have observed that these activities are often referred to as “Rapid Convenience Surveys”, and results are frequently pooled and presented incorrectly as actual vaccination coverage estimates. This practice could lead to erroneous conclusions about population immunity and poor decisions on future vaccination activities. At a broader level, it undercuts good public health practice by leading health workers to misunderstand the need for, execution of and interpretation of probability-based surveys.
While recognizing the benefits of rapid monitoring, we stress the importance of restricting its use to that of a programmatic strategy during both campaigns and routine service supervision. We suggest that regional and national managers reinforce the use of rapid monitoring as a supervisory tool, that it not be referred to as a “survey”, and that the results be used solely to improve operational performance and not as an estimate of vaccination coverage. ■
- Centers for Disease Control and Prevention. Progress toward interruption of wild poliovirus transmission – worldwide, January 2005–March 2006. MMWR Morb Mortal Wkly Rep 2006; 55: 458-62.
- LJ Wolfson, PM Strebel, M Gacic-Dobo, EJ Hoekstra, JW McFarland, BS Hersh. Has the 2005 measles mortality reduction goal been achieved? A natural history modelling study. Lancet 2007; 369: 191-200.
- PL Zuber, KR Yameogo, A Yameogo, MW Otten. Use of administrative data to estimate mass vaccination campaign coverage, Burkina Faso, 1999. J Infect Dis 2003; 187: S86-90.
- GD Huhn, J Brown, W Perea, A Berthe, H Otero, G LiBeau, et al. Vaccination coverage survey versus administrative data in the assessment of mass yellow fever immunization in internally displaced persons – Liberia, 2004. Vaccine 2006; 24: 730-7.
- H Izurieta, L Venczel, V Dietz, G Tambini, O Barrezueta, P Carrasco, et al. Monitoring measles eradication in the region of the Americas: critical activities and tools. J Infect Dis 2003; 187: S133-9.
- V Dietz, L Venczel, H Izurieta, G Stroh, ER Zell, E Monterroso, et al. Assessing and monitoring vaccination coverage levels: lessons from the Americas. Pan Am J Public Health 2004; 16: 432-42.
- Pan American Health Organization, Expanded Program on Immunization in the Americas. The use of rapid coverage monitoring: the vaccination campaign against measles and rubella in Ecuador. EPI Newsl 2003; 25: 1-3.
- World Health Organization. Immunization Surveillance Assessment and Monitoring. Supplementary Immunization Activities Calendar. Available at: http://www.who.int/immunization_monitoring/en/globalsummary/siacalendar/padvancedsia.cfm
- World Health Organization, Regional Office for Africa (WHO AFRO). Evaluation guidelines for measles supplemental immunization activities. WHO AFRO, 2006. Available at: http://www.afro.who.int/measles/guidelines/
- Centers for Disease Control and Prevention, Atlanta, GA, 30333, United States of America.