Category 6: Monitoring and evaluation
Conducting monitoring and evaluation
Both monitoring and evaluation require a means of assessing the changes or results produced by an intervention. It is common to develop indicators for this purpose, i.e. variables such as attendance rates, the proportion of people with knowledge of a certain subject, or infection rates, which can be tracked over time to define and measure change. The development of good indicators requires clarity about the purposes of interventions. Indicators should therefore relate directly to the goal, aims, objectives and activities set out in intervention planning documents.
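To illustrate how an indicator is tracked over time, the sketch below computes a simple coverage indicator (the proportion of an estimated target population contacted by outreach) at several points in time. All figures, field names and dates are hypothetical, not real programme data:

```python
# Sketch: tracking a simple coverage indicator over time.
# The figures below are illustrative, not real programme data.

# Periodic counts: sex workers contacted by outreach, and the
# estimated target population at each reporting point.
records = [
    {"month": "2023-01", "contacted": 120, "target_population": 600},
    {"month": "2023-04", "contacted": 210, "target_population": 620},
    {"month": "2023-07", "contacted": 305, "target_population": 640},
]

def coverage(record):
    """Indicator: proportion of the target population contacted."""
    return record["contacted"] / record["target_population"]

for rec in records:
    print(f'{rec["month"]}: coverage = {coverage(rec):.0%}')

# Change over the reporting period is the difference between
# the latest and the earliest value of the indicator.
change = coverage(records[-1]) - coverage(records[0])
print(f"Change in coverage: {change:+.1%}")
```

The same pattern applies to any countable indicator (condoms distributed, clinic attendance): define the indicator once, record the raw counts routinely, and compare values across reporting periods.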
Key issues in indicator development
Some aspects of a sex work intervention can be easily identified and counted, such as the number of condoms distributed, the number of patients attending a clinic, or the number of sex workers with STI symptoms.
The precise measurement of change in disease rates usually requires technical assistance.
Other information, however, such as information on the enabling environment or on empowerment, may be highly descriptive or qualitative. It cannot be easily counted but needs to be documented through regular discussions and reviews of key events, opinions and perceived changes. In particular, psychological, political and socioeconomic impacts can be comparatively difficult to analyse because they involve complex concepts and processes. It may be possible to develop quantitative proxy indicators that are thought to be associated with a complex concept; for example, a proxy indicator of empowerment might be the freedom to leave a brothel, the freedom to vote or the ability to refuse unsafe sex with a client.
Design of monitoring and evaluation systems
Monitoring and evaluation should include all those activities associated with or affected by the intervention in question.
Monitoring can involve specially designed activities, such as regular meetings with sex workers or project staff in order to assess their opinions on project activities or to discuss the analysis of field notes. Other monitoring activities can be incorporated into routine project recording systems that are collated and analysed at regular intervals, such as records of condom distribution or patient numbers.
Decisions have to be taken at the beginning of an intervention about the rigour with which a project is to be evaluated. If, for example, the intervention is taking place at a pilot or demonstration site, it may be necessary to establish statistical associations between intervention inputs and outputs or to compare one intervention site with another or with a control group. Such evaluations usually require outside technical assistance. For most interventions, however, it is usually enough to demonstrate that changes are occurring in key areas of interest.
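Where a statistical association between an intervention site and a control or comparison site is required, the underlying calculation is often a comparison of two proportions. A minimal sketch, using a two-proportion z-test with hypothetical survey counts (the sites, sample sizes and outcome are illustrative assumptions):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test statistic for the difference between
    two sample proportions (e.g. reported consistent condom use
    at an intervention site versus a comparison site)."""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical survey counts: 168 of 240 respondents at the
# intervention site, 120 of 240 at the comparison site.
z = two_proportion_z(168, 240, 120, 240)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at the 5% level
```

In practice such comparisons involve further design questions (sampling, confounding, cluster effects), which is why evaluations of this rigour usually require outside technical assistance, as noted above.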
Both monitoring and evaluation can use a variety of quantitative, qualitative and participatory methods. Quantitative measurement enables easy comparison of changes over time, while qualitative methods are useful for obtaining insights into community perceptions and processes of change.
Simple participatory tools can be developed to facilitate community involvement even if the participants are not literate, e.g. the use of maps, beads, charts, pictures or colour codes.
It is important to define how much information is needed and to determine the local capacity for analysis. The accumulation of too many data or the collection of data that are too complicated for analysis at the local level are common mistakes.