Step 6: Plan for Monitoring and Evaluation
Illustrative examples of monitoring and evaluation indicators for each of the commodities are provided below; these examples should be adapted to the country context.
Monitoring and evaluation (M&E) is a critical component of any program activity because it provides data on the program’s progress towards achieving its goals and objectives.
Although planning for M&E should be included in the communication strategy, avoid developing the complete monitoring plan (indicators, sampling, tools, who will monitor, frequency of data collection, etc.) at the time of strategy development. At that stage, focus on identifying the indicators that should be incorporated into the program's plan. M&E indicators should be based on formative research and should show whether the key messages and strategies are having the desired effect on the intended audience.
A full M&E plan should then be developed as a separate program document. The M&E plan should outline which indicators to track, how and when data will be collected, and how the data will be used once analyzed. A variety of data sources can be used to collect M&E data, and because M&E activities vary in cost, staffing, and technology requirements, it is important to assess the scope and context of the program to choose the most applicable methodology. While some lower-cost M&E options allow trends in demand for services to be identified, they may not provide insight into the causal effects of activities or the way in which the program worked. To measure cause and effect, larger program-specific data collection activities geared towards evaluation are needed. See below for examples of low- and high-cost options.
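For teams that keep the plan in a simple electronic format, the following sketch (a hypothetical illustration, not a template from this toolkit) shows one way to record the elements named above for a single indicator: the indicator itself, its type, data source, collection frequency, responsible party, and how the analyzed data will be used. All field names and values are invented.

```python
# Hypothetical structure for one entry of an M&E plan; every field name and
# value below is illustrative, not prescribed by the toolkit.
me_plan = [
    {
        "indicator": ("Percent of women of reproductive age who recall a key "
                      "campaign message about contraceptive implants"),
        "indicator_type": "output",
        "data_source": "program-specific household survey",
        "collection_frequency": "baseline and endline",
        "responsible": "external research partner",
        "data_use": ("reviewed at quarterly program management meetings to "
                     "adjust message content and channel mix"),
    },
    # Additional indicators would follow the same structure.
]

# Print a compact summary of the plan for review meetings.
for row in me_plan:
    print(f"{row['indicator_type'].upper()}: {row['indicator']}")
    print(f"  Source: {row['data_source']} ({row['collection_frequency']})")
    print(f"  Responsible: {row['responsible']}")
    print(f"  Use of data: {row['data_use']}")
```

Recording every indicator against the same set of fields makes it easier to spot gaps, for example an indicator with no planned use for its data, before data collection begins.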
While the collection of M&E data tends to receive the most attention, it is also critical to have a process for analysis and review of the collected data. M&E data should be used to inform program changes and new program development. It is best to build these M&E review processes into existing program management activities to allow for regular dissemination of M&E indicators.
Data Sources/Systems
Low-cost option
The low-cost option will make use of existing data sources and opportunities to gain insight into the program and its association with changes in demand for, or uptake of, contraceptive implants, female condoms, and emergency contraceptive pills. However, it will only allow for the identification of trends and will not allow change to be attributed to a given program or to specific program activities. Illustrative data sources for a low-cost option include:
- Formative research for key messages, positioning, development of materials and media choice (focus groups with intended audiences and in-depth interviews with members of primary and influencing audiences)
- Evaluation of communication campaigns (focus groups with intended audiences; in-depth interviews with primary and influencing audience members; adding questions to omnibus surveys on campaigns, messages and activities)
- Service statistics (Information from clinics and providers such as referral cards and attendance sheets)
- Communication channel statistics (Information from television or radio stations on the audience reach of mass media activities)
- Omnibus surveys (Addition of questions related to program exposure and impact to omnibus surveys)
- Provider self-reported data (Small-scale surveys among providers about services rendered and prescription practices; small-scale retail audits among pharmacies and rural drug shops on medicines requested and offered)
- Demographic and Health Surveys (Trends in family planning and fertility, approximately every five years)
High-cost option
The high-cost option will make use of representative program-specific surveys and other data collection methods to gain considerable insight into the effects of the program and the way in which it worked. Illustrative data sources for a high-cost option include:
- Formative research for key messages, positioning, development of materials and media choice (focus groups; in-depth interviews; photo narratives or observation with families, in clinics or pharmacies, or alongside CHWs to observe and record practices)
- Service statistics (Information from clinics and providers such as referral cards and attendance sheets)
- Communication channel statistics (Information from television or radio stations on the audience reach of mass media activities)
- Provider self-reported data (Surveys among providers about services rendered; product and sales audits among wholesalers and government procurement agencies; retail audits at pharmacies and drug shops to check medicines requested and dispensed)
- Large, nationally representative program-specific surveys (focus on issues related to knowledge, perceptions, acceptability and use) – may include baseline survey, follow-up and endline to measure changes and outcomes
- Client exit interviews (to assess whether client counseling took place, whether clients were offered a range of contraceptive methods, and user satisfaction with services delivered including their perceptions, experience and intentions)
M&E indicators
M&E indicators should include process, output, outcome and impact indicators:
- Process indicators: Measure the extent to which demand creation activities were implemented as planned.
- Program output indicators: Measure (a) changes in audiences’ opportunity, ability and motivation to use the commodity, and (b) the extent to which these changes correlate with program exposure.
- Behavioral outcome indicators: Measure (a) changes in audiences’ behavior, and (b) the extent to which these changes correlate with program exposure.
- Health impact indicators: Measure changes in health outcomes.
To increase the utility of M&E data, indicators should be disaggregated to facilitate more in-depth analysis of program performance, for example by gender, geographic location, and type of provider.
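As a concrete illustration of disaggregation, the sketch below assumes a hypothetical endline survey file with gender, region, campaign-exposure, and intention-to-use columns (all file and column names are invented) and computes a single behavioral outcome indicator overall, by gender and region, and by campaign exposure. The exposure comparison shows association only, not causation.

```python
import pandas as pd

# Hypothetical survey data: columns 'gender', 'region', 'exposed_to_campaign'
# and 'intends_to_use' (1 = yes, 0 = no). File and column names are invented.
df = pd.read_csv("endline_survey.csv")

# Overall value of the outcome indicator.
overall = df["intends_to_use"].mean() * 100
print(f"Intends to use the method (overall): {overall:.1f}%")

# Disaggregated by gender and geographic location.
by_group = (
    df.groupby(["gender", "region"])["intends_to_use"]
      .mean()
      .mul(100)
      .rename("percent_intending_to_use")
)
print(by_group)

# Compared by campaign exposure, to check whether changes correlate with
# program exposure (association only; not evidence of cause and effect).
by_exposure = df.groupby("exposed_to_campaign")["intends_to_use"].mean().mul(100)
print(by_exposure)
```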
Common biases that programmers should be aware of when designing, implementing and interpreting M&E include:
- Self-selection bias – for example, a current family planning user may be more interested in and willing to answer a survey about family planning than someone who does not approve of it or has never tried family planning before.
- Social desirability bias – following exposure to health promotion initiatives, intended audiences may feel pressured to give ‘right answers’ to survey questions, e.g. reporting positive attitudes towards a commodity even though they do not really feel that way. As demand generation interventions succeed in shaping positive social norms, social desirability bias may become more of a challenge in M&E.
Illustrative Examples of M&E Indicators for Family Planning Commodities
Illustrative examples of M&E indicators are available below for contraceptive implants, female condoms, and emergency contraception. These should be adapted to the country context. For more information on audience segments and profiles, refer to the additional resources provided.
- M&E indicators for Contraceptive Implants
- M&E indicators for Emergency Contraception
- M&E indicators for Female Condom
The links above let you view each set of examples by step, either as a preview (which does not require download) or as a download in MS Word or PDF format. A full version of each commodity strategy is also available in MS Word or PDF format under “Adaptable Strategies” in the right sidebar. The full strategy includes both guidance and illustrative content for the entire strategy.