As with any SBCC effort, continuous monitoring allows programs to gauge progress, identify challenges, report to donors and other stakeholders, and make necessary course corrections to maximize positive impact. Regular monitoring of integrated SBCC programs alerts teams to the need for adjustments in messaging, channels, emphasis, and other aspects of the communication strategy. Integrated programs should monitor critical indicators against targets for all key topics of focus. Closely tracking indicators for problems or negative outcomes will also help make the case to stakeholders that integration is worth pursuing, especially in technical areas that are less eager to advance integration. Watching for adverse outcomes and remaining open to adjusting the approach is critical.

Integrated programs can use adaptive management processes and real-time monitoring to identify and cope with unexpected outcomes. Through real-time monitoring, integrated programs collect data regularly in a format that is quickly available, process the data efficiently so it is digestible and usable, and set up systems for reviewing and using the data to make decisions. This regular cycle of data collection, processing, sharing, and use enables adaptive management: the process of coping with the uncertainty of implementing integrated programs by revisiting and revising monitoring models as the program progresses.
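The real-time monitoring check described above can be sketched as a simple comparison of incoming indicator values against targets, flagging any that fall far enough behind to warrant review at an adaptive-management meeting. This is a minimal illustration only; the indicator names, target values, and 10% tolerance are all hypothetical.

```python
# Minimal sketch of a real-time monitoring check for an integrated SBCC
# program. Indicator names, targets, and tolerance are hypothetical.

def flag_indicators(latest, targets, tolerance=0.10):
    """Return indicators falling more than `tolerance` below target,
    so the team can review them and decide on course corrections."""
    flagged = {}
    for name, target in targets.items():
        value = latest.get(name)
        if value is not None and value < target * (1 - tolerance):
            flagged[name] = {"value": value, "target": target}
    return flagged

# Hypothetical monthly data for two topic areas in an integrated program
targets = {"fp_counseling_sessions": 500, "newborn_care_visits": 300}
latest = {"fp_counseling_sessions": 510, "newborn_care_visits": 240}

flagged = flag_indicators(latest, targets)
```

In this sketch only the underperforming indicator is flagged for review; a real system would feed such flags into the regular data-review meetings described above rather than triggering automatic changes.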
Integrated SBCC programs are likely candidates for complexity-aware monitoring, an approach that is meant to track the unpredictable. It can be used alongside performance monitoring and is especially relevant when cause and effect relationships are not well understood. Promising practices for complexity-aware monitoring include sentinel indicators, stakeholder feedback, process monitoring of impacts, most significant change, and outcome harvesting. See USAID’s Discussion Note on Complexity-Aware Monitoring for more information. (Source: https://usaidlearninglab.org/library/complexity-aware-monitoring-discussion-note-brief)
Measuring Integration Performance
To build the evidence base for integrated SBCC, programs must identify integration-specific performance indicators that capture how, and how well, integration is working.
In its Guidance for Evaluating Integrated Global Development Programs, FHI360 categorizes performance indicators as either sector-specific indicators or value-added indicators.
Sector-specific indicators are the standardized indicators that programs are required or recommended to collect: for example, unmet need indicators for family planning and exclusive breastfeeding indicators for newborn health. Each sector or donor may have its own distinct variations of indicators, which can complicate data collection and comparison. For example, indicators that describe the same outcome may vary in the timeframe they refer to (e.g. within the last 3 months, 6 months, or 1 year), the age range of respondents (e.g. defining adolescents as 15-19, 15-24, or 18-24 years), or other demographic characteristics of respondents (e.g. modern contraceptive prevalence rate (mCPR) using married women, unmarried women, or all women as the denominator).
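The denominator problem described above can be made concrete with a small calculation: the same survey responses yield different mCPR values depending on which women the indicator definition counts. The respondent records below are invented for illustration.

```python
# Illustrative sketch: the same (hypothetical) survey data yields
# different mCPR values depending on the denominator definition used.

respondents = [
    {"age": 17, "married": False, "modern_method": False},
    {"age": 22, "married": True,  "modern_method": True},
    {"age": 24, "married": False, "modern_method": True},
    {"age": 29, "married": True,  "modern_method": False},
    {"age": 34, "married": True,  "modern_method": True},
]

def mcpr(women, in_denominator):
    """Share of women in the chosen denominator using a modern method."""
    denom = [w for w in women if in_denominator(w)]
    users = [w for w in denom if w["modern_method"]]
    return len(users) / len(denom) if denom else 0.0

# Three common denominator choices produce three different rates:
all_women     = mcpr(respondents, lambda w: True)
married_women = mcpr(respondents, lambda w: w["married"])
adolescents   = mcpr(respondents, lambda w: 15 <= w["age"] <= 19)
```

Because each denominator choice produces a different rate from identical data, integrated programs comparing indicators across sectors or donors need to document which definition each source uses.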
Value-added indicators measure effects beyond what would have occurred in a vertical program. They can be quantified both in terms of amplified effects (e.g. reaching more people or achieving greater ease of use) and in terms of synergy (e.g. reaching new population groups).
(Source: FHI360’s Guidance for Evaluating Integrated Global Development Programs)
Develop indicators that are most appropriate for measuring the performance of complex, integrated SBCC programs. Depending on the goal of your program, your indicators might measure:
Data Collection Tools and Processes
As part of your RM&E plan, you will also need to determine how to collect relevant data for each indicator. As with vertical programs, general principles apply: integrate data collection into larger health information systems (including electronic medical records) and build on what already exists. For integrated SBCC programs, consider the following when developing your data collection tools and processes.
Measurements of inter-institutional collaboration and networking can be useful indicators for integrated programs with a goal of collaboration. These indicators should systematically measure the kinds of interactions organizations have with each other, including the personnel who attended, as well as the types of activities they did together (e.g. planning, budgeting). A capacity assessment in Ethiopia identified activities that strengthen competencies, including joint training, information and experience exchange, standardization of processes and tools, utilization of research for decision making, mentorship, financial planning, and engagement by leadership. (Source: HC3 Ethiopia capacity assessment)
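One way to measure the interactions described above systematically is to record each joint activity as a structured event and tally how often pairs of organizations work together and which activity types dominate. The organization names, activity types, and attendance figures below are hypothetical.

```python
# Hypothetical sketch of tallying inter-institutional interactions
# recorded during monitoring. All names and values are illustrative.
from collections import Counter

interactions = [
    {"orgs": ("MOH", "NGO-A"), "activity": "joint planning", "attendees": 8},
    {"orgs": ("MOH", "NGO-A"), "activity": "joint training", "attendees": 25},
    {"orgs": ("NGO-A", "NGO-B"), "activity": "joint planning", "attendees": 5},
]

# How often each pair of organizations worked together (pairs are
# sorted so ("A", "B") and ("B", "A") count as the same relationship).
pair_counts = Counter(tuple(sorted(i["orgs"])) for i in interactions)

# Which activity types were most common across the network.
activity_counts = Counter(i["activity"] for i in interactions)
```

Counts like these can feed the collaboration indicators mentioned above, for example by reporting how many organization pairs held joint planning sessions in a given quarter.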