DATA COLLECTION METHODS
Health programmers use qualitative and quantitative research to monitor and evaluate their programs. The choice of whether to use quantitative or qualitative data collection techniques, or a mixed methods approach using both, depends on a number of factors, such as:
- The type of problem identified and outcome being assessed (some issues are best assessed with qualitative methods, others with quantitative methods)
- The theory guiding design
- The purpose of data collection. For example, is the assessment for formative research, monitoring or process evaluation, or impact evaluation?
Quantitative data collection is numerical and is often used to count the variable of interest. Quantitative data collection tools used for monitoring include program logs, attendance sheets, and other tools that collect counts of referrals made, referrals completed, tools developed, products distributed, workshops conducted, broadcasts aired, rapid diagnostic tests (RDTs) completed, artemisinin-based combination therapies (ACTs) prescribed, and so on. Quantitative data collection for evaluation purposes can draw on surveys, censuses, and health records. Quantitative studies can be designed to be representative of the entire population of interest so that the findings are generalizable, reflecting the entire population rather than just the individuals included in the study. For malaria case management, for example, a household survey might count the number of people who recall seeing or hearing a message about prompt care seeking for fever. Quantitative data can also be used to count how many people believe something or behave a certain way.
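As a minimal illustration of this kind of counting, the sketch below tallies hypothetical monitoring log entries with Python's standard library; the event names are invented for the example.

```python
from collections import Counter

# Hypothetical log entries: each string records one monitored event.
log_entries = [
    "referral_made", "referral_completed", "referral_made",
    "workshop_conducted", "rdt_completed", "rdt_completed",
    "act_prescribed", "referral_made",
]

# Tally how many times each monitored event occurred.
counts = Counter(log_entries)
print(counts["referral_made"])   # count of referrals made
```

In practice these counts would come from program logs or attendance sheets rather than a hard-coded list, but the tallying logic is the same.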
From an SBCC evaluation standpoint, it is important to measure the reach and exposure of materials and messages among the intended population. Once exposure has been measured, it is possible to measure changes in knowledge and behavior among those exposed and unexposed. It is also helpful to measure how many messages respondents recall. An effective SBCC campaign will show a dose response, meaning a strong correlation between the number of messages recalled and increases in knowledge, attitudes, and behavior. Note that when attempting to show a dose response between exposure to messages and changes in knowledge, attitudes, or behavior, it is important to control for ownership of the commodities related to your messaging. For example, it would be inappropriate to compare exposure to radio messages between those who do and do not own radios.
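A dose-response check of this kind can be sketched in a few lines of Python. Everything below is hypothetical: the survey records, the restriction to radio owners (to avoid confounding by radio access), and the small helper that computes a Pearson correlation.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical survey records: (owns_radio, messages_recalled, knowledge_score)
respondents = [
    (True, 0, 2), (True, 1, 3), (True, 2, 5), (True, 3, 6), (True, 4, 8),
    (False, 0, 2), (False, 0, 3), (False, 1, 2), (False, 0, 4),
]

# Restrict the dose-response check to radio owners, so that access to
# the channel does not confound the comparison.
owners = [(m, k) for owns, m, k in respondents if owns]
r = pearson_r([m for m, _ in owners], [k for _, k in owners])
print(f"Dose-response correlation among radio owners: r = {r:.2f}")
```

A real analysis would use survey weights and statistical software, but the logic is the same: correlate the dose (messages recalled) with the response (knowledge score) within the group that has access to the channel.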
Examples of Quantitative Data Sources
Rapid CATCH Survey: A Rapid Core Assessment Tool on Child Health (CATCH) is a standardized questionnaire to be used with young mothers/caretakers in community households. It covers a broad range of maternal and child health indicators, and can reveal specific household behaviors and care-seeking patterns critical to designing and evaluating interventions.
Examples of Qualitative Methods
Collection Technique | Description | Resources |
---|---|---|
Focus Group Discussions | A group discussion often using a semi-structured guide asking participants to share their opinions and experiences. This form of inquiry is helpful in drawing out social and cultural norms among a group of people who are similar in characteristics that are associated with the study interest. | Qualitative Research Methods: A Data Collector’s Field Guide |
In-depth Interviews | Asking questions of one person in a private setting in order to understand their perspective on a topic. This data collection technique is an appropriate way to explore a sensitive issue that respondents are not likely to speak about openly in a group setting. | Guide for Designing and Conducting In-Depth Interviews for Evaluation Input |
Key Informant Interviews | Key informant interviews are a way to gather first-hand knowledge about a topic from an individual who is deemed an authority on the topic. | Conducting Key Informant Interviews |
Participant Observation | Participant observation is a way to collect information about people while spending time in their presence. It is unique in that it allows researchers to observe both verbal and non-verbal communication, as well as interactions between individuals. | Participant Observation as a Data Collection Method |
Most Significant Change | Collecting a series of stories about change and systematically selecting the most significant example. | The MSC Technique: A Guide to its Use |
National/Sub-National Level
National/Sub-national level | Quantitative and qualitative tools | Types of questions answered |
---|---|---|
National: Multiple channel mass media (e.g. radio, television, billboards) | | |
Subnational: Multimedia, multi messages based on barrier analysis, community groups | | |
Subnational: Training of mother and child health community volunteers in health promotion | | |
Subnational: Care Groups | | |
Community/Health Center Level
Community/Health Center Level | Quantitative and Qualitative Tools | Types of questions answered |
---|---|---|
Community, health center: IEC, mass media, counseling | | |
Community: Mass media, posters, distribution of messaging materials, message dissemination through road-shows and established community-based organizations | | |
Health Center: In-service training, Job aids | | |
Health facility, community: SMS/Mobile messaging | | |
To improve the design of SBCC interventions in malaria case management, formative research prior to program design is critical. The results from the formative research are then used to design a more effective SBCC program. Conducting formative research includes reviewing existing information through literature reviews or secondary data analysis, as well as collecting and analyzing qualitative and/or quantitative data to better understand the audience and how to frame the SBCC messaging. The goals of formative research can include:
- Identify previous SBCC programs that have been designed for similar audiences for similar issues and learn from them
- Understand the media habits of the audience and identify what channels they have access to, use, and trust
- Identify the behaviors, perceptions, and information to promote (For example, what do caregivers of children under 5 years of age in the target area know, think, and feel regarding prompt care seeking for febrile children?)
- Identify the factors that hinder or motivate the behavior
Adapted from Evidence-based Malaria SBCC: From Theory to Program Evaluation
Routine monitoring, including process evaluation, is used to track program activity progress towards expected goals. It will identify areas of excellence and deficiency, which should then inform midcourse corrections. Preliminary successes have the added benefit of boosting the morale and commitment of program staff. Finally, SBCC monitoring can help inform future programs.
For the M&E plan, it is helpful to describe the various sources of monitoring data and how often these data will be collected. It is important that the monitoring plan includes plans to monitor both activities and audiences. It is also important to describe how the data will flow up the system, where it will be stored, and who is responsible for it.
Activity Monitoring
- Activity report forms provide information on trainings and community mobilization activities to track how many activities were conducted and how many people participated. The SBCC program needs to create a system for collecting these forms regularly from implementers and checking that they are filled out correctly. Mobile reporting, supervision visits, and data review meetings can bolster this system.
- Media monitoring reports are created by third-party agencies that track which radio or TV materials are being aired, at what time, and how often. This allows the program to negotiate “make goods,” or airings to make up for under-broadcasting. When media monitoring services are not available, broadcast logs can be requested from stations. Station logs can be verified by having community-based listeners also listen to and log the dates and times of broadcasts.
- Passive or recorded observations, checklists, mystery clients, client exit interviews, and record reviews may be useful for tracking service provider behaviors.
- Community-based household surveys, SMS questionnaires, and omnibus surveys track changes in knowledge and reach at the community level.
Audience Monitoring
- Health management information systems (HMIS) and logistics management information systems (LMIS) can be helpful for tracking service utilization, sales, and stock-outs. Since they are often used and managed by other parties, the SBCC program may need to invest funds in working with those parties to improve the system. Such activities may include providing modems for data uploads, training staff to use forms correctly, and conducting data quality meetings.
Adapted from: A Guide to Developing M&E Plans for Malaria SBCC Programs
Impact, or outcome, evaluation takes place after the activity is finished; however, it must be planned for at the beginning of the project, as it is most useful when the data are compared to a baseline survey conducted prior to program implementation.
An important concept to consider when thinking about impact evaluation is the difference between correlation and causation. Causation requires that the input of interest (for example, a program activity) occur before the outcome of interest (for example, prompt care seeking for febrile children under 5 by caregivers). Correlation, in contrast, measures the association between exposure to the program input and the desired behavior at the same point in time. If two variables are correlated, this does not necessarily mean that the input caused the behavioral outcome. There are three main types of impact evaluation research designs: experimental, quasi-experimental, and non-experimental. Only with an experimental design can the team assess causation.
Impact evaluations for SBCC need to be designed to answer three questions: a) Was the program effective? b) Did it change behavior? c) How did it work? SBCC program messages influence behaviors indirectly, through the knowledge, attitudes, and beliefs that drive behavioral decisions. Understanding the specific attitudes through which messages affected behavior is important, since this helps take the lessons from a successful program and apply them elsewhere.
One way to establish a link between exposure and behavior is to use self-reported exposure to SBCC messages in household surveys to construct groups of exposed and unexposed individuals. In this approach, a series of questions in the household survey asks each respondent about their exposure to SBCC messages and to specific program elements such as logos and slogans.
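A sketch of this approach, with hypothetical survey records and field names, might look like:

```python
# Hypothetical household survey responses: whether the respondent recalled
# a campaign slogan (self-reported exposure) and whether they report
# prompt care seeking (the behavior of interest).
surveys = [
    {"recalled_slogan": True,  "sought_care_promptly": True},
    {"recalled_slogan": True,  "sought_care_promptly": True},
    {"recalled_slogan": True,  "sought_care_promptly": False},
    {"recalled_slogan": False, "sought_care_promptly": True},
    {"recalled_slogan": False, "sought_care_promptly": False},
    {"recalled_slogan": False, "sought_care_promptly": False},
]

def care_seeking_rate(records):
    """Share of respondents in a group who report prompt care seeking."""
    return sum(r["sought_care_promptly"] for r in records) / len(records)

# Construct the exposed and unexposed groups from self-reported recall.
exposed = [r for r in surveys if r["recalled_slogan"]]
unexposed = [r for r in surveys if not r["recalled_slogan"]]

print(f"Exposed:   {care_seeking_rate(exposed):.0%}")
print(f"Unexposed: {care_seeking_rate(unexposed):.0%}")
```

A real evaluation would also test whether the difference between groups is statistically significant and adjust for potential confounders.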
A brief description of the three evaluation designs can be found below:

Experimental Design
Study participants are randomly assigned to intervention and comparison groups.
Cons:
- Expensive
- Can be unethical
- Difficult to implement

Quasi-experimental Design
- Longitudinal study: repeated measures on the same study participants over time
- Pretest, post-test: assess changes in scores for one group
Pros:
- Ethical
- Not difficult to implement
- Can assess change over time
Cons:
- Difficult to ensure causality is measured
- Expensive
- Time-intensive
- Loss to follow-up is a concern
- Limited generalizability

Non-experimental Design
- Post-test: compare results between those exposed to the SBCC intervention and those not exposed
Pros:
- Fast
- Less expensive
Cons:
- Differences observed between groups may be due to differences between study samples and not exposure to the SBCC intervention
- Bias due to non-response
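As a minimal illustration, the pretest, post-test design mentioned above can be reduced to a change-score computation; the scores below are hypothetical.

```python
# Hypothetical knowledge scores for one group, measured before and after
# the SBCC campaign (same respondents, same order in both lists).
pretest = [3, 4, 2, 5, 3]
posttest = [5, 6, 4, 7, 5]

# Mean within-person change in knowledge score.
mean_change = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"Mean change in knowledge score: {mean_change:+.1f}")
```

Without a comparison group, a change like this cannot be attributed to the program alone, which is exactly the causality limitation noted for this design.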
It is important for programmers to be aware of potential pitfalls of any research design and to take these into account when designing and drawing conclusions from the M&E outcomes. The Research Methods Knowledge Base provides a comprehensive overview of these methods.
Adapted from: A Guide to Developing M&E Plans for Malaria SBCC Programs and Evidence-based Malaria SBCC: From Theory to Program Evaluation
How-To Guide: Guidance on how to develop a monitoring and evaluation plan