This brief discusses programmatic and contextual factors to consider when choosing an evaluation design.
Recommended steps include defining your evaluation objectives and parameters, formulating key outputs and outcomes, and reviewing and selecting an evaluation design and research methods.
The brief reviews common evaluation designs:
- Experimental or random assignment designs
- Waitlist/overflow designs
- Matched case designs
- Propensity score matching
- Comparison site designs
- Time series designs
- Pre-post designs
- Case studies
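To make one of these designs concrete, the sketch below illustrates propensity score matching on entirely made-up data: it fits a simple one-covariate logistic regression (by gradient ascent, written out in plain Python so the example is self-contained), matches each treated unit to the control unit with the closest propensity score, and averages the matched outcome differences. The data, coefficients, and effect size are illustrative assumptions, not figures from the brief.

```python
import math
import random

random.seed(0)

# Hypothetical participants: (covariate x, treated flag, outcome y).
# Treated outcomes are shifted up by 2 so the "true" effect is about 2.
treated = [(x, 1, 5 + 0.5 * x + random.gauss(0, 1)) for x in [4, 5, 6, 7, 8]]
control = [(x, 0, 3 + 0.5 * x + random.gauss(0, 1)) for x in range(1, 9)]

def propensity(x, b0, b1):
    # Logistic model of the probability of treatment given x.
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

def fit_logistic(data, steps=2000, lr=0.05):
    # Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood.
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, t, _y in data:
            p = propensity(x, b0, b1)
            g0 += t - p
            g1 += (t - p) * x
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

b0, b1 = fit_logistic(treated + control)

# Nearest-neighbor matching: pair each treated unit with the control unit
# whose propensity score is closest, then average the outcome differences.
diffs = []
for x_t, _, y_t in treated:
    p_t = propensity(x_t, b0, b1)
    _x_c, _, y_c = min(control, key=lambda c: abs(propensity(c[0], b0, b1) - p_t))
    diffs.append(y_t - y_c)

att = sum(diffs) / len(diffs)
print(f"Estimated effect on the treated: {att:.2f}")
```

With this construction the estimate should land near the built-in effect of 2, though the noise terms keep it from being exact. In practice the matching and the propensity model would come from a statistics library rather than hand-rolled code; the point here is only the shape of the design: model treatment assignment, match on the score, then compare matched outcomes.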
The brief also provides examples of process and outcome evaluation questions, as well as of quantitative and qualitative research methods. Finally, it critiques common myths and misconceptions about evaluation.