The data analysis stage of the Implementation Phase includes all of the steps necessary to make meaning of, and draw conclusions from, cleaned, transformed, and/or converted data.
Depending on the evaluation questions guiding your work, analysis may take one of several forms. If your evaluation questions relate to a) assessing the need for a program, b) assessing the plausibility of the program's underlying causal theory, or c) assessing the quality of the program's implementation, analysis will involve comparing the collected and organized data against a previously determined standard.

If your evaluation questions relate to d) understanding the experience or developmental process of individual participants engaged in the program, analysis will involve an in-depth description of how those processes unfold.

If your evaluation questions relate to e) assessing the average effect of the program on participants, analysis will involve showing patterns of association or a causal relationship between the program and participant change. Demonstrating a causal relationship requires eliminating competing explanations, including the possibility that an observed relationship between program participation and participant change arose by chance.

Both quantitative and qualitative methods are useful for all five kinds of analysis, and they can often be used in sequence or in parallel for a more complete answer to evaluation questions. When a causal relationship between program participation and participant change is (or is not!) found, qualitative methods are useful for understanding why and how.
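One common way to rule out chance as an explanation for an observed difference between program participants and a comparison group is a permutation test: if group labels were irrelevant, shuffling them should routinely produce differences as large as the one observed. The sketch below illustrates the idea using hypothetical outcome scores (the data, function name, and group sizes are invented for illustration, not drawn from this text):

```python
import random
from statistics import mean

def permutation_p_value(treatment, comparison, n_permutations=10_000, seed=0):
    """Estimate the probability that a difference in group means at least
    as large as the observed one could arise by chance alone
    (two-sided permutation test on the difference in means)."""
    rng = random.Random(seed)
    observed = mean(treatment) - mean(comparison)
    pooled = list(treatment) + list(comparison)
    n = len(treatment)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = mean(pooled[:n]) - mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations

# Hypothetical post-program outcome scores.
treatment = [78, 85, 90, 72, 88, 95, 81, 84]
comparison = [70, 75, 80, 68, 74, 79, 73, 71]

p = permutation_p_value(treatment, comparison)
print(f"p-value: {p:.3f}")
```

A small p-value indicates that chance alone is an unlikely explanation for the observed difference; it does not by itself establish causation, since other competing explanations (selection, maturation, attrition, and so on) must still be ruled out by design.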
The section that follows provides guidance for analysis where the data are quantitative and the evaluation question concerns the average effect of the program on participants.