Your evaluator will discuss with you whether there were any problems with the way the data were collected and whether these problems affect how your findings should be interpreted.

In impact evaluations, a common problem arises when members of either your treatment or control/comparison group drop out over time. This is called sample attrition. Any attrition reduces sample size, which weakens the evaluation's ability to detect program effects that may truly be there. In addition, attrition is typically not completely random: higher-risk participants are more likely to drop out, leaving an over-representation of lower-risk participants in follow-up data. For example, suppose findings from pre-/post-surveys show statistically significant increases in partners' ability to communicate effectively. Does this mean the program worked? It may have, but the findings could also reflect attrition of higher-risk participants who may have had higher-conflict relationships. Findings are then overly positive, when in fact little or none of the positive change may be due to the program.
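The inflation described above can be made concrete with a small simulation. The sketch below is purely illustrative (all numbers and the dropout rule are assumptions, not from any real program): the program is given no true effect, yet because lower-scoring, higher-risk participants are made more likely to drop out, a naive pre/post comparison shows an apparent improvement.

```python
import random

random.seed(0)

# Hypothetical example: 200 participants with a communication score at
# baseline and follow-up. By construction the program has NO true effect:
# the follow-up score is just the baseline score plus measurement noise.
n = 200
baseline = [random.gauss(60, 15) for _ in range(n)]
followup = [b + random.gauss(0, 5) for b in baseline]

# Assumed dropout rule: higher-risk (lower-scoring) participants are much
# less likely to complete the follow-up survey.
retained = [i for i in range(n)
            if random.random() < (0.3 if baseline[i] < 55 else 0.9)]

# A naive comparison uses every baseline response but only the retained
# follow-up responses, so the follow-up sample skews lower-risk.
mean_pre_all = sum(baseline) / n
mean_post_retained = sum(followup[i] for i in retained) / len(retained)
print(mean_post_retained - mean_pre_all)  # positive despite no true effect

# Comparing pre and post scores within the same retained sample removes
# most of the artifact.
mean_pre_retained = sum(baseline[i] for i in retained) / len(retained)
print(mean_post_retained - mean_pre_retained)  # close to zero
```

The second comparison illustrates why evaluators typically restrict pre/post analyses to participants observed at both time points, and then separately ask whether those who remained differ from those who left.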

For this reason, it is critical that your evaluator employ data collection efforts aimed at obtaining high completion rates from all participants. It is also important that your evaluator examine whether data are randomly or systematically missing and employ statistical techniques to adjust as best they can for any bias resulting from missing data. And if, despite these efforts, there is systematic attrition, you need to understand the nature of the bias it introduces and interpret findings accordingly.
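One simple way an evaluator can examine whether data are systematically missing is to compare the baseline characteristics of participants who completed follow-up against those who dropped out. The sketch below, with made-up illustrative scores, uses a standardized mean difference for this check; the ~0.25 threshold is a common rule of thumb, not a fixed standard.

```python
import math
import statistics

# Hypothetical baseline scores; in practice these come from the study data.
completer_baseline = [62, 71, 58, 66, 74, 69, 63, 70, 65, 72]
dropout_baseline = [48, 55, 51, 60, 46, 53, 57, 50]

def standardized_difference(a, b):
    """Difference in group means divided by the pooled standard deviation.
    Values above roughly 0.25 are often taken as a flag that the groups
    differ meaningfully, i.e. that attrition may be systematic."""
    pooled_sd = math.sqrt((statistics.variance(a) + statistics.variance(b)) / 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

d = standardized_difference(completer_baseline, dropout_baseline)
print(d)  # a large value suggests dropouts differ from completers at baseline
```

A large imbalance like this one would not fix the problem, but it tells the evaluator that missingness is unlikely to be random and that adjusted analyses and cautious interpretation are warranted.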

Other Resources