To be a good consumer of evaluation findings, it is important to understand some key concepts and terminology that your evaluator will be using. Below are definitions of important terms. For a fuller glossary of key program evaluation terms, see the glossary in The Program Manager’s Guide to Evaluation (a government-sponsored publication).

  1. Baseline Data: initial information on program participants or other program aspects, collected before participants receive services or the program intervention begins. Baseline data are often gathered through intake interviews and observations and serve as a point of comparison for later measures of change in your participants, program, or environment.
  2. Bias: anything that skews the ability to accurately measure the effects of a program or its outcomes.
  3. Database: an accumulation of information that has been systematically organized for easy access and analysis. Databases typically are computerized.
  4. Descriptive Analyses: describe the "what": the program's context and history, its evolution, and its current operations, including program inputs, activities, outputs, and participants' immediate outcomes.
  5. Dependent Variable: a variable in an experiment or study (e.g., a test score) whose changes are determined by the presence or degree of one or more independent variables. In most evaluations, outcome variables are dependent variables.
  6. Effect Size: represents the magnitude of the difference between two groups or the strength of the relationship between two variables (see the worked sketch after this list).
  7. Explanatory Analyses: examine associations between and among variables, seeking to address the "why" and "how" behind the "what."
  8. Independent Variable: a variable in an experiment whose presence or degree determines the change in the dependent variable. If the dependent variable is a test score, one of the independent variables could be time spent studying for the test.
  9. Management Information System (MIS): an information collection and analysis system, usually computerized, that facilitates access to program and participant information. It is usually designed and used for administrative purposes. The types of information typically included in an MIS are service delivery measures, such as sessions, contacts, or referrals; staff caseloads; client socio-demographic information; client status; and treatment outcomes. Many MISs can be adapted to meet evaluation requirements.
  10. Odds Ratio: the ratio of the odds of an event occurring in one group to the odds of it occurring in another group (illustrated after this list).
  11. Qualitative Data: information that is difficult to measure, count, or express in numerical terms. For example, a participant's impression about the fairness of a program rule/requirement is qualitative data.
  12. Quantitative Data: information that can be expressed in numerical terms, counted, or compared on a scale. For example, improvement in a child's reading level as measured by a reading test is quantitative data.
  13. Regression Analysis: a statistical technique used to predict an outcome from one or more other variables (see the sketch after this list).
  14. Standard Deviation: measures the degree to which individual values vary from the mean (or average). A high standard deviation means that the responses vary greatly from the mean (illustrated after this list).
  15. Statistical Significance: a finding is statistically significant when it is unlikely to have occurred by chance alone, indicating a relationship between the specific factors being studied.
  16. T test: a statistical test that determines whether the means of two groups are statistically different from each other (see the example after this list).
  17. Validity: the extent to which a measurement instrument or test accurately measures what it is supposed to measure. For example, a reading test is a valid measure of reading skills but is not a valid measure of total language competency.
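
For readers who want to see the arithmetic behind an effect size, here is a minimal Python sketch using Cohen's d; the group names and scores are hypothetical, and the simple pooled standard deviation assumes roughly equal group sizes.

```python
import statistics

# Hypothetical post-test scores for a program group and a comparison group.
program = [78, 85, 90, 72, 88, 95, 81]
comparison = [70, 75, 80, 68, 74, 79, 72]

mean_p, mean_c = statistics.mean(program), statistics.mean(comparison)
sd_p, sd_c = statistics.stdev(program), statistics.stdev(comparison)

# Pooled standard deviation (simple form for roughly equal group sizes).
pooled_sd = ((sd_p ** 2 + sd_c ** 2) / 2) ** 0.5

# Cohen's d: the difference between group means in standard-deviation units.
cohens_d = (mean_p - mean_c) / pooled_sd
print(f"Cohen's d: {cohens_d:.2f}")
```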
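
The odds ratio can be illustrated with a hypothetical 2x2 table of counts; the numbers below are made up for illustration.

```python
# Hypothetical counts: how many people in each group achieved the outcome
# of interest (e.g., completed the program) and how many did not.
achieved_program, not_achieved_program = 40, 10
achieved_comparison, not_achieved_comparison = 25, 25

odds_program = achieved_program / not_achieved_program            # 4.0
odds_comparison = achieved_comparison / not_achieved_comparison   # 1.0

# Odds ratio: how many times larger the program group's odds are.
odds_ratio = odds_program / odds_comparison
print(f"Odds ratio: {odds_ratio:.1f}")
```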
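
Regression analysis can be sketched with a single predictor using ordinary least squares; the tutoring hours and post-test scores below are hypothetical.

```python
# Hypothetical data: hours of tutoring (independent variable) and
# post-test scores (dependent/outcome variable).
hours = [2, 4, 5, 7, 9, 10]
scores = [60, 65, 70, 74, 82, 85]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(scores) / n

# Ordinary least-squares slope and intercept for one predictor.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, scores))
sxx = sum((x - mean_x) ** 2 for x in hours)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Predicted score for a participant with 8 hours of tutoring.
predicted = intercept + slope * 8
print(f"score = {intercept:.1f} + {slope:.2f} * hours; predicted at 8 hours: {predicted:.1f}")
```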
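
A minimal sketch of the mean and standard deviation of a small, hypothetical set of scores, using Python's standard library:

```python
import statistics

scores = [70, 72, 75, 90, 93]          # hypothetical test scores
mean_score = statistics.mean(scores)   # 80.0
sd = statistics.stdev(scores)          # sample standard deviation

print(f"mean = {mean_score:.1f}, standard deviation = {sd:.1f}")
# A larger standard deviation means the scores spread farther from the mean.
```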
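
Finally, a minimal sketch of an independent-samples t test, assuming the third-party SciPy library is installed; the scores are hypothetical, and the p-value connects the test back to statistical significance.

```python
from scipy.stats import ttest_ind

# Hypothetical post-test scores for a program group and a comparison group.
program = [78, 85, 90, 72, 88, 95, 81]
comparison = [70, 75, 80, 68, 74, 79, 72]

t_stat, p_value = ttest_ind(program, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (commonly below 0.05) is usually read as statistically
# significant: a difference this large is unlikely to arise by chance alone.
```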


Other Resources