Outcomes
Student Outcomes
Types of ISAT Studies
Effect Size Studies
Sidebar: What did these effect size studies tell us?
The “effect size” analyses complemented the momentum and ROC analyses by providing a standard, quantifiable metric. They also introduced the concept of “Natural Growth” into the CMSI research lexicon. The concept was originally introduced by Hill et al. (2007) in their paper, “Empirical Benchmarks for Interpreting Effect Sizes in Research.” “Natural Growth” refers to the growth or change expected in the absence of an intervention. The idea is that intervention effects (the intervention here being implementation of CMSI-supported instructional materials) are compared to a typical year of “natural growth” for the target populations, that is, the changes in performance that would be expected in the comparison group. cite
The “effect size” analyses differed from the ROC and momentum analyses in one respect: they used scale scores as the outcome measure rather than %ME. The research applied three benchmarks to the results of the two-year CMSI K-5 mathematics interventions. cite These benchmarks were:
- Normative expectations for change
- Policy relevant performance gaps
- Similar intervention comparisons
This research used controls similar to those in the annual summary reports and the %ME longitudinal research. Again, external variables were controlled by comparing individual cohorts to the comparison group of schools. In addition, the study separated data by cohort, program, grade, and implementation period. Initial starting points were controlled by examining individual cohort performances over time. However, demographic conditions (ethnicity, gender, school size, etc.) were not controlled in either the cross-cohort or cohort-specific analyses.
By introducing the concept of “Natural Growth” to CMSI research, the effect size report addressed one of the shortcomings of the annual reports and the %ME longitudinal reports. It provided a useful standard against which the magnitude of the outcomes of CMSI program implementations (what size of change, or effect) could be evaluated in a meaningful context. In other words, evaluators could compare the changes in performance by CMSI students with the average amount of “natural growth” found among similar students nationally.
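The report does not reproduce its exact formula, but an effect size of this kind is typically a standardized mean difference (Cohen's d): the gap between treatment and comparison group means on the scale score, divided by a pooled standard deviation. A minimal sketch, with hypothetical scores that are not CMSI data:

```python
from statistics import mean, stdev

def effect_size(treatment_scores, comparison_scores):
    """Standardized mean difference (Cohen's d) between two groups,
    using a pooled sample standard deviation."""
    n1, n2 = len(treatment_scores), len(comparison_scores)
    s1, s2 = stdev(treatment_scores), stdev(comparison_scores)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment_scores) - mean(comparison_scores)) / pooled_sd

# Illustrative ISAT-style scale scores (hypothetical values)
cmsi_cohort = [212, 218, 225, 231, 240, 222, 228]
comparison = [210, 215, 220, 226, 233, 218, 224]
d = effect_size(cmsi_cohort, comparison)
```

Because the result is expressed in standard-deviation units rather than raw scale-score points, it can be compared directly against normative benchmarks such as a typical year of “natural growth.”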
The analysis technique used in this research is important because it demonstrated how small effect sizes can have large, practical implications in different settings. The effect size research is also important because it demonstrated the need for effect sizes to be understood within the context in which they are being used, as suggested in the literature (Lipsey, 1989). For the CPS population, an effect size of .20 would be quite large: in the context of “natural growth,” it would represent almost half a year's growth in performance over and above what might be expected of the average U.S. student.
Researchers controlled for student mobility by restricting the effect size analysis to students who took both the 3rd- and 5th-grade ISAT exams in the same school. This limited the study to students who could have been exposed to CMSI implementation in all years of the study. cite Restricting the analysis to these students also helped control for possible effects of different local school cultures on students transferring schools between the two ISAT administrations (3rd to 5th grade). These individual “Same Student-Same School” analyses were only possible for students making the 3rd- to 5th-grade transition. (8th-grade students were out of the system by the next testing period, and most 3rd-grade transitions occurred prior to the start of state math testing in the 4th and 7th grades.)
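The “Same Student-Same School” restriction amounts to a simple filter over student test records. A sketch under assumed record fields (`student_id`, `school`, `grade`, `score` are hypothetical names, not the study's actual data layout):

```python
def same_student_same_school(records):
    """Keep only students with both a 3rd- and a 5th-grade score
    recorded at the same school; transfers are excluded."""
    by_student = {}
    for r in records:
        by_student.setdefault(r["student_id"], []).append(r)
    kept = []
    for sid, recs in by_student.items():
        school_by_grade = {r["grade"]: r["school"] for r in recs}
        if (3 in school_by_grade and 5 in school_by_grade
                and school_by_grade[3] == school_by_grade[5]):
            kept.append(sid)
    return kept

# Illustrative records (hypothetical data)
records = [
    {"student_id": "A", "school": "S1", "grade": 3, "score": 205},
    {"student_id": "A", "school": "S1", "grade": 5, "score": 228},  # kept
    {"student_id": "B", "school": "S1", "grade": 3, "score": 199},
    {"student_id": "B", "school": "S2", "grade": 5, "score": 221},  # transferred: excluded
]
stable_students = same_student_same_school(records)
```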
The effect size research bridged the gap between the less stringently controlled annual reports and longitudinal %ME analyses, and the more rigorously controlled multivariate analyses used in the “Controlled Regression” studies discussed in the following section.