Student Outcomes

Types of ISAT Studies


Effect Size Studies

Sidebar: What did these effect size studies tell us?

The “effect size” analyses complemented the momentum and ROC analyses by providing a standard, quantifiable metric. They also introduced the concept of “Natural Growth” into the CMSI research lexicon. The concept was originally introduced by Hill et al. (2007) in their paper, “Empirical Benchmarks for Interpreting Effect Sizes in Research.” “Natural Growth” refers to the growth or change expected in the absence of an intervention. The idea is that intervention effects (the intervention here being implementation of CMSI-supported instructional materials) are compared to a typical year of “natural growth” for the target populations, that is, the changes in performance that would be expected in the comparison group.

The “effect size” analyses differed from the ROC and momentum analyses in one way: the studies used scale scores rather than %ME as the outcome measure. The research applied three benchmarks to the results of the two-year CMSI K-5 mathematics interventions. These benchmarks were:

- “Normative expectations for change” asked the question, “How does this performance compare to that which might be expected on average for students in this grade?”
- “Policy-relevant performance gaps” asked, “How did the performance of one group compare to that of another?”
- “Similar intervention comparisons” asked, “How did the results of this intervention compare to those reported, on average, by similar interventions?”

This research used controls similar to those in the annual summary reports and %ME longitudinal research. Again, external variables were controlled by comparing individual cohorts to the comparison group of schools. In addition, the study separated data by cohort, program, grade, and implementation period. Examining individual cohort performances over time controlled for initial starting points. However, demographic conditions (ethnicity, gender, school size, etc.) were not controlled in either the cross-cohort or cohort-specific analyses.

By introducing the concept of "Natural Growth" to CMSI research, the effect size report addressed one of the shortcomings of the annual reports and the %ME longitudinal reports. It provided a useful standard against which the magnitude of CMSI program outcomes (the size of the change, or effect) could be evaluated in a meaningful context. In other words, evaluators could compare the changes in performance by CMSI students with the average amount of "natural growth" found among similar students nationally.

The analysis technique used in this research is important because it demonstrated how small changes in performance, expressed as effect sizes, can have large, practical implications in different settings. The effect size research is also important because it demonstrated that effect sizes must be understood within the context in which they are being used, as suggested in the literature (Lipsey, 1989). For the CPS population, an effect size of .20 would be quite large: in the context of "natural growth," it would represent almost half a year's growth in performance over and above what might be expected of the average U.S. student.
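The scale of that interpretation can be checked with simple arithmetic. As a sketch, assume Hill et al.'s (2007) normative benchmark of roughly .52 SDs of annual math growth for the 3rd-4th grade transition (reported elsewhere in this report); the effect size of .20 is illustrative:

```python
# Hypothetical illustration: express an intervention effect size
# as a fraction of one year of "natural growth".
natural_growth_per_year = 0.52  # Hill et al. (2007) benchmark, 3rd-4th grade math
intervention_effect = 0.20      # illustrative effect size

# How many years of typical growth does the intervention effect represent?
years_of_growth = intervention_effect / natural_growth_per_year
print(round(years_of_growth, 2))  # 0.38
```

Under these assumed numbers, a .20 effect corresponds to roughly four-tenths of a year of typical growth, approaching the "half a year" characterization above.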

Researchers controlled for student mobility by restricting the effect size analysis to students who took both the 3rd and 5th grade ISAT exams in the same school. This limited the study to students who could have been exposed to CMSI implementation in all years of the study. Restricting the analysis to these students also helped control for possible effects of different local school cultures on students transferring schools between the two ISAT administrations (3rd to 5th). These individual "Same Student-Same School" analyses were only possible for students making the 3rd to 5th grade transition. (8th grade students were out of the system by the next testing period, and most 3rd-grade transitions occurred prior to the start of state testing of math in the 4th and 7th grades.)

The effect size research bridged the gap between the less stringently controlled annual reports and longitudinal %ME analyses, on the one hand, and the more rigorously controlled multivariate analyses used in the "Controlled Regression" studies discussed in the following section, on the other.


Using K-12 national norming samples from seven major standardized tests in reading and six in math, Hill et al. (2007) measured annual growth in achievement by calculating the difference between the mean scale scores of adjacent grade groups and converting the difference to a standardized effect size by dividing it by the pooled standard deviation for the two adjacent groups. Their 3rd through 5th grade findings were of particular interest to our current study. Hill et al. (2007:3) reported the average annual mean effect size in math for the 3rd-4th grade transition to be .52 SDs with a standard deviation of +/-.11. They found the 4th-5th grade transition to be .56 SDs with a standard deviation of +/-.08. The concept of "Natural Growth" allowed us to compare effect sizes from different population groups along a single standard, namely the expected average "natural growth" of students at different grade levels.
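The Hill et al. computation described above can be sketched in a few lines. The scale scores and sample sizes below are hypothetical stand-ins, not values from the actual norming samples:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation for two adjacent grade groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def annual_growth_effect_size(mean_lo, sd_lo, n_lo, mean_hi, sd_hi, n_hi):
    """Standardized annual growth between adjacent grades:
    (higher-grade mean - lower-grade mean) / pooled SD."""
    return (mean_hi - mean_lo) / pooled_sd(sd_lo, n_lo, sd_hi, n_hi)

# Hypothetical mean scale scores for two adjacent grades in a norming sample
es = annual_growth_effect_size(200.0, 25.0, 1000, 213.0, 27.0, 1000)
print(round(es, 2))  # 0.5
```

With these made-up inputs, the computed annual growth of about .5 SDs happens to fall near the .52-.56 range Hill et al. report for the 3rd-5th grade math transitions.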
This report does not include an 8th grade analysis. That analysis was not conducted because the research was longitudinal and 8th graders could not be followed into high school. Further, it was not possible to track program effects between the lower and middle grades because different programs were being implemented at the different grade levels (EDM & MTB vs. MTM & CMP).
It should be noted that this is an intent-to-treat study. We do not actually know whether these students were exposed to the CMSI-supported curricula, or at what level of implementation fidelity or quality. For these studies, all we know is that they attended schools that self-reported using the materials.