
CMSI %ME Longitudinal Studies Report

Sidebar: What did these longitudinal studies tell us?

This study complemented the annual CMSI Research Summaries by focusing on changes in the %ME over the period 2001-2008. The underlying research question in this report was "How did CMSI schools perform historically as compared to a Comparison group of schools that as of 2007 had not been exposed to the CMSI curriculum intervention?" The study introduced two new metrics based on the %ME outcome measure – "momentum" and "rate of change" (ROC). In this context, "momentum" referred to the absolute change in %ME from a fixed starting point (e.g., the mean ISAT score at an initial point, referred to as Time 1 or T1). ROC, on the other hand, referred to the proportionate change from that same fixed starting point, i.e., the change divided by the starting value.
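The report does not spell out these formulas, so the sketch below is a minimal reading of the definitions above; all function and variable names (momentum, rate_of_change, pct_me_t1, pct_me_tn) are invented here for illustration.

```python
# A minimal sketch of the two metrics as defined above; names and values
# are illustrative assumptions, not taken from the report itself.

def momentum(pct_me_t1: float, pct_me_tn: float) -> float:
    """Absolute change in %ME from the fixed starting point (Time 1)."""
    return pct_me_tn - pct_me_t1

def rate_of_change(pct_me_t1: float, pct_me_tn: float) -> float:
    """Proportionate change in %ME from the fixed starting point (Time 1)."""
    return (pct_me_tn - pct_me_t1) / pct_me_t1

# Example: a school's %ME rises from 40.0 at T1 to 55.0 in a later year.
print(momentum(40.0, 55.0))        # 15.0 percentage points
print(rate_of_change(40.0, 55.0))  # 0.375, a 37.5% proportionate gain
```

Because ROC scales the change by the starting score, it allows cohorts with very different initial %ME levels to be compared on a common footing, which is why the report's conclusions take the initial starting score into account.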

External factors were controlled for statistically by comparing individual cohorts to the group of comparison schools and by breaking the data down by cohort, program, grade, and initial score (%ME at T1). The results of this study were important for two reasons.
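A hypothetical sketch of that kind of breakdown is shown below, using pandas; the DataFrame and its columns (school, group, cohort, grade, pct_me_t1, pct_me_tn) are illustrative assumptions about the data layout, not the report's actual dataset.

```python
# A hypothetical sketch of the cohort-versus-comparison breakdown described
# above; column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "school":    ["A", "B", "C", "D"],
    "group":     ["CMSI", "CMSI", "Comparison", "Comparison"],
    "cohort":    ["2003-04", "2003-04", None, None],
    "grade":     [5, 5, 5, 5],
    "pct_me_t1": [40.0, 35.0, 42.0, 38.0],  # %ME at Time 1
    "pct_me_tn": [55.0, 50.0, 48.0, 44.0],  # %ME at a later year
})

df["momentum"] = df["pct_me_tn"] - df["pct_me_t1"]
df["roc"] = df["momentum"] / df["pct_me_t1"]

# Mean momentum and ROC for each group within each grade; analogous
# breakdowns can be made by cohort, program, or initial-score band.
print(df.groupby(["group", "grade"])[["momentum", "roc"]].mean())
```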

First, they were the first long-term outcome findings focused on the relationship between CMSI materials implementation and the percentage of students meeting or exceeding (%ME) state standards. Second, the results indicated that the conclusions that could be drawn depended on the outcome variable used. The results were unambiguous in at least one respect: they demonstrated that, in terms of "momentum" and "rate of change," all of the CMSI cohorts showed performance patterns (changes in the %ME) that were at least as good as, if not better than, those observed among the corresponding comparison groups. More plainly, these studies suggested that students in schools implementing CMSI performed as well as, if not better than, students in schools that were not implementing CMSI.

The %ME findings in this report shared a shortcoming with the annual reports: the fact that cohort groups and comparison groups differed from year to year made conclusions based on single-year student performance problematic. That is, the possibility that changes in annual performance were a function of the students comprising each group, the changing composition of schools in the comparison group, and/or the ISAT test itself [1] could not be ruled out. Like the annual reports, these longitudinal analyses were cross-sectional; they were based on populations of students in the different cohorts and grades at different points in time.

The ROC findings, along with the results of the momentum analyses, indicated that when the initial starting score was taken into account, all of the CMSI cohorts (those beginning implementation between 2003-2004 and 2007-2008) posted better performances than the comparison group over the period beginning with their initial CMSI implementation year. While the longitudinal analyses avoided the problem of group variance associated with cross-sectional comparisons of annual figures, they had two shortcomings: they could not indicate how the groups performed during the actual two-year CMSI implementation period, and they did not provide a convenient standard against which meaningful implications could be drawn.


[1] There are several versions of the ISAT exam. While in any given year the version of the ISAT used impacts all students taking the exam equally, longitudinal assessments must confront the problem that the version of the test used in one year may be different from that used the next. Hence, like the composition of the student population and the students themselves, the content of the test itself is also not exactly the same each year, though ISBE uses equating and form development practices that are standard in the large-scale assessment field to attempt to minimize these year-to-year differences.