Student Outcomes

Types of ISAT Studies


CMSI Research Summaries

Sidebar: What did these annual research studies tell us?

The Office of Research, Evaluation, and Accountability (REA) conducted annual CMSI analyses, the “CMSI Research Summaries.” These studies began in 2005 and focused on changes in ISAT scale scores in schools implementing at least one of the four CMSI math instructional programs: Everyday Math (Grades 3-5), Math Trailblazers (Grades 3-5), Connected Math Project (Grades 6-8), and Math Thematics (Grades 6-8).

The annual research summaries broke down ISAT scale scores by year, subject, instructional material, grade level, and cohort (a group of schools that began implementation of CMSI-supported curricula in the same school year). Performance was assessed both as a categorical outcome (%ME) and as a continuous variable (ISAT scale scores). In other words, the reports presented the percentage of students meeting or exceeding standards (%ME) each year and compared ISAT scale scores across groupings.
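The distinction between the two outcome measures can be illustrated with a minimal sketch. The records, field names, and the "meets standards" cutoff of 200 below are all invented for illustration; they are not the actual ISAT cut scores.

```python
# Hypothetical ISAT records for a handful of students (invented data).
records = [
    {"school": "A", "grade": 3, "year": 2005, "score": 212},
    {"school": "A", "grade": 3, "year": 2005, "score": 188},
    {"school": "B", "grade": 3, "year": 2005, "score": 205},
    {"school": "B", "grade": 3, "year": 2005, "score": 231},
]

CUTOFF = 200  # hypothetical "meets standards" scale-score cutoff

def percent_me(rows):
    """Categorical outcome: percent of students meeting or exceeding standards."""
    met = sum(1 for r in rows if r["score"] >= CUTOFF)
    return 100.0 * met / len(rows)

def mean_score(rows):
    """Continuous outcome: average ISAT scale score."""
    return sum(r["score"] for r in rows) / len(rows)

print(percent_me(records))   # 75.0
print(mean_score(records))   # 209.0
```

The same underlying scores feed both measures; %ME collapses each score to a pass/fail category, while the mean preserves the full scale.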

These reports controlled for the effects of cohort, grade level, and implementation status. CMSI cohorts were also analyzed separately, because the conditions affecting school selection differed from cohort to cohort, and these different selection processes made cross-cohort comparisons problematic. For example, schools in early cohorts self-selected to use the curriculum, while later cohorts were mandated to use it. Similarly, schools in some cohorts received assistance in the form of funds or personnel to support implementation (e.g., to buy textbooks, attend professional development, or employ a school-based specialist), while schools in later cohorts did not. Hence, each cohort’s performance was compared to that of a “Comparison” group composed of all schools that had never, to that date, implemented a CMSI mathematics program.

It is important to note that the comparison group did not remain constant over time. In any given year, the comparison group was composed of the schools that had not yet been involved in a CMSI implementation. This meant the population of schools in the comparison group changed each year as, from one year to the next, non-participating schools joined successive cohorts of implementing schools.

The research questions addressed in the annual reports focused on how each cohort’s average ISAT performance differed from that of the comparison schools in a given year. The reports compared each cohort’s current-year performance to its performance in previous years and examined how each cohort’s changes compared to changes in the comparison group, summarizing the mean changes in ISAT scores.

The results of the annual reports indicated variation in performance between the cohort and comparison groups. However, drawing conclusions from a single year’s performance was problematic: year-to-year changes may have reflected shifts in the students comprising each group, the changing composition of the comparison group, or changes to the ISAT itself, rather than actual changes in performance. Further, the analyses in the annual reports were cross-sectional; they were based on the populations of students in the different cohorts and grades at different points in time. In other words, comparisons were based on the performance of students in the same grade from year to year (e.g., 3rd graders in 2001 vs. 3rd graders in 2002) rather than on the performance of the same students from year to year (e.g., 3rd graders in 2001 and their subsequent performance as 4th graders in 2002).
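The cross-sectional versus longitudinal distinction can be made concrete with a small sketch. All student IDs and scores below are invented; the point is only how the two designs select different sets of students for the same comparison years.

```python
# Invented scale scores keyed by (student_id, grade, year).
scores = {
    ("s1", 3, 2001): 190, ("s2", 3, 2001): 210,
    ("s3", 3, 2002): 205, ("s4", 3, 2002): 215,  # a DIFFERENT set of 3rd graders
    ("s1", 4, 2002): 201, ("s2", 4, 2002): 222,  # the SAME students, one year later
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Cross-sectional: same grade, different students, different years
# (3rd graders in 2002 vs. 3rd graders in 2001).
g3_2001 = mean(v for (sid, g, y), v in scores.items() if g == 3 and y == 2001)
g3_2002 = mean(v for (sid, g, y), v in scores.items() if g == 3 and y == 2002)
cross_sectional_change = g3_2002 - g3_2001

# Longitudinal: the same students tracked from grade 3 into grade 4.
cohort_ids = {sid for (sid, g, y) in scores if g == 3 and y == 2001}
g4_2002 = mean(v for (sid, g, y), v in scores.items()
               if g == 4 and y == 2002 and sid in cohort_ids)
longitudinal_change = g4_2002 - g3_2001

print(cross_sectional_change)  # 10.0
print(longitudinal_change)     # 11.5
```

The annual reports used the first calculation; the two measures can diverge whenever the incoming class differs from the outgoing one.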
