Outcomes
Student Outcomes
Types of ISAT Studies
CMSI Research Summaries
Sidebar: What did these annual research studies tell us?
The Office of Research, Evaluation, and Accountability (REA) conducted annual CMSI analyses, published as “CMSI Research Summaries.” These studies began in 2005 and focused on changes in ISAT scale scores in schools implementing at least one of the four CMSI math instructional programs: Everyday Math (Grades 3-5), Math Trailblazers (Grades 3-5), Connected Math Project (Grades 6-8), and Math Thematics (Grades 6-8).
The annual research summaries broke down ISAT scale scores by year, subject, instructional material, grade level, and cohort (a group of schools that began implementing CMSI-supported curricula in the same school year). Performance was assessed both as a categorical outcome (%ME) and as a continuous variable (ISAT scale scores): the reports presented the percentage of students meeting or exceeding standards (%ME) each year and compared ISAT scale scores across groupings.
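A minimal sketch of this kind of breakdown, assuming a hypothetical student-level data file and illustrative column names (none of which come from the REA reports), might look like the following in Python with pandas:

```python
import pandas as pd

# Hypothetical student-level ISAT file; the file name and the columns
# (school_id, year, grade, curriculum, cohort, scale_score,
# performance_level) are illustrative assumptions, not REA artifacts.
isat = pd.read_csv("isat_scores.csv")

summary = (
    isat
    .assign(meets_or_exceeds=isat["performance_level"].isin(["Meets", "Exceeds"]))
    .groupby(["year", "grade", "curriculum", "cohort"])
    .agg(
        pct_me=("meets_or_exceeds", "mean"),       # categorical outcome (%ME)
        mean_scale_score=("scale_score", "mean"),  # continuous outcome
        n_students=("scale_score", "size"),
    )
    .reset_index()
)
summary["pct_me"] = 100 * summary["pct_me"]  # express %ME as a percentage
```

Grouping on year, grade, curriculum, and cohort yields the two outcome views the summaries reported side by side: %ME and mean scale scores.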
These reports controlled for the effects of cohort, grade level, and implementation status. CMSI cohorts were also analyzed separately because the conditions affecting school selection differed from cohort to cohort, and these different selection processes made cross-cohort comparisons problematic. For example, schools in early cohorts self-selected into the curriculum, while later cohorts were mandated to use it. Similarly, schools in some cohorts received assistance (in the form of funds or personnel) to support implementation (e.g., to buy textbooks, attend professional development, or staff a school-based specialist), while later cohorts did not. Hence, each cohort’s performance was compared to that of a shared “Comparison” group composed of all schools that had not, to that date, implemented a CMSI mathematics program.
It is important to note that the comparison group did not remain constant over time. In any given year, the comparison group was composed of those schools that had not yet been involved in a CMSI implementation. This meant the population of schools in the comparison group changed each year as, from one year to the next, non-participating schools joined successive cohorts of implementing schools.
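A small sketch of how such a year-specific comparison group could be assembled, assuming a hypothetical table of cohort start years per school (the file and column names are assumptions for illustration):

```python
import pandas as pd

# Hypothetical table of CMSI cohort start years; school_id and start_year
# are assumed column names, not taken from the reports.
cohorts = pd.read_csv("cmsi_cohorts.csv")

def comparison_schools(year, all_school_ids, cohorts):
    """Return the schools that had not yet joined any CMSI cohort as of `year`."""
    started = set(cohorts.loc[cohorts["start_year"] <= year, "school_id"])
    return [s for s in all_school_ids if s not in started]

# Example: the comparison group for 2007 is every school not yet implementing.
# comparison_2007 = comparison_schools(2007, isat["school_id"].unique(), cohorts)
```

Rebuilding the group year by year in this way is what makes its membership shrink as new cohorts begin implementation.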
The research questions addressed in the annual reports focused on how each cohort’s average ISAT performance differed from the performance of the comparison schools in that year. The annual reports compared the current year to previous years’ performance for each cohort and examined how each cohort’s changes compared to changes in the comparison group. The reports summarized mean changes in ISAT scores and asked:
- “How did the average change in ISAT scores in each cohort compare to the same-grade ‘Comparison’ group’s performance?”
- “How did average changes in ISAT scores for each curriculum-specific cohort compare to the same-grade ‘Comparison’ group’s performance?”
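A rough sketch of how those questions could be answered from the `summary` table built in the earlier sketch, assuming (purely for illustration) that it also carries rows labeled "Comparison" for the year-specific comparison schools:

```python
import pandas as pd

def mean_change(df, group_cols):
    """Average year-over-year change in mean scale score within each group."""
    return (
        df.sort_values("year")
        .groupby(group_cols)["mean_scale_score"]
        .apply(lambda s: s.diff().mean())
        .rename("avg_yearly_change")
        .reset_index()
    )

# `summary` is the table from the earlier sketch; the "Comparison" label is assumed.
cohort_change = mean_change(summary[summary["cohort"] != "Comparison"], ["cohort", "grade"])
comparison_change = mean_change(summary[summary["cohort"] == "Comparison"], ["grade"])

# Side-by-side view of each cohort's average change against the same-grade
# comparison group's average change.
side_by_side = cohort_change.merge(
    comparison_change, on="grade", suffixes=("_cohort", "_comparison")
)
```

Merging on grade keeps the comparison within the same grade, which mirrors how the reports framed both questions.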