Student Assessment and Program Evaluation

Context

Role of Program Evaluation


The Chicago Math & Science Initiative (CMSI) program evaluation component was envisioned as the application of systematic research methods to assess program design, delivery, implementation, and effectiveness (Chelimsky, 1985). More specifically, the purpose of CMSI program evaluation was to provide empirical information for program developers, program staff, program managers, senior management, policy makers, and other stakeholders. This information was designed to help:

The OMS team, working with external evaluators, designed evaluation efforts to be responsive to these three areas. They also planned for evaluation to occur throughout the years of program implementation. The fundamental principle of these evaluation efforts was mixed-method inquiry, including surveys, interviews, observations, focus groups, statistical analyses, and archival records. Each method could gather quantitative or qualitative information. The philosophy behind mixed-method inquiry “is to understand more fully, to generate deeper and broader insights, [and] to develop important knowledge claims that respect a wider range of interests and perspectives” (Greene & Caracelli, 1997). The proposed OMS evaluation efforts and goals were consistent with evaluation guiding principles as described by the American Evaluation Association and the Joint Committee on Standards for Educational Evaluation (Shadish, Newman, Scheirer & Wye, 1995).

One of the first positions created within the new Office of Mathematics and Science in 2002 was that of Evaluation Specialist. The Evaluation Specialist served on the OMS Lead Team and worked with the Chief Officer and senior staff to design the CMSI. More specifically, the Evaluation Specialist was responsible for understanding the evaluation needs of the district and for designing and managing formative and summative evaluation plans for OMS activities that could meet those needs. In addition, the Chief Officer hired an evaluation team that was external to and independent of the district. This external team worked with the Evaluation Specialist on the design and implementation of the OMS evaluation plan. Significant resources were dedicated to the evaluation function. While the budget for these efforts varied across the years (from approximately $150,000 to $450,000), the OMS budget allocated approximately $250,000 per year to program evaluation of the reform effort (not including the salary of the Evaluation Specialist). OMS also leveraged additional funding for program evaluation activities from external sources such as foundations, competitive grants, and in-kind contributions from university partners. These sources contributed an additional $100,000 to $250,000 in evaluation funding per year.

Key choices shaping OMS evaluation design

Many complex factors, including the nature of CMSI activities, resource availability, methodologies, confidentiality, objectivity, and access to district-level data, were considered in the design of the CMSI evaluation plan. These considerations resulted in a plan that encompassed a strategic and phased-in evaluation approach, a combination of internal and external evaluators, mixed methods, and a strong focus on formative evaluation as well as summative evaluation of outcomes.

Necessity of a strategic and phased-in approach

It was neither feasible nor desirable to evaluate all aspects of the CMSI simultaneously, so OMS had to prioritize which activities would be evaluated, and when. The first step was to develop an evaluation framework to manage and organize the evaluation efforts. The framework also provided a template for future evaluation activities. Using this framework, OMS leadership developed annual evaluation plans to coordinate evaluation activities while avoiding redundancies.

In 2003-04, program leaders and evaluators decided to focus evaluation efforts on the following key CMSI activities: adoption of standards-based mathematics and science instructional materials in Intensive Support schools, development of instructional supports, implementation of university courses leading to teacher endorsements in math and science, and high school students' course-taking patterns and pass rates. For each of these activities, key evaluation questions were formulated, and evaluation activities, including data collection processes and tools, were determined. In later years, evaluation efforts focused on other areas of the initiative.
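
To illustrate how an annual plan might organize these pieces, the sketch below shows one hypothetical way a plan entry could pair a CMSI activity with its evaluation questions and data collection tools. The structure, field names, and example questions are illustrative assumptions, not the actual OMS framework.

```python
# Hypothetical sketch of one way an annual evaluation plan entry could be
# structured; field names and example values are assumptions for illustration,
# not the actual OMS framework.
from dataclasses import dataclass


@dataclass
class EvaluationPlanEntry:
    activity: str                 # CMSI activity under evaluation
    evaluation_questions: list    # key questions guiding the evaluation
    data_sources: list            # planned data collection processes and tools


plan_2003_04 = [
    EvaluationPlanEntry(
        activity="Adoption of standards-based materials in Intensive Support schools",
        evaluation_questions=["How fully are the new materials implemented in classrooms?"],
        data_sources=["classroom observations", "teacher surveys", "principal interviews"],
    ),
    EvaluationPlanEntry(
        activity="High school course-taking patterns and pass rates",
        evaluation_questions=["How have course-taking patterns and pass rates changed?"],
        data_sources=["district course enrollment records", "grade files"],
    ),
]

# Listing planned data sources across entries helps coordinate activities and
# flag redundant data collection.
all_sources = sorted({source for entry in plan_2003_04 for source in entry.data_sources})
print(all_sources)
```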

Centrality of internal and external evaluation

Both internal and external evaluation teams were central to the evaluation plan, as each contributed unique skills and perspectives. The strengths of the internal evaluation team included intimate knowledge of program design and logistics, the ability to share data rapidly with key district stakeholders, mathematics and science content expertise, and access to other district evaluators and data. The complementary strengths of the external evaluation team included objectivity, confidentiality, expertise in large-scale qualitative field studies, and additional evaluation capacity.

From 2003 to 2006, the internal OMS evaluation team included the Evaluation Specialist, one full-time staff evaluator, and two part-time staff evaluators. The Evaluation Specialist attended planning meetings for all major CMSI activities and thus was able to (1) share data that informed decisions, (2) document the planning process, and (3) ensure that the design of activities included a plan for their evaluation from the start. As a team, internal evaluators contributed expertise in the statistical analysis of surveys and databases. They also analyzed the math and science content in different assessment tools, such as the mathematics benchmark assessment. The team coordinated and leveraged other district evaluation resources, such as district-collected data and other district evaluation staff.

Beginning in 2006, based on the positive impact that the work of the internal OMS evaluation staff had on the district, the Chief Education Officer transferred these individuals to a new department in the district, the Department of Program Evaluation in the Office of Research, Evaluation, and Accountability, under the direction of the former OMS Evaluation Specialist. The new department was charged with replicating this evaluation work in other content areas of the district while continuing the CMSI evaluation work. In this way, the Chief Education Officer hoped to catalyze the regular and appropriate use of evaluation data to improve district programming.

The external evaluation team from the University of Illinois at Chicago provided OMS with a way to gather data from CPS staff (from interviews, observations, surveys, and other sources) that could only be validly collected with a promise of confidentiality. In addition to confidentiality, the external evaluators brought the objectivity necessary for the evaluation, since they had professional distance from OMS and CPS. The external evaluation team also brought experience in evaluating district-wide reform efforts and in mixed-method evaluation, with particular expertise in qualitative data collection and analysis. This increased the district's evaluation capacity. From 2003 to 2008, the external evaluation team was composed of one member of the research faculty, at least two full-time research professionals, and two to five part-time graduate students.

Multiple evaluation methods

As research has suggested, using mixed methods offers substantial advantages (Frechtling, Sharp, & National Science Foundation (U.S.), 1997). OMS program activities were evaluated using different methods to collect and analyze data, chosen based on the purpose of the evaluation. When possible, the evaluation teams used multiple methods to collect data in order to triangulate findings from various sources. In addition, some data collection activities were useful to more than one evaluation project, which minimized data collection costs and redundant efforts. Evaluation activities included collection of data from interviews, large-scale surveys, targeted surveys, observations, structured written reflections, and focus groups as appropriate, as well as large-scale test scores and teacher workforce data. Analysis of these data drew on a range of statistical techniques as well as both quantitative and qualitative analyses of textual data. Respondents included district leaders, OMS staff, professional developers, school principals, participating teachers and students, and university instructors.
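
As a concrete illustration of what triangulation across strands can look like, the brief sketch below pairs survey ratings with coded interview themes for the same schools. All names, values, and themes are invented for illustration and are not CMSI evaluation data.

```python
# Hypothetical sketch of triangulating a quantitative strand (survey ratings)
# with a qualitative strand (coded interview themes); all data are invented
# for illustration and are not CMSI evaluation data.
from statistics import mean

# Teacher survey ratings (1-5) of professional development usefulness, by school.
survey_ratings = {"School A": [4, 5, 3, 4], "School B": [2, 3, 2, 3]}

# Themes coded from interviews and focus groups at the same schools.
interview_themes = {
    "School A": ["strong coach support", "materials used weekly"],
    "School B": ["scheduling conflicts", "limited planning time"],
}

# Triangulation: read the numeric pattern alongside participants' explanations,
# so each strand can confirm or complicate the other.
for school, ratings in survey_ratings.items():
    themes = ", ".join(interview_themes[school])
    print(f"{school}: mean rating {mean(ratings):.1f}; themes: {themes}")
```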

Given the highly political context of educational reform in Chicago, a variety of stakeholders were invested in the success of CMSI. By using multiple evaluation methods, OMS was able to address the different concerns of these varied stakeholders, from those who wanted to use student test scores as the basis for decisions to those who wanted to understand in rich detail how a new policy played out at a typical school and how teachers made sense of it.

Formative process evaluation

Due to the long-term nature of many of the reform effort's activities, formative evaluation was a critical component of the evaluation plan. For example, OMS leadership knew that changing teachers' beliefs and instructional practices would take time and sustained effort. The CMSI relied on professional development as a key strategy to create and institutionalize a culture of student-focused and standards-based teaching and learning in the district. The length of time needed to make these changes made it unwise to wait for summative outcomes, such as changes in student test scores, to signal whether or not CMSI professional development was working. OMS wanted to understand its processes for providing professional development (i.e., what was working well and what was not) so that it could make mid-course corrections as quickly as possible. Taking this approach would make it easier to explain why certain outcomes occurred (e.g., test score improvement). This approach is philosophically analogous to the Design, Educational Engineering, and Development approach, which has recently been gaining visibility (Bryk, 2009).
