Implementation and Timing
How is evidence of student learning, growth, and achievement incorporated into the Student Impact Rating?
Evaluators are responsible for determining a Student Impact Rating of high, moderate, or low for each educator based on patterns and trends using multiple measures of student learning, growth, and/or achievement (statewide growth measures and district-determined measures). Annual data for each educator from at least two measures is needed to establish patterns and trends.
- Patterns refer to results from at least two different measures of student learning, growth, and achievement.
- Trends refer to results from at least two years.
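The data requirement above (at least two measures each year, across at least two years) can be sketched as a simple check. This is a hypothetical illustration only; the function and data names are not part of any ESE system, and the rating itself remains a matter of evaluator judgment.

```python
# Hypothetical sketch: does an educator have the data needed to establish
# patterns (>= 2 different measures per year) and trends (>= 2 years)?
def has_patterns_and_trends(measures_by_year: dict[int, list[str]]) -> bool:
    """measures_by_year maps a school year to the measures with results that year."""
    # Patterns: a year counts only if it has at least two different measures.
    years_with_data = [y for y, measures in measures_by_year.items()
                       if len(set(measures)) >= 2]
    # Trends: results are needed from at least two such years.
    return len(years_with_data) >= 2

print(has_patterns_and_trends({2015: ["median SGP", "DDM: writing portfolio"],
                               2016: ["median SGP", "DDM: writing portfolio"]}))  # True
```

A single year of data, or only one measure per year, would fail this check, mirroring the guidance that annual data from at least two measures over multiple years is needed.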
Statewide growth measures (e.g., median student growth percentiles (SGPs)) must be used as one measure where available. For more information on determining Student Impact Ratings, read the Impact Rating Guidance. Resources and guidance to support the identification of common measures, or district-determined measures, are available on ESE's Developing Common Measures webpage and in the common assessments section of the Guidebook for Inclusive Practice.
What is the purpose of the Student Impact Rating?
A key purpose of the evaluation framework is "to promote student learning, growth, and achievement by providing educators with feedback for improvement, enhanced opportunities for professional growth, and clear structures for accountability (603 CMR 35.01(2)(a)
)." The Student Impact Rating is designed to ensure that the evaluation process is focused on students by requiring evaluators and educators to focus specifically on student outcomes from multiple measures.
How will Student Impact Ratings be used?
Student Impact Ratings are used in several ways. First, they determine whether an experienced educator who earns a Summative Performance Rating of Proficient or Exemplary will be placed on a one- or two-year Self-Directed Growth Plan. Second, when there is a discrepancy between an educator's Summative Performance Rating and Student Impact Rating, the Student Impact Rating serves as a spur to explore and understand the reasons for the discrepancy. Lastly, when combined with an educator's Summative Performance Rating, Student Impact Ratings provide a basis for recognizing and rewarding Exemplary educators and identifying educators who may be eligible for additional roles and responsibilities, subject to local collective bargaining agreements.
When will districts issue Student Impact Ratings?
In accordance with the revised implementation timeline outlined in the Commissioner's August 15, 2013 memorandum, ESE will begin collecting Student Impact Ratings from districts at the end of the 2015-2016 school year. Some districts have requested and been granted additional time following the process outlined in the Alternative Pathways QRG. No district has been approved to report Student Impact Ratings later than 2016-17 for some educators and 2017-18 for all educators.
How will student learning, growth, and achievement be assessed for specialized instructional support personnel (SISP) - e.g., nurses and counselors?
For educators whose primary role is not as a classroom teacher, appropriate measures of the educator's contribution to student learning, growth, and achievement must be identified. ESE worked with statewide associations to support the identification of appropriate measures for a variety of specialized instructional support personnel roles. Read the Implementation Brief on Indirect Measures and SISP and visit the Developing Common Measures webpage for more information and examples.
Does the Student Impact Rating inform the Summative Performance Rating?
No. The Massachusetts educator evaluation system is designed to allow educators and evaluators to focus on the critical intersection of educator practice and educator impact. Its two independent but linked ratings create a more complete picture of educator performance.
- The Summative Performance Rating assesses an educator's practice against four statewide Standards of Effective Teaching or Administrator Leadership Practice, as well as an educator's progress toward attainment of his/her professional practice and student learning goals. This rating is the final step of the 5-step evaluation cycle.
- The Student Impact Rating is a determination of an educator's impact on student learning, informed by patterns and trends in student learning, growth, and/or achievement based on results from statewide growth measures, where available, and district-determined measures (DDMs).
Taken together, these two ratings will help educators reflect not only on their professional practice, but also the impact they are having on their students' learning. The Summative Performance Rating determines the type of educator plan an educator is placed on and the Student Impact Rating determines the length of that plan for educators who receive a Summative Performance Rating of Exemplary or Proficient.
For more information about the intersection between the Student Impact Rating and Summative Performance Rating, read the Impact Rating Guidance.
What are alternative pathways for evaluating educator impact?
In the spring of 2015, ESE provided districts the opportunity to submit a proposal to use an alternative pathway for determining Student Impact Ratings. Alternative pathways were approved if they met the five core principles outlined in the Alternative Pathways QRG
. Districts using an approved alternative pathway are still required to determine Student Impact Ratings based on student outcomes from multiple measures of student learning and must use at least one common assessment as a piece of evidence for each educator.
Statewide Growth Measures
What are Student Growth Percentiles and are they used in the determination of an Educator's Student Impact Rating?
Student Growth Percentiles (SGPs) are measures of student growth based on the statewide growth model, which has been in place since 2008. Massachusetts measures growth for an individual student by comparing his or her achievement on statewide assessments (e.g., MCAS, PARCC) to that of all other students in the state who had similar historical statewide assessment results (the student's "academic peers").

The median Student Growth Percentile (median SGP) for an educator is the middle SGP score among that educator's students: half of the students scored at or above it, and half at or below. The educator evaluation regulations require that statewide growth measures be used in the determination of an educator's Student Impact Rating "where available" (603 CMR 35.09(2)). For more information, see the Implementation Brief on Using Student Growth Percentiles.
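As an arithmetic illustration, the median SGP described above is simply the middle value of an educator's students' SGP scores. A minimal Python sketch (the function name is illustrative, not part of any ESE tool):

```python
from statistics import median

def median_sgp(student_sgps: list[float]) -> float:
    """Middle SGP score among an educator's students: half scored at or
    above this value, half at or below. Note: for an even number of
    students, Python's `median` averages the two middle values; ESE's
    exact handling of even-sized rosters may differ."""
    return median(student_sgps)

print(median_sgp([12, 35, 48, 60, 91]))  # 48
```

With five students, the third-highest score (48) is the median, so this educator's students as a group showed moderate growth relative to their academic peers.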
For which educators must median Student Growth Percentiles (SGPs) be used as one of the measures used in determining a Student Impact Rating?
A district is required to use median SGPs as one measure to determine a teacher's Student Impact Rating for all teachers who teach 20 or more students for whom SGPs are available in the teacher's content area (ELA or math). For teachers who are responsible for both math and ELA instruction in tested grades, the district is only required to use median SGPs from one subject area in determining those teachers' Student Impact Ratings, but may choose to use SGPs from both. The use of median SGPs is only required when student SGPs are based on the previous year's statewide assessment. As a result, 10th grade SGPs are not required to be used, since students did not complete a statewide assessment during 9th grade.
A district is required to use median SGPs as one of the measures used to determine an administrator's Student Impact Rating if the administrator supervises educators responsible for ELA or math instruction and there are 20 or more students with SGPs in the content area. 10th grade SGPs must be used for administrators whose responsibilities include supervising ELA or math instructors in grades 9 and 10 (e.g., a high school principal). Similar to teachers, districts need to define which administrators are responsible for academic content (i.e., supervise educators who deliver instruction in the content area).
For more information about required and optional use of median SGPs in the determination of Student Impact Ratings, read the Implementation Brief on Using Student Growth Percentiles. Read the Implementation Brief on Educators of Students with Disabilities and the Implementation Brief on Educators of English Language Learners for information about required and optional use of SGPs for educators of these special populations.
Does the change in state assessment and related hold harmless provisions impact educator evaluation implementation?
The Board of Elementary and Secondary Education voted on November 17, 2015 to transition to a next-generation MCAS. Any districts that administer PARCC in spring 2016 will be held harmless for any negative changes in their school and district accountability levels, although the commissioner has authority to designate a school as Level 5.
The hold harmless provisions in place related to district and school accountability are designed to ensure that districts and schools are not negatively impacted during the transition to a new state assessment. The same principle applies to individual educators. Where available, student growth percentiles (SGPs) from state assessments must be used to inform an educator's Student Impact Rating. However, during this transition, educators' ratings will not be negatively impacted by SGPs.
Specifically, because the Student Impact Rating is determined by an evaluator's professional judgment (there are no prescribed weights or algorithms used to determine Student Impact Ratings), evaluators will examine whether SGPs during the transition are negatively discrepant from other measures of the educator's impact and, if so, will discount them. The vast majority of educators will be unaffected, because their Student Impact Ratings are not informed by SGPs.
What are District-Determined Measures (DDMs)?
District-determined measures (DDMs) are measures of student learning, growth, or achievement selected by the district. DDMs are designed to provide important feedback to educators about student learning. These measures should be closely aligned to the Massachusetts Curriculum Frameworks, or other relevant frameworks, and be designed to provide comparable evidence of the level of growth demonstrated by different students.
Is a DDM just another test?
Districts have considerable flexibility to identify the best measures, ensuring that they are well aligned to content and provide meaningful information to educators. A wide range of assessment types may be used, including portfolios, performance assessments, projects, and traditional paper-and-pencil tests. Where applicable, districts are encouraged to use existing assessments that are aligned to the curriculum and provide meaningful information to educators about their students. Read Technical Guide B for more information about the characteristics of an ideal DDM.
What types of resources are available to support districts in the identification/development of DDMs?
ESE has published a number of resources to support districts with DDM identification/development. A comprehensive list of ESE supports for DDM identification/development is available on ESE's Student Impact Rating guidance webpage.
The DDM Implementation Briefs are short resource documents (similar to our Quick Reference Guides) focused on specific DDM topics such as scoring and parameter setting, using student growth percentiles, investigating fairness, using indirect measures for specialized instructional support personnel, educators of English language learners and special education, and continuous improvement. ESE is grateful for the collaboration with statewide professional associations in developing these briefs.
ESE has also released guidance on coaching teacher teams to develop common measures for use as DDMs. The Guidebook for Inclusive Practice also includes tools for reviewing the accessibility of common assessments and measuring the growth of students with diverse learning profiles.
Does ESE have approved measures?
No. ESE has supported the development and sharing of common measures
. These example measures can be used and modified by districts. However, districts are ultimately responsible for ensuring that DDMs are of sufficient quality to provide meaningful feedback to educators and evaluators.
If a teacher teaches more than one subject, course, or grade, is he/she required to have DDMs for each of them?
Each educator must be matched with at least two measures (DDMs or statewide growth measures). Statewide growth measures must be used as one measure, where available. Districts are not required to identify DDMs for all grade/subjects or courses a given educator teaches.
Can an educator be matched with more than two measures in a given year?
Yes. The regulations (603 CMR 35.09(2)(a)) describe using "at least two state or district-wide measures" in each year. Districts may use more than two measures, subject to local collective bargaining agreements.
Can districts match educators with different DDMs from one year to the next?
Yes. Districts may need or want to change DDMs for a variety of reasons. For example, changes in educator assignment, shifts in local curricula, or emerging district priorities are all potential reasons for districts to change DDMs. Similarly, as part of the continuous improvement of DDMs, districts will be reviewing student results and may, as a result, determine that a DDM must be modified or changed.
How will a district establish trends in learning, growth, or achievement if DDMs change from one year to the next?
DDMs should measure student growth during a single year. As a result, different measures can be used in different years. For example, if a sixth-grade teacher whose typical sixth-grade student showed high growth on two measures that year transfers to eighth grade the following year, and her typical eighth-grade student that year shows high growth on both new measures, that teacher can earn a Student Impact Rating of "high" based on trends and patterns. That said, if an educator changes districts across years, his/her students' results in the previous year cannot be used to construct a trend because of the confidentiality provisions of the regulations.
How can districts ensure DDMs fairly measure impact of teachers of students with disabilities and ELLs?
DDMs should provide all students an equal opportunity to demonstrate growth. Districts should engage educators with specialized knowledge about students with disabilities and ELL students in the selection and improvement of DDMs. Our DDM Implementation Briefs provide multiple strategies that districts can explore for checking for and addressing bias in measures.
How should a district select DDMs for administrators?
Just as with teachers, districts should engage administrators to contribute to the process of identifying and selecting their DDMs. DDMs for administrators can be specialized measures of student learning from across the school, aggregates of DDMs used to evaluate other educators, or indirect measures appropriate to the administrator's role. For more information, read ESE's Implementation Brief on DDMs for Administrators.
Are there resources related to DDMs for Career/Vocational Technical Education (CVTE) educators?
Yes. ESE worked with CVTE educators and leaders from across the Commonwealth to create resources to support DDM identification/development. Resources include case studies and examples from CVTE programs in MA, along with guidance to support schools and districts in different stages of DDM implementation.
Are there resources related to DDMs for Specialized Instructional Support Personnel (SISP) (e.g., nurses, school counselors, school psychologists)?
Yes. ESE worked with leaders from statewide SISP associations to publish the Implementation Brief on Indirect Measures and SISP.
What about teachers who share students?
Districts should create a definition of "teacher of record." Multiple teachers can meet the definition for a given student or group of students. For example, if a student receives regular English language arts instruction and receives additional lessons from a different teacher, both teachers may meet the definition of teacher of record for that student.