Massachusetts Department of Elementary and Secondary Education

Educator Evaluation

Frequently Asked Questions

Section:

  1. Regulations
  2. ESE Supports & Engagement
    1. Communications
    2. Integration with other Initiatives
      1. Licensure and Professional Development
      2. MA Curriculum Frameworks
      3. Educator Preparation
      4. Other District Priorities
    3. Model System
  3. 5-Step Cycle & Summative Performance Rating
    1. Training
    2. Rubrics
    3. Goal Setting
    4. Evidence
    5. Student and Staff Feedback
    6. Formative Assessment/Evaluation and Summative Performance Rating
  4. Student Impact Rating
    1. Implementation and Timing
    2. Statewide Growth Measures
    3. DDMs
  5. Data Reporting & Confidentiality


I. Regulations

 
1.
What is the legal basis for the Massachusetts Educator Evaluation Framework and where can I find information on its requirements?
 
The regulations on educator evaluation were adopted pursuant to the Board of Elementary and Secondary Education's statutory authority and consistent with existing statutory requirements: M.G.L. c.69, sec. 1B and c.71, sec. 38. More details on the requirements of the regulations are available on ESE's educator evaluation website. In particular, please see the Regulations for the Evaluation of Educators, 603 CMR 35.00, and the Quick Reference Guide on the MA Educator Evaluation Framework.
 
 
 
2.
Which types of schools are covered under the regulations?
 
All district schools and Horace Mann charter schools are subject to the educator evaluation regulations. Educators serving in programs or schools operated by Educational Collaboratives are also covered by the regulations. Commonwealth Charter Schools are not covered by the regulations.
 
 
 
3.
Which types of educators are covered under the regulations?
 
The regulations apply to all teachers, principals, superintendents, and other staff in positions that require an ESE-issued teacher, specialist, professional support personnel, or administrative license (603 CMR 35.00). For a list of ESE-issued licenses, review the Regulations for Educator Licensure and Preparation Program Approval (603 CMR 7.04 (3)), Licenses and Routes for Administrators (603 CMR 7.09), Types of Vocational Technical Teacher Licenses (603 CMR 4.07), Types of Vocational Technical Administrator Licenses (603 CMR 4.08), and Types of Vocational Technical Cooperative Education Coordinator Licenses (603 CMR 4.09).
 
 
 
4.
What was the implementation timeline for the regulations?
 
The regulations were implemented as follows:
  1. Districts with Level 4 schools adopted and implemented new educator evaluation systems in Level 4 schools during the 2011-2012 school year.
  2. RTTT districts and RTTT charter schools adopted and implemented new educator evaluation systems during the 2012-2013 school year with at least 50% of their educators.
  3. All remaining school districts covered under the new regulations implemented new educator evaluation systems during the 2013-2014 school year with at least 50% of their educators.
 
5.
Do the regulations remove evaluation as a subject of mandatory collective bargaining?
 
No. The MA statutes regarding evaluation and collective bargaining have not changed. The regulations established a more comprehensive set of requirements ("principles") of evaluation than prior regulations. All districts are required to include specified core elements in their evaluation systems, but other features are collectively bargained at the local level.
 
 
 
6.
Are video observations allowed by the regulations?
 
Yes. The regulations permit the use of video. How video is used in evaluation is subject to collective bargaining and applicable laws relating to student privacy.
 
 
 
7.
Is peer review permissible under the regulations?
 
Yes. The regulations permit peer assistance and review programs, if agreed upon during the collective bargaining process at the local level.
 
 
 
8.
How does ESE monitor educator evaluation implementation?
 
ESE monitors implementation through a variety of mechanisms, including reviewing districts' evaluation system submissions and educator performance ratings (see the data collection section of the FAQs for more information). Districts' Title IIA grant applications also require districts to submit information about their educator evaluation systems. ESE's Center for District and School Accountability conducts district reviews that provide an assessment of district systems, including educator evaluation. Additionally, ESE contracted with an external organization to conduct a three-year implementation study of the MA evaluation regulations.
 
 

II. ESE Supports & Engagement

 
a.
Communications
 
1.
What is the best way to receive updates and information about educator evaluation implementation and resources?
 
Subscribing to our e-newsletter is the best way to receive updates. Each issue of the Educator Effectiveness e-Newsletter includes timely information on important resources, tools, and innovative practices from the field, as well as details about upcoming events, deadlines, and publications.

To receive the e-Newsletter in your inbox, please subscribe. You can read previous newsletters on our website.

Periodic updates are also included in the Commissioner's Weekly Update, which can be found online, and are sent to each district's "Educator Evaluation Contacts" in Directory Administration. For more information about how to assign people to a function (in this case, the Educator Evaluation Contacts), please review the Directory Administration Guide.

Additionally, Teachers' Top Three from ESE is direct communication from the Department intended for teachers. Sent every other week, Top 3 includes information and resources relevant to teachers' work, reflection pieces written by current teachers, and upcoming opportunities to engage with ESE.
Subscribe and read past issues online.
 
 
 
2.
How can I share feedback, success stories, and challenges about educator evaluation with ESE?
 
Educators and districts are encouraged to contact the Educator Evaluation team at EducatorEvaluation@doe.mass.edu. The team welcomes opportunities to learn about implementation so tips and strategies can be shared across districts and inform the development of tools and resources.
 
 
 
3.
How does ESE engage educators to learn about implementation?
 
ESE has several standing advisory cabinets consisting of current MA educators, including a teacher advisory cabinet, two principal advisory cabinets, and a superintendent advisory cabinet. Meeting at least quarterly, these cabinets provide feedback on current educator effectiveness initiatives including educator evaluation, licensure, preparation, etc. Cabinet members provide input on upcoming policies, provide recommendations for resources, and share their experiences as educators implementing educator effectiveness initiatives. For more information about the cabinets, including summaries of their work and information on applying, visit our Communications webpage.

In addition to these advisory cabinets, ESE regularly engages with statewide unions, associations, and districts by attending meetings with educators, presenting at conferences, and responding to phone calls and emails. A member of our team is always available at EducatorEvaluation@doe.mass.edu.
 
 
 
b.
Integration with other Initiatives
 
i.
Licensure and Professional Development
 
1.
How are educator evaluation and professional development connected?
 
Educator evaluation and professional development (PD) both serve to improve educator practice and student outcomes. The evaluation framework highlights PD needs and should be leveraged to identify patterns in PD needs within a school and across the district. For more information about how to align PD and educator evaluation, read the Quick Reference Guide.
 
 
 
2.
Do the Educator Plans required under 603 CMR 35.00 (Final Regulations on Evaluation of Educators) change the requirements for recertification under 603 CMR 44.00 (License Renewal)?
 
No. However, certain activities undertaken pursuant to an Educator Plan may meet the requirements for PDPs under the educator's Individual Professional Development Plan.
 
 
 
3.
Can Educator Plans also serve as Individual Professional Development Plans (IPDPs) for license renewal?
 
Yes, the regulations for license renewal (603 CMR 44.04 (1) (c)) do allow for these plans to be the same. Given that the license renewal cycle is a five-year period and multiple evaluation cycles will occur during that time, combining these plans can present some challenges. ESE has released several example forms to help bring these two plans into alignment, including a version of an Educator Plan form and an Educator Plan Addendum.
 
 
 
4.
Are districts required to align approval and endorsement of IPDPs with the Evaluation Cycle?
 
No. However, in many cases it will make sense to do so. Where appropriate and possible, the two processes may be combined to reduce the administrative burden on both educators and administrators.
 
 
 
5.
How can the professional development activities in an Educator Plan count toward an Educator's IPDP?
 
Though governed by two different statutes, both plans must be consistent with the educational needs of the school and district, be approved by the educator's supervisor, strengthen the educator's knowledge and skills, and enhance the educator's ability to promote student learning. The Educator Plan specifies the kinds of professional development activities educators will pursue to improve their performance and promote student learning.

In many instances the professional development activities described in an Educator Plan will meet the requirements of 603 CMR 44.00 (license renewal).

ESE recommends educators and evaluators:
  • Use a goal setting and plan development conference at the beginning of the evaluation cycle to review and approve Individual Professional Development Plans and, if practicable, to conduct the bi-annual check-in and end-of-renewal-cycle endorsement required under 603 CMR 44.00.
  • Maintain a running record (kept by the educator) of the professional development activities undertaken pursuant to the Educator Plan under 603 CMR 35.00 to identify activities that meet the PDP requirements for license renewal under 603 CMR 44.00 and its accompanying guidelines. ESE's Educator Plan Form includes a column for educators to track activities eligible for PDPs. The Educator Plan Addendums are resources intended to guide conversations between educators and evaluators when completing Educator Plans.
 
6.
Can I receive and use PDPs for attaining the professional practice goal(s) or student learning goal(s) of my Educator Plan under 603 CMR 35.00?
 
If the underlying activities required to meet those individual or team goals are consistent with 603 CMR 44.00 (license renewal) and ESE's guidance on license renewal, you may receive PDPs for these activities.
 
 
 
7.
Are learning walks, lesson study, participation in a professional learning community (PLC), or other "embedded" forms of professional development acceptable if they are included in my Educator Plan?
 
Yes, if they meet the requirements specified in regulations and are consistent with the goals of an educator's approved IPDP. The Department provides examples of a broad range of professional development activities in its Guidelines on Recertification. These include department-sponsored initiatives; initiatives sponsored by districts, Collaboratives, or registered PD providers; school-based activities; as well as educator-designed PD.
 
 
 
ii.
MA Curriculum Frameworks
 
1.
How can educator evaluation support implementation of the MA Curriculum Frameworks?
 
Educator evaluation should be leveraged to assess educators' skills for implementing the MA Curriculum Frameworks, which represent key content and skills students should learn. Read the Quick Reference Guide on Educator Evaluation & the MA Curriculum Frameworks to learn more about how these two initiatives can be aligned throughout the 5-Step Cycle of Evaluation.
 
 
 
iii.
Educator Preparation
 
1.
How is ESE aligning educator preparation experiences with expectations for teachers and principals?
 
The Candidate Assessment of Performance (CAP) is the new performance assessment for teacher candidates that takes place during student teaching. K-12 educators, educator preparation faculty, and teacher candidates worked with ESE to create an assessment of teacher candidates aligned to the MA Educator Evaluation Framework. This alignment promotes a continuum of professional growth throughout an educator's career, from preparation to full teaching responsibilities, and establishes a common language for talking about educator practice.

The CAP aligns expectations and process with the Educator Evaluation Framework by measuring candidates' practice on key Standards and Indicators and by employing a 5-step cycle. The CAP Pilot will take place during the 2015-16 academic year, and CAP will be fully implemented in 2016-17, replacing the current Pre-service Performance Assessment (PPA). Teachers who have committed to be cooperating teachers, also known as supervising practitioners, for the 2015-16 school year should be aware of the pilot and should ask the program supervisor which assessment (CAP or PPA) will be used to evaluate the teacher candidate. For more information, contact EdPrep@doe.mass.edu.
 
 
 
iv.
Other District Priorities
 
1.
Can districts customize the Educator Evaluation Framework to support district priorities?
 
Yes. The Educator Evaluation Framework is most effective when aligned to locally identified priorities. Many districts go through a process of analyzing ESE's Model Rubrics and identifying particular elements or indicators that most fully represent district priorities. ESE has identified the inclusion of students with diverse learning needs as a statewide priority and has published, as a set of optional tools and resources, the Educator Effectiveness Guidebook for Inclusive Practice. Other resources that highlight districts' work aligning the Educator Evaluation Framework with locally identified priorities include On Track with Evaluator Capacity, a document chronicling the efforts of eight Massachusetts districts to support the capacity of their evaluators, and the Transforming Educator Evaluation in Massachusetts (TEEM) videos.
 
 
 
2.
How can districts use data from the Massachusetts Early Warning Indicator System (EWIS) in educator evaluation?
 
ESE released a three-page guidance document on how EWIS data can be used in the educator evaluation cycle. For information about how to incorporate EWIS data in self-assessment, identifying a target population, and understanding classroom context, you can access the EWIS guidance. EWIS data is available in Edwin Analytics, which can be accessed via the Security Portal.
 
 
 
c.
Model System
 
1.
The regulations refer to a "Model System" developed by ESE. What does it contain?
 
The Model System is a comprehensive educator evaluation system designed by ESE, pursuant to the educator evaluation regulations, 603 CMR 35.00. The eight-part series was developed to support effective implementation of the regulations by districts and schools across the Commonwealth. For an overview of each section of ESE's Model System, visit the Model System webpage.

The parts include:
  • Part I: District-Level Planning and Implementation Guide
  • Part II: School-Level Planning and Implementation Guide
  • Part III: Guide to Rubrics and Model Rubrics for Superintendent, Administrator, and Teacher
  • Part IV: Model Collective Bargaining Contract Language
  • Part V: Implementation Guide for Principal Evaluation
  • Part VI: Implementation Guide for Superintendent Evaluation
  • Part VII: Rating Educator Impact on Student Learning Using District-Determined Measures of Student Learning
  • Part VIII: Using Staff and Student Feedback in the Evaluation Process
 
 
 
2.
Do districts have to adopt ESE's Model System?
 
No. Districts can adopt or adapt ESE's Model System, or they may revise their own educator evaluation systems to comply with the regulations. All evaluation systems are subject to ESE's review to ensure the systems are consistent with the regulations (603 CMR 35.00). ESE's Model System is fully consistent with the regulations.
 
 
 
3.
What opportunities were there for educators to contribute to the creation of tools for the Model System?
 
Working with the field to develop and implement the evaluation regulations has been and remains a priority for ESE. To develop the Model System, ESE worked with 11 early adopter districts, 10 districts implementing the framework in their Level 4 schools, and 4 Educational Collaboratives. ESE also engaged a wide range of stakeholders from state associations.

ESE continues to engage educators in the development of resources, including working with statewide associations for specialized instructional support personnel to develop role-specific resources, practicing principals to develop performance rating guidance, assessment and curriculum coordinators to develop guidance and resources for the identification and implementation of common assessments, and PreK-12 teachers and administrators to develop ways to collect and use student and staff feedback as part of evaluation. Our standing teacher, principal, and superintendent advisory cabinets are also critical vehicles for soliciting field input on new implementation tools and resources.
 
 
 
4.
What guidance and tools has ESE developed to support educator evaluation implementation beyond the Model System?
 
A significant portion of the state's Race to the Top grant has been allocated to support implementation of the evaluation framework at both the state and district level. This work includes the development of the Model System, the identification of support providers, and the creation of tools and resources designed to familiarize educators with the requirements of the regulations and support effective implementation. To access these resources, visit the educator evaluation webpage.
 
 
 
5.
What guidance is available for supporting special educators in the Educator Evaluation Framework?
 
The Framework strives to highlight commonalities across educators. However, because of the complex job responsibilities of many special educators, the evaluation of these educators within the Framework can represent a unique set of challenges and opportunities. ESE has provided a clearinghouse page containing available guidance for the evaluation of special education personnel, including guidance on using the MCAS-Alternate Assessment as a Common Measure and the Educator Effectiveness Guidebook for Inclusive Practice.
 
 

III. 5-Step Cycle & Summative Performance Rating

 
a.
Training
 
1.
What is required training for educators (SISPs, teachers, administrators)?
 
Per An Act Providing for the Implementation of Education Evaluation Systems in School Districts (Chapter 131 of the Acts of 2012), "All school districts required to adopt and implement evaluation systems consistent with 603 CMR 35.00 … shall provide an evaluation training program developed by the department of elementary and secondary education for all evaluators and for all teachers, principals and administrators required to be evaluated."

ESE developed two educator evaluation training programs: a 6-part series of training modules for evaluators and a 4-part series of training workshops for teachers. The training workshops for teachers are designed for all educators required to be evaluated who do not have evaluator responsibilities. This includes (but is not limited to) classroom teachers, specialized instructional support personnel (guidance counselors, nurses, school psychologists, for example), and instructional specialists. For more detailed information, please see ESE's Quick Reference Guide: Educator Evaluation Training.

In 2015, ESE produced a series of videos that explain the two evaluation ratings and each step of the 5-Step Evaluation Cycle. These videos are intended to support training on the evaluation framework for new educators and evaluators and may also be used by educator preparation programs with teacher and administrator candidates.

Per state regulations, "The superintendent is responsible for ensuring that all evaluators have training in the principles of supervision and evaluation" (603 CMR 35.11(7)).
 
 
 
2.
What is the timeframe for required training and is there a required certificate?
 
Timelines for training should be determined by the district. The state does not require educators to receive a certificate of training.
 
 
 
b.
Rubrics
 
1.
Does ESE plan to release additional rubrics for special education teachers, school counselors, nurses, or other specialists?
 
ESE strove to highlight commonalities across educators by developing only four Model rubrics, and does not plan to create additional rubrics. However, in partnership with a range of professional organizations, ESE has published a series of role-specific resources for school counselors, school business officials, school nurses, school psychologists, school librarians, occupational and physical therapists, and speech language pathologists. These resources do not replace the four Model rubrics but rather enhance them through a variety of approaches. For more information, please see ESE's website on Rubrics.
 
 
 
2.
Can the rubric be used as an observation tool?
 
The rubrics are written to support educators and evaluators in making judgments about patterns of evidence, gathered across multiple points in time. Observation is a valuable way to gather evidence on performance against many, but not all, of the Standards and Indicators. The classroom teacher rubric, for example, includes many elements and Indicators that can only be assessed through means other than observation. The rubric has not been designed to be a classroom observation tool and should not be used for that purpose.
 
 
 
3.
Do educators need to be evaluated on all four Standards every year?
 
Yes. Educators need to be evaluated on all four Standards every year, and ESE requires districts to report ratings on each of the four Standards, as well as an overall Summative Performance Rating, for every educator on an annual basis. Educators on plans of one year or less in duration receive ratings on each of the four Standards, as well as an overall Summative Performance Rating, at the conclusion of their evaluation cycle. For educators on 2-year self-directed growth plans, a Formative Evaluation takes place at the end of year 1 (usually May or June), at which point they receive ratings on each of the four Standards, as well as an overall Summative Performance Rating. Formative Evaluation ratings default to the prior Summative Evaluation Rating unless there is significant evidence suggesting a change (603 CMR 35.06(5)(b)).
 
 
 
c.
Goal Setting
 
1.
How many goals are educators required to identify during each evaluation cycle?
 
Educators are required to propose a minimum of one student learning goal and one professional practice goal (603 CMR 35.06 (3)(f)). In addition to these two goals, superintendents are encouraged to propose 3-5 district improvement goals (see Implementation Guide for Superintendent Evaluation) and principals are encouraged to propose 3-5 school improvement goals (see Implementation Guide for Principal Evaluation). For more information about the goal setting process, review the Goal Setting sections in the School-Level Planning and Implementation Guide and the District-Level Planning and Implementation Guide.

The Transforming Educator Evaluation in Massachusetts (TEEM) Video Series includes local educators discussing goal setting along with other components of the 5-Step Evaluation Cycle.
 
 
 
2.
Can educators identify team goals?
 
Yes. Educators are encouraged to identify team goals (e.g., content area, grade level, administration) (603 CMR 35.06 (3)(b)). Additionally, educators and educator teams are encouraged to align goals to school and district priorities. ESE's SMART Goal development protocol and templates are available on the educator evaluation resources webpage. There are also resources to support goal development in the Guidebook for Inclusive Practice, as well as TEEM video content on Goal Setting.
 
 
 
3.
How is attainment of goals assessed?
 
Much of the evidence educators and evaluators collect documents progress toward meeting goals. Specifically, the evidence collected should demonstrate completion of action steps and the attainment of key benchmarks. The evaluator should assess all of the evidence related to an educator's goals and determine the extent to which the educator is progressing toward each goal (Formative Assessment/Evaluation) and, ultimately, whether or not the educator meets each goal (Summative Evaluation). For more information, read ESE's Performance Rating Guidance.
 
 
 
4.
Do the student learning and professional practice goals required in the educator evaluation regulations replace the goals on the Individual Professional Development Plans (IPDP) for educators?
 
No. The requirements for IPDPs under 603 CMR 44.00 (License Renewal) are unchanged by the evaluation regulations. However, as described in the Licensure and Professional Development section above, the goals and activities in an Educator Plan may be aligned with, and in many cases serve as, the educator's IPDP.
 
 
d.
Evidence
 
1.
Are educators required to provide evidence for every Indicator on the rubric?
 
There needs to be enough evidence associated with each Standard such that a rating on a given Standard can be supported. The body of evidence should be aligned to the individual educator's goals and the focus of his/her evaluation, as well as school and district priorities. Read our Evidence Collection Toolkit for guidance and district strategies for clear and meaningful evidence collection. Additional tools and resources around effective and efficient evidence collection include a brief on Professional Development to Support Evidence Collection from Brockton Public Schools, Analyzing Artifacts tools in the Guidebook for Inclusive Practice, and TEEM video content on Evidence Collection.
 
 
 
2.
How many pieces of evidence are educators required to collect?
 
There is no minimum or maximum requirement associated with evidence collection. Educators and evaluators should agree on the expectations for evidence related to the educator's goals, as well as his/her practice across the four Standards. Educators and evaluators should think strategically about evidence collection, keeping in mind that one piece of evidence often reflects practice associated with multiple Standards and Indicators. For more information on evidence collection, review Module 5: Gathering Evidence, Teacher Workshop 4: Gathering Evidence, and the Evidence Collection Toolkit.
 
 
 
3.
Do educators or districts need to submit educator evaluation evidence to ESE?
 
No. ESE does not collect any evidence (such as artifacts of practice or notes from observations) from individual educators or districts. It is up to individual districts to determine how evidence of educator practice will be collected and retained.
 
 
 
e.
Student and Staff Feedback
 
1.
What types of feedback must be incorporated into educator evaluations?
 
Each district must collect student feedback for use in educator evaluations and staff feedback for use in administrator evaluations. Part VIII of the Model System includes guidance on collecting and analyzing student and staff feedback.
 
 
 
2.
Who is required to use student and staff feedback?
 
According to the regulations (603 CMR 35.07 (1)), student feedback is a required piece of evidence for all educators and staff feedback is required for administrators.
 
 
 
3.
Are districts required to incorporate feedback from students with disabilities in educator evaluation?
 
While the regulations do not specify student populations, feedback from a representative sample of an educator's student population should be incorporated. According to the Administration Protocol for the MA Model Survey, "Collecting feedback from students with special needs is a valuable part of the evaluation process. Districts should make every effort to include all students, or a representative sample of all students, in their feedback collection. When students with disabilities engage in providing feedback, any accommodations must be consistent with IEPs and 504 Plans."
 
 
 
4.
How much does student and staff feedback "count" in an educator's evaluation?
 
Consistent with other guidance, there is no point value or numerical weight associated with feedback in an educator's evaluation. Districts have the flexibility to determine how student and staff feedback informs the evaluation process. Student and staff feedback may be gathered at multiple points in the 5-step evaluation cycle and considered formatively, summatively, or both. ESE recommends that student and staff feedback be used to inform an educator's self-assessment, shape his or her goal-setting process, and/or demonstrate changes in practice over time.
 
 
 
5.
What tools and resources has ESE provided to help districts implement student and staff feedback?
 
ESE has developed model survey instruments for collecting student and staff feedback:
  • Student surveys about classroom teacher practice (for students in grades 3-5 and 6-12)
  • Staff surveys about school leadership practice (including principals, assistant principals, directors, etc.)
  • Discussion prompts for K-2 students about classroom teacher practice
The ESE Model Feedback Surveys are optional for districts and are available in short and long forms. Survey items were developed, tested, and refined through a rigorous pilot project in the 2013-14 school year, a detailed description of which is included in Appendix D of Part VIII.
More information about the ESE Model Feedback Surveys and related guidance is available on the student & staff feedback webpage.
 
 
 
6.
Do districts have flexibility in the identification of feedback instruments for educators?
 
Yes. Districts may choose to implement district-wide feedback instruments, such as student or staff surveys, or they may create processes by which educators and evaluators can identify feedback instruments at the individual educator level (educator-specific instruments). These approaches are not mutually exclusive, and leaders may settle on a combination of district-wide and educator-specific instruments in order to best meet the needs of all educators. ESE has provided sample alternate approaches to collecting feedback in Part VIII of the Model System and the Guidebook for Inclusive Practice.

Districts are not required to adopt the model surveys. ESE recognizes that many districts may already have a history of collecting student and staff feedback (e.g., through the use of surveys). The model surveys are an available resource, aligned to the MA Standards and Indicators, but are not required.
 
 
 
7.
Were educators involved in the development of the ESE Model Feedback Surveys?
 
Yes. ESE is indebted to the 10,000 students and 1,500 staff who piloted survey items during the 2013-14 school year, and to the more than 2,200 students, parents, teachers, and school and district administrators who provided input along the way. For more information about the survey development process, including stakeholder engagement, read Appendix D of Part VIII Download PDF Document  Download MS WORD Document.
 
 
 
f.
Formative Assessment/Evaluation and Summative Performance Rating
 
1.
Does ESE expect a certain percentage of educator ratings at each performance level?
 
No. There is no expectation that a certain percentage of educators within a school or district fall into each Summative Rating performance level (Exemplary, Proficient, Needs Improvement, and Unsatisfactory). Please note that Proficient is a rigorous yet attainable level of practice, indicating that the educator has met all expectations for a given Standard.
 
 
 
2.
How do you evaluate an educator on Standards not covered by his/her goals?
 
Educator goals may or may not address practice across all four Standards. When evaluating an educator's practice related to a Standard not addressed by the educator's goals, the evaluator may use observational evidence, artifacts of practice specific to that Standard, and/or relevant measures of student learning, growth and achievement. Rubrics provide an organizing framework for evaluators when analyzing evidence related to Standards. Educators and evaluators should think strategically about evidence collection, keeping in mind that one piece of evidence often reflects practice associated with multiple Standards and Indicators. For more information about evidence, please see Module 5: Gathering Evidence, Teacher Workshop 4: Gathering Evidence, and the Evidence Collection Toolkit Download PDF Document  Download MS WORD Document.
 
 
 
3.
What is the difference between a Formative Assessment and a Formative Evaluation?
 
For educators on plans that are one year or less in duration, the Formative Assessment takes place mid-way through the cycle (typically January or February for a one-year plan). Evaluators may give ratings on goals and/or practice related to the Standards; ratings are not required. For educators on 2-year self-directed growth plans, a Formative Evaluation takes place at the end of year 1 (usually May or June). ESE requires districts to report ratings on each of the four Standards as well as an overall performance rating. Formative Evaluation ratings default to the prior Summative Evaluation Rating unless there is significant evidence suggesting a change (603 CMR 35.06(5)(b)).
 
 
 
4.
How are student learning, growth, and achievement incorporated into the Summative Performance Rating?
 
Evidence of student learning, growth, and achievement factors into the Summative Performance Rating in two significant ways. First, multiple measures of student learning, growth, and achievement are a required source of evidence. An evaluator reviews outcomes from the student measures an educator has collected to make judgments about the effectiveness of the educator's practice related to one or more of the four Standards. Such evidence may come from classroom assessments, projects, portfolios, and district or state assessments. Second, evaluators must consider progress toward attainment of the educator's student learning goal when determining the Summative Performance Rating.

For more information on determining Summative Performance Ratings, read the Performance Rating Guidance Download PDF Document  Download MS WORD Document and associated practice worksheets Download PDF Document  Download MS WORD Document.
 
 
 
5.
Does the Summative Performance Rating inform the Student Impact Rating?
 
No. The Massachusetts educator evaluation system is designed to allow educators and evaluators to focus on the critical intersection of educator practice and educator impact. Its two independent but linked ratings create a more complete picture of educator performance.
  • The Summative Performance Rating assesses an educator's practice against four statewide Standards of Effective Teaching or Administrator Leadership Practice, as well as an educator's progress toward attainment of his/her professional practice and student learning goals. This rating is the final step of the 5-step evaluation cycle.
  • The Student Impact Rating is a determination of an educator's impact on student learning, informed by patterns and trends in student learning, growth, and/or achievement based on results from statewide growth measures, where available, and district-determined measures (DDMs).
Taken together, these two ratings will help educators reflect not only on their professional practice, but also the impact they are having on their students' learning. The Summative Performance Rating determines the type of educator plan an educator is placed on and the Student Impact Rating determines the length of that plan for educators who receive a Summative Performance Rating of Exemplary or Proficient. A visual, video-based tutorial on the two ratings and their relationship to Educator Plans is available online.
 
 

IV. Student Impact Rating

 
a.
Implementation and Timing
 
1.
How is evidence of student learning, growth, and achievement incorporated into the Student Impact Rating?
 
Evaluators are responsible for determining a Student Impact Rating of high, moderate, or low for each educator based on patterns and trends across multiple measures of student learning, growth, and/or achievement (statewide growth measures and district-determined measures). Annual data from at least two measures are needed for each educator to establish patterns and trends.
  • Patterns refer to results from at least two different measures of student learning, growth, and achievement.
  • Trends refer to results from at least two years.
Statewide growth measures (e.g., median student growth percentiles (SGPs)) must be used as one measure where available. For more information on determining Student Impact Ratings, read the Impact Rating Guidance Download PDF Document  Download MS WORD Document. Resources and guidance to support the identification of common measures, or district-determined measures, are available on ESE's Developing Common Measures webpage and in the common assessments section of the Guidebook for Inclusive Practice.
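The pattern-and-trend requirement above can be sketched as a simple eligibility check. This is an illustrative sketch only, with a hypothetical data layout; it is not an ESE tool:

```python
def supports_impact_rating(results_by_year):
    """results_by_year maps a school year to the list of measure results
    available for that year, e.g. {"2014-15": ["high", "moderate"]}."""
    years_with_enough_measures = [
        year for year, measures in results_by_year.items() if len(measures) >= 2
    ]
    # A trend needs at least two years, each with a pattern of >= 2 measures.
    return len(years_with_enough_measures) >= 2

print(supports_impact_rating({"2014-15": ["high", "moderate"],
                              "2015-16": ["high", "high"]}))  # True
print(supports_impact_rating({"2015-16": ["high", "high"]}))  # False: one year only
```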
 
 
 
2.
What is the purpose of the Student Impact Rating?
 
A key purpose of the evaluation framework is "to promote student learning, growth, and achievement by providing educators with feedback for improvement, enhanced opportunities for professional growth, and clear structures for accountability (603 CMR 35.01(2)(a))." The Student Impact Rating is designed to ensure that the evaluation process is focused on students by requiring evaluators and educators to focus specifically on student outcomes from multiple measures.
 
 
 
3.
How will Student Impact Ratings be used?
 
Student Impact Ratings are used in several ways. First, they determine whether an experienced educator who earns a Summative Performance Rating of Proficient or Exemplary will be placed on a one- or two-year Self-Directed Growth Plan. Second, when there is a discrepancy between an educator's Summative Performance Rating and Student Impact Rating, the Student Impact Rating serves as a spur to explore and understand the reasons for the discrepancy. Lastly, when combined with an educator's Summative Performance Rating, Student Impact Ratings provide a basis for recognizing and rewarding Exemplary educators and for identifying educators who may be eligible for additional roles and responsibilities, subject to local collective bargaining agreements.
 
 
 
4.
When will districts issue Student Impact Ratings?
 
In accordance with the revised implementation timeline outlined in the Commissioner's August 15, 2013 memorandum, ESE will begin collecting Student Impact Ratings from districts at the end of the 2015-2016 school year. Some districts have requested and been granted additional time through the process outlined in the Alternative Pathways QRG Download PDF Document  Download MS WORD Document. No district has been approved to report Student Impact Ratings later than 2016-17 for some educators and 2017-18 for all educators.
 
 
 
5.
How will student learning, growth, and achievement be assessed for specialized instructional support personnel (SISP) - e.g., nurses and counselors?
 
For educators whose primary role is not as a classroom teacher, appropriate measures of the educator's contribution to student learning, growth, and achievement must be identified. ESE worked with statewide associations to support the identification of appropriate measures for a variety of specialized instructional support personnel roles. Read the Implementation Brief on Indirect Measures and SISP Download PDF Document  Download MS WORD Document and visit the developing common measures webpage for more information and examples.
 
 
 
6.
Does the Student Impact Rating inform the Summative Performance Rating?
 
No. The Massachusetts educator evaluation system is designed to allow educators and evaluators to focus on the critical intersection of educator practice and educator impact. Its two independent but linked ratings create a more complete picture of educator performance.
  • The Summative Performance Rating assesses an educator's practice against four statewide Standards of Effective Teaching or Administrator Leadership Practice, as well as an educator's progress toward attainment of his/her professional practice and student learning goals. This rating is the final step of the 5-step evaluation cycle.
  • The Student Impact Rating is a determination of an educator's impact on student learning, informed by patterns and trends in student learning, growth, and/or achievement based on results from statewide growth measures, where available, and district-determined measures (DDMs).
Taken together, these two ratings will help educators reflect not only on their professional practice, but also the impact they are having on their students' learning. The Summative Performance Rating determines the type of educator plan an educator is placed on and the Student Impact Rating determines the length of that plan for educators who receive a Summative Performance Rating of Exemplary or Proficient.

For more information about the intersection between the Student Impact Rating and Summative Performance Rating, read the Impact Rating Guidance Download PDF Document  Download MS WORD Document.
 
 
 
7.
What are alternative pathways for evaluating educator impact?
 
In the spring of 2015, ESE provided districts the opportunity to submit a proposal to use an alternative pathway for determining Student Impact Ratings. Alternative pathways were approved if they met the five core principles outlined in the Alternative Pathways QRG Download PDF Document  Download MS WORD Document. Districts using an approved alternative pathway are still required to determine Student Impact Ratings based on student outcomes from multiple measures of student learning and must use at least one common assessment as a piece of evidence for each educator.
 
 
 
b.
Statewide Growth Measures
 
1.
What are Student Growth Percentiles and are they used in the determination of an Educator's Student Impact Rating?
 
Student Growth Percentiles (SGPs) are measures of student growth produced by the statewide growth model, which has been in place since 2008. Massachusetts measures growth for an individual student by comparing his or her achievement on statewide assessments (e.g., MCAS, PARCC) to that of all other students in the state who had similar historical statewide assessment results (the student's "academic peers").

The median Student Growth Percentile (median SGP) for an educator represents the exact middle SGP score for that educator's students. In other words, half of an educator's students performed above (or below) the median SGP score. The educator evaluation regulations require that statewide growth measures be used in the determination of an educator's Student Impact Rating "where available" (603 CMR 35.09(2)). For more information, see the Implementation Brief on Using Student Growth Percentiles Download PDF Document  Download MS WORD Document.
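The computation described above can be illustrated with a minimal example, using a hypothetical roster of seven students (the SGP values here are invented for illustration):

```python
import statistics

# Each value is one student's growth percentile (1-99) on the state assessment.
student_sgps = [12, 35, 48, 55, 61, 72, 88]

# The median is the middle score: half the students fall above it, half below.
median_sgp = statistics.median(student_sgps)
print(median_sgp)  # 55
```

With an even number of students, `statistics.median` averages the two middle scores rather than returning a single student's SGP.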
 
 
 
2.
For which educators must median Student Growth Percentiles (SGPs) be used as one of the measures used in determining a Student Impact Rating?
 
A district is required to use median SGPs as one measure in determining a teacher's Student Impact Rating for all teachers who teach 20 or more students for whom SGPs are available in the teacher's content area (ELA or math). For teachers who are responsible for both math and ELA instruction in tested grades, the district is only required to use median SGPs from one subject area in the determination of these teachers' Student Impact Ratings, but may choose to use SGPs from both math and ELA. The use of median SGPs is only required when student SGPs are based on the previous year's statewide assessment. As a result, 10th grade SGPs are not required to be used, since students did not complete a statewide assessment during 9th grade.
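The teacher-level requirement above can be sketched as a simple check. The function name and inputs are hypothetical (a district would draw the subject and student counts from its own rosters); this is a sketch of the stated rule, not an ESE rule engine:

```python
def median_sgp_required(students_with_sgps: int, subject: str) -> bool:
    """Per 603 CMR 35.09(2) as described above, a teacher's Student Impact
    Rating must include a median SGP when the teacher instructs 20 or more
    students with SGPs in a tested content area (ELA or math)."""
    return subject.lower() in {"ela", "math"} and students_with_sgps >= 20

print(median_sgp_required(24, "math"))     # True
print(median_sgp_required(15, "ELA"))      # False: fewer than 20 students
print(median_sgp_required(30, "science"))  # False: no statewide growth measure
```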

A district is required to use median SGPs as one of the measures used to determine an administrator's Student Impact Rating if the administrator supervises educators responsible for ELA or math instruction and there are 20 or more students with SGPs in the content area. 10th grade SGPs must be used for administrators whose responsibilities include supervising ELA or math instructors in grades 9 and 10 (e.g., a high school principal). Similar to teachers, districts need to define which administrators are responsible for academic content (i.e., supervise educators who deliver instruction in the content area).

For more information about required and optional use of median SGPs in the determination of Student Impact Ratings, read the Implementation Brief on Using Student Growth Percentiles Download PDF Document  Download MS WORD Document. Read the Implementation Brief on Educators of Students with Disabilities Download PDF Document  Download MS WORD Document and the Implementation Brief on Educators of English Language Learners Download PDF Document  Download MS WORD Document for information about required and optional use of SGPs for educators of these special populations.
 
 
 
3.
Does the change in state assessment and related hold harmless provisions impact educator evaluation implementation?
 
The Board of Elementary and Secondary Education voted on November 17, 2015 to transition to a next-generation MCAS. Any districts that administer PARCC in spring 2016 will be held harmless for any negative changes in their school and district accountability levels, although the commissioner has authority to designate a school as Level 5.

The hold harmless provisions related to district and school accountability are designed to ensure that districts and schools are not negatively impacted during the transition to a new state assessment. The same principle applies to individual educators. Where available, student growth percentiles (SGPs) from state assessments must be used to inform an educator's Student Impact Rating Download PDF Document  Download MS WORD Document. However, during this transition, educators' ratings will not be negatively impacted by SGPs.

Specifically, since the Student Impact Rating is determined by an evaluator's professional judgment (there are no prescribed weights or algorithms used to determine Student Impact Ratings), evaluators will examine whether SGPs during the transition are discrepant in a negative way from other measures of the educator's impact and, if so, will discount them. The vast majority of educators will be unaffected, because their Student Impact Ratings are not informed by SGPs.
 
 
 
c.
DDMs
 
1.
What are District-Determined Measures (DDMs)?
 
District-determined measures (DDMs) are measures of student learning, growth, or achievement selected by the district. DDMs are designed to provide important feedback to educators about student learning. These measures should be closely aligned to the Massachusetts Curriculum Frameworks, or other relevant frameworks, and be designed to provide comparable evidence of the level of growth demonstrated by different students.
 
 
 
2.
Is a DDM just another test?
 
Not necessarily. Districts have considerable flexibility to identify the best measures to ensure that they are well aligned to content and provide meaningful information to educators. A wide range of assessment types may be used, including portfolios, performance assessments, projects, and traditional paper-and-pencil tests. Where applicable, districts are encouraged to use existing assessments Download PDF Document  Download MS WORD Document that are aligned to the curriculum and provide meaningful information to educators about their students. Read Technical Guide B Download PDF Document  Download MS WORD Document for more information about the characteristics of an ideal DDM.
 
 
 
3.
What types of resources are available to support districts in the identification/development of DDMs?
 
ESE has published a number of resources to support districts with DDM identification/development. A comprehensive list of ESE supports for DDM identification/development is available on ESE's Student Impact Rating guidance page.

The DDM Implementation Briefs are short resource documents (similar to our Quick Reference Guides) focused on specific DDM topics such as scoring and parameter setting Download PDF Document  Download MS WORD Document, using student growth percentiles Download PDF Document  Download MS WORD Document, investigating fairness Download PDF Document  Download MS WORD Document, using indirect measures for specialized instructional support personnel Download PDF Document  Download MS WORD Document, administrators Download PDF Document  Download MS WORD Document, educators of English language learners Download PDF Document  Download MS WORD Document and special education Download PDF Document  Download MS WORD Document, and continuous improvement Download PDF Document  Download MS WORD Document. ESE is grateful for the collaboration with statewide professional associations in developing these briefs.

ESE has also released guidance on coaching teacher teams to develop common measures for use as DDMs. The Guidebook for Inclusive Practice also includes tools for reviewing the accessibility of common assessments Download PDF Document  Download MS WORD Document and measuring the growth Download PDF Document  Download MS WORD Document of students with diverse learning profiles.
 
 
 
4.
Does ESE have approved measures?
 
No. ESE has supported the development and sharing of common measures. These example measures can be used and modified by districts. However, districts are ultimately responsible for ensuring that DDMs are of sufficient quality to provide meaningful feedback to educators and evaluators.
 
 
 
5.
If a teacher teaches more than one subject, course, or grade is he/she required to have DDMs for each of them?
 
Each educator must be matched with at least two measures (DDMs or statewide growth measures). Statewide growth measures must be used as one measure, where available. Districts are not required to identify DDMs for all grade/subjects or courses a given educator teaches.
 
 
 
6.
Can an educator be matched with more than two measures in a given year?
 
Yes. The regulations (603 CMR 35.09(2)(a)) describe using "at least two state or district-wide measures" in each year. Districts may use more than two measures subject to local collective bargaining agreements.
 
 
 
7.
Can districts match educators with different DDMs from one year to the next?
 
Yes. Districts may need or want to change DDMs for a variety of reasons. For example, changes in educator assignment, shifts in local curricula, or emerging district priorities are all potential reasons for districts to change DDMs. Similarly, as part of the continuous improvement of DDMs, districts will be reviewing student results and may, as a result, determine that a DDM must be modified or changed.
 
 
 
8.
How will a district establish trends in learning, growth, or achievement if DDMs change from one year to the next?
 
DDMs should measure student growth during a single year; as a result, different measures can be used in different years. For example, suppose a teacher's typical sixth-grade student shows high growth on two measures one year, and, after the teacher transfers to eighth grade, her typical eighth-grade student shows high growth on both new measures the following year. That teacher can earn a Student Impact Rating of "high" based on those patterns and trends. That said, if an educator changes districts across years, his/her students' results from the previous year cannot be used to construct a trend because of the confidentiality provisions of the regulations.
 
 
 
9.
How can districts ensure DDMs fairly measure impact of teachers of students with disabilities and ELLs?
 
DDMs should provide all students an equal opportunity to demonstrate growth. Districts should engage educators with specialized knowledge about students with disabilities and ELL students in the selection and improvement of DDMs. ESE's DDM Implementation Briefs provide multiple strategies districts can explore for checking for and addressing bias in measures.
 
 
 
10.
How should a district select DDMs for administrators?
 
Just as with teachers, districts should engage administrators in the process of identifying and selecting their DDMs. DDMs for administrators can be specialized measures of student learning from across the school, aggregates of DDMs used to evaluate other educators, or indirect measures appropriate to the administrator's role. For more information, read ESE's Implementation Brief on DDMs for Administrators Download PDF Document  Download MS WORD Document.
 
 
 
11.
Are there resources related to DDMs for Career/Vocational Technical Education (CVTE) educators?
 
Yes. ESE worked with CVTE educators and leaders from across the Commonwealth to create resources to support DDM identification/development. Resources include case studies and examples from CVTE programs in MA, along with guidance to support schools and districts in different stages of DDM implementation.
 
 
 
12.
Are there resources related to DDMs for Specialized Instructional Support Personnel (SISP) (e.g., nurses, school counselors, school psychologists)?
 
Yes. ESE worked with leaders from statewide SISP associations to publish the Implementation Brief on Indirect Measures and SISP Download PDF Document  Download MS WORD Document.
 
 
 
13.
What about teachers who share students?
 
Districts should create a definition of "teacher of record." Multiple teachers can meet the definition for a given student or group of students. For example, if a student receives regular English language arts instruction and receives additional lessons from a different teacher, both teachers may meet the definition of teacher of record for that student.
 
 

V. Data Reporting & Confidentiality

 
1.
What are the reporting requirements with respect to the regulations?
 
Section 35.11(5) of the regulations specifies that "Districts shall provide the Department with individual educator evaluation data for each educator in the district in a form and manner prescribed by the Commissioner, including, but not limited to: a) the educator's performance rating on each standard and overall; b) the educator's professional Teacher status; c) the educator's impact on student learning, growth and achievement (low, moderate, high)."

For more information on educator evaluation rating data reporting, please refer to the Quick Reference Guide on Data Collection Download PDF Document  Download MS WORD Document.
 
 
 
2.
Will ESE require submission of formative as well as summative ratings?
 
Ratings on formative assessments (the midpoint of a plan of one year or less in duration) are not reported, but ratings on formative evaluations are reported. Under the regulations, only experienced educators on two-year Self-Directed Growth Plans receive formative evaluations, which are completed at the end of the first year of the two-year plan. Per the regulations, formative evaluation ratings "…shall be assumed to be the same as the previous summative rating unless evidence demonstrates a significant change in performance in which case the rating on Performance Standards may change" (603 CMR 35.06(5)(b)).
 
 
 
3.
What will ESE require districts to report with regard to individual educators' Student Impact Ratings and when?
 
Districts will report a rating of Low, Moderate, or High Impact for individual educators, based on at least two years of data and at least two measures in each year. ESE will begin collecting Impact Ratings following the 2015-2016 school year. ESE does not intend to collect individual educator information on which measures were used to determine an Impact Rating or student data from those measures.
 
 
 
4.
What educator evaluation data is made public?
 
ESE publicly reports aggregate data that do not identify individual educators. These data include Performance Ratings in aggregate at the state, district, and school levels and will include Student Impact Ratings when available, beginning with data from the 2015-2016 school year. These data are further disaggregated by educator subgroups: administrators; principals (a subset of administrators); non-administrators; teachers (a subset of non-administrators); teachers with professional teacher status or "PTS" (a subset of teachers); and non-PTS teachers (a subset of teachers). Each of these educator subgroups is defined by specific EPIMS job codes.

Public reports can be found on the Profiles webpage. Further details about data reporting and the educator subgroups are available on this page in the document titled, "More information about the data."
 
 
 
5.
How is the confidentiality of individual educator evaluation data protected?
 
Per state regulation, "Any data or information that school districts or the Department or both create, send, or receive in connection with educator evaluation that is evaluative in nature and may be linked to an individual educator, including information concerning an educator's formative assessment or evaluation or summative evaluation or performance rating or the student learning, growth, and achievement data that may be used as part of an individual educator's evaluation, shall be considered personnel information within the meaning of M.G.L. c. 4, § 7(26)(c) and shall not be subject to disclosure under the public records law." (From 603 CMR 35.11(6)) This is also protected under state legislation (see Section 2 of "An Act Providing for the Implementation of Education Evaluation Systems in School Districts").

ESE has also taken additional precautions to ensure that individual educator ratings cannot be identified, applying rigorous "suppression rules" to the public release of the data. Where cells are blank in the performance rating data for a district, school, or educator subgroup on the Profiles website, ESE has suppressed the data for at least one of three reasons: 1) the number of staff evaluated was fewer than six; 2) all staff evaluated in the group received the same rating; and/or 3) all educators were evaluated and a single educator had a rating different from all other educators in the group.
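The three suppression reasons above can be sketched in code. This is a hypothetical illustration of the stated rules, not ESE's actual reporting logic; the sketch also assumes the "all educators were evaluated" condition of reason 3 already holds:

```python
def suppress(ratings):
    """Return True if an aggregate cell should be left blank, applying the
    three suppression reasons to one group's list of performance ratings."""
    if len(ratings) < 6:        # reason 1: fewer than six staff evaluated
        return True
    if len(set(ratings)) == 1:  # reason 2: every educator received the same rating
        return True
    counts = [ratings.count(r) for r in set(ratings)]
    if len(counts) == 2 and min(counts) == 1:
        # reason 3: a single educator's rating differs from all the others
        return True
    return False

print(suppress(["Proficient"] * 7 + ["Exemplary"]))      # True (reason 3)
print(suppress(["Proficient"] * 5 + ["Exemplary"] * 3))  # False: safe to publish
```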

Information related to the evaluation of a superintendent, however, is an exception. The Open Meeting Law carves out an exception to the Public Records Law for "materials used in a performance evaluation of an individual bearing on his professional competence" that were created by members of a public body and used during a meeting (see G.L. c. 30A, s. 22(e)). Individual evaluations created and used by members of a public body for the purpose of evaluating an employee are public records. For more information, please see Appendix J of the Implementation Guide for Superintendent Evaluation Download PDF Document  Download MS WORD Document.
 
 
 
6.
Do the regulations protect the confidentiality of information collected for the purpose of educator evaluation beyond the data reported to ESE?
 
Yes, for all educators other than the superintendent, the regulations guarantee that any information concerning an educator's formative assessment, formative evaluation or summative evaluation is considered personnel information and is not subject to disclosure under public records law.
 
 




Last Updated: December 30, 2015