Massachusetts Performance Assessment for Leaders (PAL)

Performance Assessment for Leaders: Information from the 2014-15 Field Trial and Policies and Procedures for Program Year 2015-16

To: Principal/Assistant Principal Preparation Programs
From: Elizabeth Losee, Assistant Director for Educator Preparation and Educator Assessment
      Margaret Terry Orr, Bank Street College of Education, PAL Assessment Development Director
Date: September 30, 2015


As the PAL Program Year 2015-16 gets underway, we want to share our experiences from the Field Trial, the lessons learned and changes made in response, and the new policies and procedures for this year.

The Field Trial

The Field Trial was conducted from September 1, 2014, through May 15, 2015. The results are summarized below.

Enrollment: During this period, 769 candidates enrolled in the Field Trial, most during the last quarter. Of these, 477 submitted all four tasks, with most work uploaded during the final few weeks.

Number of Submitted Tasks    Number of Candidates    Percent of Enrolled Candidates
4 Tasks Submitted                     477                        62.0%
3 Tasks Submitted                       9                         1.2%
2 Tasks Submitted                      36                         4.6%
1 Task Submitted                       68                         9.0%
None Submitted                        179                        23.3%
Total                                 769

The total number of candidates with complete, scorable submissions was 416. The remaining candidates either had one or more submissions returned and will need to resubmit as part of full implementation, or have work under review for irregularities.

Scorer Recruitment and Training: Scorer recruitment and training began in 2013, continued throughout the Field Trial period, and will continue throughout the 2015-16 academic year. Of those who signed up and participated in scorer training online or in person, 30 completed the training and were certified. Scorer training was delayed by the lack of sufficient task submissions to use for training purposes. Early submissions often were not well aligned with the task instructions and rubrics, and thus were poor exemplars for training.

All trained and certified scorers are from Massachusetts: the majority of our scoring team are current or former school and district leaders, and the remainder are current program faculty. All scorers meet certification requirements before becoming eligible to score.

Scoring Issues and Considerations: Three primary issues with candidate submissions affected and delayed the scoring process: submissions that were not blinded, incomplete work, and instances of cheating or plagiarism. Identifying and resolving work products with these issues slowed the scoring process considerably.

Candidates are required to remove all personally identifying information from their submissions, including information that would identify themselves, their schools, their students, or their staff. This is explained in the Candidate Assessment Handbook as well as on the document required to be submitted with each task. Work products that were not blinded were returned to candidates to revise and resubmit. About 20% of the work products, particularly those submitted in the final weeks of the Field Trial, were not sufficiently blinded and included personally identifying information.

To qualify for licensure during the Field Trial, candidates had to submit complete work products for all four tasks according to the task descriptions and rubrics in the Candidate Assessment Handbook. While the ShowEvidence assessment management system registers when candidates have submitted work for all required artifacts, categories, and commentaries (and indicates that the submission is complete), submissions were not deemed complete if the submitted work was not responsive to the task. To be scorable, submitted work had to match the task instructions (e.g., a post-conference video or a description of an implemented strategy) and contain sufficient evidence to be scored at least at the rubrics' lowest levels. Any work flagged as not responsive or not scorable was reviewed by a second or even third scorer for a final determination. Most follow-up reviews concurred, and the work was returned. In a few instances, where the work was judged to be close to complete, candidates were offered another opportunity to revise and resubmit. The rest were instructed to prepare work for the 2015-16 Program Year cycle.

In all, eight percent of the enrolled candidates who submitted all four tasks had one or more submissions determined not to be responsive to the task, rendering that work not scorable for the task.

We found a few cases of irregularities (2% of all candidates), which are currently under review. This experience led to a revision of the Rules of Assessment Participation and other administrative improvements.

Field Trial Feedback: At the end of the Field Trial, we collected feedback from candidates (n=92), program faculty (n=15) and scorers (n=17) about the tasks, instructions, rubrics and process. We also asked scorers for detailed feedback on the instructions and rubrics that would improve scoring. Finally, we compiled our own analysis of the task instructions and rubrics, based on candidate and scorer questions raised throughout the process and our content analysis of the work products submitted. With this information, we revised the instructions and rubrics, which were reviewed by the design team and content validity team. The primary changes that we applied to the Handbook and Field Guide were to:

  • Reduce redundancy in the work products
  • Clarify instructions
  • Clarify level differentiation on some rubric indicators

We added general instructions to the Candidate Assessment Handbook as well, which:

  • Provide clarification about completing the tasks, in which candidates should:
      - Clarify their own role in describing and analyzing the work completed for each assessment task
      - Blind the submission to remove identifying information
      - Adhere to the length requirements for the work products
      - Use feedback evidence, not just state a summary conclusion
  • Emphasize basic guidelines for preparing artifacts and required documents, and for writing a commentary about their leadership skills in performing the tasks.

We also clarified the conditions under which a candidate's work will be returned without scoring and must be resubmitted (a resubmission fee will be incurred). These conditions are shown, by task, below:

Conditions for Resubmission                                      Task 1   Task 2   Task 3   Task 4
Work is not blinded                                                 x        x        x        x
Does not address an academic priority area                          x        x        x        x
Does not include all supporting documents                           x        x        x        x
Videos are not of the appropriate length and quality                                  x
Does not describe a working group and does not include
  family or community input                                                                    x
A strategy was not implemented                                                                 x

Candidates are required to blind their work and will be given one opportunity to revise and resubmit (without a new fee) work that was not sufficiently blinded.

Field Trial Results: The results showed that the assessment differentiates candidate performance both within and across the four tasks.

PAL Task   Number of Rubrics   Number of Indicators   Mean Score (out of 4.0)   % at Level 3 or Higher
Task 1             3                    6                      2.8                       73
Task 2             3                    6                      3.0                       86
Task 3             4                    8                      2.9                       89
Task 4             3                    6                      2.6                       68
Total             13                   26                      2.8

Demographic analysis of candidate performance by task shows little consistent difference by gender or race/ethnicity, but a modest, somewhat consistent difference between preparation program candidates and administrative apprenticeship/internship/panel review candidates. These data were shared and discussed with the Bias-Review Committee, which met on September 29, 2015, to ensure that the PAL is bias-free.

Demographics of PAL Field Trial Candidates                  Number      %
Total                                                          416    100
Non-program (administrative apprenticeship/panel review)        75     18
Program                                                        341     82
Female                                                         266     64
Male                                                           150     36
African American                                                14      3
White                                                          309     74
No preference                                                   65     16

Preparing for PAL Program Year 2015-16
PAL is now moving into its first implementation year this fall. As we shift to implementation, we have made policy and structural updates and changes to the assessment. The policy updates and changes include:

  • Establishing cut scores to qualify candidates for licensure recommendation; the cut scores are determined by the Commissioner based on the results from the Field Trial and will be implemented in November;
  • Clarifying policies related to cheating and plagiarism;
  • Creating conditions for candidate appeals; and
  • Establishing policies for fee payment for initial submissions and resubmissions.

The structural updates and changes include:

  • Adding a fee payment process for registration and submission of completed work;
  • Incorporating a means of reviewing candidate work for possible cheating or plagiarism;
  • Revising materials and guidelines to incorporate changes in the task instructions and rubrics;
  • Retraining scorers on the revised instructions and rubrics (underway in October); and
  • Recruiting and training new scorers (to begin in November).

The revised Candidate Assessment Handbook has already been distributed and will continue to be accessible to candidates and programs through a password-protected portal.

The revised Administrative Field Guide will be available in early October along with revised confidentiality forms for Task 3.

There will be a meeting of leadership preparation programs on November 9th to review policies and expectations for PAL Program Year 2015-16. This will include information on the cut score and score reporting processes. We welcome your questions and suggestions. In the meantime, please encourage your faculty and school and district leaders to sign up for scorer training to ensure a large and active pool of certified scorers.


