Are Teachers Still Widgets? Five years after the Widget Report

Tim Daly, President of TNTP

Five years ago, TNTP (formerly known as The New Teacher Project) released The Widget Effect, a report that influenced the debate about educator workforce policy-making and generated advocacy for policies such as improved teacher performance evaluations, administrator training, and performance-based pay.

According to TNTP, the report identified a widespread problem in schools across the country: a “near total failure to acknowledge differences in teacher effectiveness.” The study examined twelve school districts in four states and found that almost all teachers were rated “good” or “great” on their formal performance evaluations, while very few were rated “poor.” The researchers reported that teacher performance played an insignificant role in hiring decisions, compensation, and professional development. In the twelve districts studied, a teacher’s performance became an issue only when it was so problematic that dismissal was considered, yet the findings revealed that dismissals rarely occurred.[1]

This month, TNTP released a week-long blog series discussing what has been learned about teacher evaluation over the last five years.

The Widget Effect at Five: Where Are We Now? Tim Daly, President of TNTP, introduces the blog series by explaining the “widget effect” phenomenon, reviewing the report’s findings, sharing recommendations, and identifying some of the changes that have occurred since the release of the report.

School Leaders on Teacher Evaluation: Two school administrators describe the changes to evaluation processes in their schools, teachers’ reactions to new evaluation processes, strategies to engage stakeholders, and how they address challenges.

Teaching Before and After IMPACT: A classroom teacher shares her impressions about the roll out of the IMPACT evaluation system in the District of Columbia Public Schools and how it improved her teaching.

Where Evaluation Policy Stands: This blog features the opinions of policy thought leaders about the future of evaluation reform.

4 Things We’ve Learned Since the Widget Effect: Tim Daly reflects on what is currently working to improve teacher evaluations, as well as what is needed to establish systems and processes to ensure that students receive high-quality instruction from effective teachers.

An important insight in the final blog, in my opinion, is Daly’s reflection that “Implementation matters more than design.” According to Daly, “Many states and districts have designed new teacher evaluation systems over the last five years, but not nearly as many have fully implemented them, or implemented them well (at least so far).”[2] Federal, state, and district policy reforms have led to the design and installation of new evaluation systems intended to improve teacher observations and other evaluation system components. Observation practice is cited as just one example of the gap between design and implementation: Daly contends that the strategies being used to improve evaluation systems are not enough to accomplish the goals of this reform agenda, and that even with more frequent observations and improved observation rubrics, observers in most schools still do not conduct evaluations that are both accurate and rigorous.

Daly recognizes that full implementation of this reform requires a “sea change in how everyone involved in our public schools thinks about and manages the quality of instruction,” and that simply launching a new rubric or distributing information from the central office will not yield implementation. TNTP’s analysis of lessons learned suggests that school systems typically consider implementation of a new evaluation system in only two areas: training administrators and explaining the new system to teachers.

It should not be a surprise that an initiative this complex and politically charged would struggle with implementation. Scaling up an effort with the scope of evaluation reforms in multiple states and thousands of school districts is a heavy lift. A lot is known about what contributes to full implementation and what causes initiatives to stall or to be implemented in ways that do not accomplish the intent of a program. West Wind Education Policy has been engaged in an ongoing study of implementation practices with experts from the National Implementation Research Network (NIRN). NIRN has extensive experience supporting the work of education and other human service agencies; through the State Implementation and Scaling-up of Evidence-based Practices (SISEP) Center, it has provided ongoing leadership to several state education agencies and enabled states to advance purposeful implementation of several different evidence-based initiatives. West Wind staff members have supported state agencies and other organizations in building the capacity and infrastructure needed to sustain systems-wide educator workforce reforms.

Recognizing that implementation is important, TNTP advocates for comprehensive actions beyond administrator training and informing teachers. Their recommendations include:

  • Constant follow up by the parties responsible for operating schools;
  • Real-time access to evaluation data and the ability to make course corrections throughout the school year to ensure fairness and accuracy; and
  • Attention to the complexities of changing culture to address any anxiety and resistance.

These TNTP recommendations to attend to implementation seem reasonable, but they are not enough to bring about the complex changes needed in organizations and systems with a long history of evaluation practices that fail to improve teaching. Reformers would benefit from the active implementation frameworks developed by NIRN. The frameworks include:

  • Usable intervention criteria – description, essential components, operational definitions, and fidelity assessments related to program outcomes;
  • Implementation stages – exploration, installation, initial implementation, and full implementation;
  • Implementation drivers – competency, organization, leadership, and integration;
  • Improvement cycles – Plan/Do/Study/Act, usability testing, practice-policy communication; and
  • Implementation teams – expertise in best practices in implementation, sustainability of interventions, organization, and system change.[3]

We recognize from our work with states that SEA leaders and agencies providing technical assistance to practitioners are doing the best they can to get complicated processes in place. Most likely, some organizations are using implementation best practices to go to scale with teacher performance evaluation systems. TNTP’s blog series points out some important challenges and offers relevant recommendations for future work. Advocating for more attention to implementation at the five-year mark is a good place to start.

Read the entire five-part series to learn more about the status of teacher evaluation reforms five years after the release of the Widget Report.

[1]  Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. New York: The New Teacher Project.  Retrieved from http://tntp.org/assets/documents/TheWidgetEffect_2nd_ed.pdf

[2] Daly, Tim. October 10, 2014. 4 Things We’ve Learned Since The Widget Effect. TNTP reimagine teaching. Retrieved from http://tntp.org/blog/post/4-big-things-weve-learned-about-teacher-evaluation-since-the-widget-effect

[3] Fixsen, Dean. Our Approach. The National Implementation Research Network, FPG Child Development Institute, University of North Carolina, Chapel Hill.

