What Does a Strategic Evaluation System Include?

For districts to make decisions based on evaluation data, they need a strategic evaluation system that fosters differentiation among evaluation outcomes. T-TESS offers a strong foundation that districts can build on to extend the impact of their evaluation systems. T-TESS comprises three components:

  1. Goal-setting and professional development plan;
  2. The evaluation cycle (including pre-conference, observation and post-conference); and
  3. A student growth measure that is not required to be an objective student assessment.

Components (1) and (2) make up 80% of the evaluation, while the student growth measure represents the remaining 20%. Embedded within components (1) and (2) is a detailed and comprehensive T-TESS rubric composed of four domains: (1) Planning, (2) Instruction, (3) Learning Environment, and (4) Professional Practices and Responsibilities.

With the addition of the key components outlined below, school districts will be better equipped to make informed strategic human capital decisions.

Student Achievement

What this is:

It is the use of multiple assessments to measure both (i) absolute student achievement and (ii) student achievement growth, either within the year or year-over-year. Assessments could include state standardized assessments, Measures of Academic Progress (MAP), I-Station, ITBS, or any other standard assessment used district-wide. Each assessment must go through a district process to ensure the validity and reliability of the testing instrument.

How incorporating this strengthens T-TESS:

It requires a student growth measure that quantifies the extent to which a teacher contributes to his/her students’ learning over the course of the year.

Administrator Observation

What this is:

It is a combination of informal coaching and formal observations conducted by a principal, assistant principal, or instructional leader at the school, based on a rubric of teacher and student behaviors. Observers are trained and normed to rate teacher practice accurately and consistently according to the rubric.

How incorporating this strengthens T-TESS:

It includes an inter-rater reliability requirement to ensure evaluators are fairly and accurately rating teachers, such that observers observing the same teacher give him or her the same ratings on the observation rubric.

Student Perception Survey

What this is:

This is a way to capture student voice through a perception survey administered to students in grades 3-12. This should be a research-based survey that captures students’ feedback about their classroom experience.

How incorporating this strengthens T-TESS:

T-TESS does not include student survey data; adding a perception survey incorporates student voice into the evaluation and gives teachers feedback they can use to improve their instructional practice.

*Evaluation component weights should be adjusted for teacher type (e.g., a second-grade teacher will not have a student perception survey, so the other components’ weights will be adjusted accordingly).
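The footnote above describes adjusting weights when a component does not apply to a teacher. A minimal sketch of one common approach, proportional renormalization, using hypothetical weights rather than any T-TESS-prescribed values:

```python
def renormalize_weights(weights, missing):
    """Redistribute weight from missing components proportionally across
    the remaining components so the total stays at 100%."""
    remaining = {k: v for k, v in weights.items() if k not in missing}
    total = sum(remaining.values())
    return {k: v / total for k, v in remaining.items()}

# Hypothetical component weights for illustration only.
weights = {"observation": 0.50, "student_growth": 0.30, "student_survey": 0.20}

# A second-grade teacher has no student perception survey.
adjusted = renormalize_weights(weights, missing={"student_survey"})
# adjusted is approximately {"observation": 0.625, "student_growth": 0.375}
```

Each remaining component keeps its relative share of the evaluation, so a teacher without a survey is not advantaged or disadvantaged by the missing component.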

For additional details on how other districts and states weight the components, see the following resource.

For more details and the rationale for each of these components, please read below. For more details on implementing these components, see the Implementation Considerations section.

Student Achievement

Districts should include a student growth measure, in addition to a measure of absolute student performance, because it provides a richer, more comprehensive picture of student learning. A measure of absolute student performance indicates the performance of a student at one point in time, while a student growth measure indicates the progress of a student over time. Different types of student growth measures provide different types of information. Below you will find details on three of the most common student growth measures. The first two are objective measures based on common assessments, while the last is a subjective measure dependent on teacher and principal judgment:

  1. Student growth percentiles (SGPs): this is a measure that compares a student’s current performance to that of peers who had similar past performance.
  2. Value-added model (VAM): this is a measure that determines the impact of an educator or school on student learning and controls for factors outside of a teacher’s control that influence student achievement.
  3. Student learning objectives (SLOs): this is a measure of student progress based on student growth goals set by teachers.

For additional details on how other districts and states weight the student achievement component, see the following resource.

For details on additional student growth measures, such as growth tables, see Growth Data: It Matters, and It’s Complicated by the Data Quality Campaign.

Student Growth Percentiles

What it is: Student growth percentiles (SGPs) show how a student’s achievement at the end of the year compares to other students who started at the same level at the beginning of the year.

What it does: SGPs indicate progress in terms that are familiar to teachers and parents. Typically, a teacher is evaluated based on the median growth percentile (MGP), which is useful because it is not drastically altered by one or two students performing exceptionally well or poorly.

What it doesn't do: SGPs do not account for factors outside of test scores that may contribute to student learning. Additionally, the measure does not provide any information on student achievement relative to grade-level standards or account for variations in students or classes.
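The robustness of the median growth percentile can be illustrated with a small sketch; the SGP values below are made up for illustration:

```python
from statistics import median

# Hypothetical student growth percentiles (1-99) for one teacher's class.
sgps = [12, 35, 48, 52, 55, 61, 70, 88, 99]

# Rating on the median growth percentile (MGP) means a single exceptionally
# high or low score barely moves the result.
mgp = median(sgps)  # 55
```

Unlike an average, the median here would be unchanged even if the top student's SGP rose from 88 to 99, which is why MGP is preferred for summarizing a class.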

For additional information, please see the RAND report on student growth percentiles.

Value-Added Modeling

What it is: A value-added model (VAM) measures the impact a teacher has on student achievement regardless of other factors that affect achievement (such as past schooling, influence of peers, and family environment). VAM uses statistical methods to account for a student's prior academic achievement and predict what his or her academic achievement should be at the end of the year. From this, how much higher or lower a student performs can be attributed to the added value of the teacher. The difference between each student’s predicted and actual score is averaged to determine a teacher’s VAM.

What it does: This measure provides an apples-to-apples comparison across teachers based on students’ improvement over time, allowing for relative judgments and comparisons of educators with one another.

What it doesn't do: VAM does not provide any information on student achievement relative to grade-level standards and is not an absolute indicator of effectiveness. Additionally, value-added modeling can be challenging to communicate and requires purposeful communication and training.
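The averaging step behind a VAM score can be sketched with hypothetical numbers; in a real model, the predicted scores are estimated statistically, controlling for factors outside the teacher's control:

```python
# Hypothetical predicted vs. actual end-of-year scores for one teacher's
# students (illustration only, not real district data).
predicted = [72.0, 65.0, 80.0, 58.0]
actual = [75.0, 63.0, 86.0, 60.0]

# The teacher's value-added estimate: the average of (actual - predicted).
residuals = [a - p for a, p in zip(actual, predicted)]
vam = sum(residuals) / len(residuals)  # (3 - 2 + 6 + 2) / 4 = 2.25
```

A positive average means the teacher's students, on the whole, outperformed their statistical predictions; a negative average means they underperformed them.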


VAM can be challenging to implement because of its complex scoring and the negative reactions the model has drawn over the past few years.

Districts that are interested in using VAM should ensure they appropriately train teachers to understand the measure and give educators time to familiarize themselves with the model. For example, Lubbock ISD collected VAM data for a few years before linking scores to compensation; this allowed educators to learn about their scores before the measure affected their pay.

Student Learning Objectives

What it is: Student learning objectives (SLOs) are planned goals for what students will learn over a given period of time. A teacher sets goals based on the foundational student skills developed through their curriculum. Once the goal is set, teachers undergo a process of monitoring progress, evaluating success, reflecting and revising instruction.

What it does: SLOs provide a measure of student growth based on goals established by educators, which helps educators understand the impact of their practice and refine their instruction. Educators can use standardized assessments or performance-based exams, which allows non-tested grades and subjects to also set goals.

What it doesn't do: SLOs do not allow for objective comparison across students or classrooms because goals are set subjectively. Additionally, setting rigorous SLOs can be challenging so educators should be provided with training, rubrics and examples.

For additional information on using SLOs, see the T-TESS website and guidance provided by the Reform Support Network.

Administrator Observation

A critical piece of a strategic evaluation system is understanding teacher performance through administrator observation. Teacher performance is defined by a rubric that details the teacher and student behaviors of excellent teachers and describes performance along a continuum for each indicator. To ensure that observations are accurate and fair, administrators must be normed and consistent in their interpretations of teacher practice against the observation rubric. This consistency is called inter-rater reliability, and it must be maintained through ongoing administrator training, such as norming on videos of teacher lessons and through instructional rounds.
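One simple way to quantify inter-rater reliability is an exact-agreement rate between two observers scoring the same lesson. Districts typically use more formal statistics, but this hypothetical sketch illustrates the idea:

```python
# Hypothetical ratings (1-5) from two trained observers scoring the same
# lesson across eight rubric indicators.
observer_a = [3, 4, 4, 2, 5, 3, 4, 3]
observer_b = [3, 4, 3, 2, 5, 3, 4, 4]

# Exact-agreement rate: the fraction of indicators rated identically.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
agreement_rate = agreements / len(observer_a)  # 6 of 8 -> 0.75
```

A district might set a threshold (for example, agreement on a large share of indicators, or ratings within one level) that evaluators must meet during norming before conducting live observations.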

A strategic evaluation system should include a minimum of two informal coaching observations and one formal summative observation per semester. This can be reduced to one coaching observation and one summative observation per year if a teacher received a proficient rating or higher within the past two years.

For additional details on how other districts and states weight the observation component, see the following resource.

Student Perception Survey

In addition to student growth and observations, student perception surveys are an important component of a strategic evaluation system. The Measures of Effective Teaching (MET) Project found that student surveys of teacher performance correlated more strongly with a teacher’s students’ academic success than classroom observations did. The research found that student surveys not only provided an accurate picture of teacher performance that confirmed the results of observations and student assessments, but also provided helpful feedback teachers could use to improve their instructional practice. While there are many providers of student perception surveys, two national providers are often used by districts that have implemented student surveys: Tripod and Panorama.

For additional details on how other districts and states weight the student voice component, see the following resource.

What are the Pathways to Strengthening a Multiple Measure Evaluation System?

If you are considering strengthening your district's evaluation system, consider these reflection questions to pinpoint which aspect of your system needs the greatest attention.

There are two pathways for strengthening a district's evaluation system:

  1. Strengthening an existing multiple measure evaluation system
  2. Implementing a new strategic evaluation system

Pathway 1: Strengthen T-TESS

T-TESS has three required components: (1) a goal-setting and professional development plan, (2) teacher observation, and (3) a student growth measure, all of which factor into defining the effectiveness of an educator evaluated under the system. On the surface, these three components are in line with best practices across the country, and the tools and rubrics associated with them are thoughtfully developed. However, the implementation of these components, and the allowable variance of metrics within each component, has resulted in a lack of true differentiation of teacher effectiveness. Recent data from TEA shows that more than 75% of educators are rated proficient or higher under the system, and in a cohort of eight ISDs supported by Best in Class in this work, 89% of teachers are rated proficient or higher.

There are two primary implementation areas within T-TESS that cause a lack of differentiation in teacher effectiveness: (1) the student growth component does not require an objective measure of student growth or performance, and (2) administrator observation scores are not well calibrated and are generally inflated. Additionally, T-TESS does not require a student voice (survey) component, which Texas and national research have shown to correlate strongly with student outcomes.

Districts that want greater differentiation in their teacher effectiveness ratings, but do not want to go through the process of creating a new evaluation system, can strengthen their existing system by focusing on the following three areas:

  1. Require the student growth component to be an objective measure of student performance that is based on a locally chosen assessment;
  2. Implement systems and processes to ensure principals are appropriately trained and calibrated on the T-TESS rubric and domains, and build an inter-rater reliability safeguard into the administrator's evaluation to reduce subjectivity in observation rounds;
  3. Create a new component that is focused on student voice by implementing a student survey of teacher effectiveness in grades 3-12.

Benefits and Challenges: There are benefits and challenges any district will experience when pursuing the route of strengthening T-TESS as it currently exists:


Benefits:

Familiarity with existing rubric – With the T-TESS rubric staying in place, teachers/administrators will not have to learn new metrics and will have a clear understanding of what they are being evaluated on.

Technology and other systems (human capital, recruitment, payroll) already in place – the back-end systems to support an evaluation are often overlooked, but they play an integral part in the ability of a system to be efficient, user-friendly, and cost-effective – and an enhancement of T-TESS would likely require little change to existing systems.

Evaluator certification process already in place – it is important for the district to have a process to deem when an administrator is able to evaluate a teacher, and this function already exists in T-TESS.


Challenges:

Strong messaging around WHY – it will be imperative for districts to "sense-make" on why making changes to the existing system is best for the district and its educators. This is always a challenge when changing an evaluation system and is a critical component of successful development and implementation.

Teacher ratings may shift downward – districts that strengthen T-TESS and focus on more rigorous principal calibration for observations and on objective student data are likely to see teacher ratings shift downward, which is more reflective of true performance. This shift will have to be managed and communicated clearly, while offering targeted professional development so all teachers can reach proficiency.

When this pathway is the right choice: Strengthening T-TESS is the right choice for a district if there is a belief that the components of T-TESS are aligned to the district's priorities and the tools created are solid, yet implementation of the components is resulting in an inaccurate distribution of teacher effectiveness. There is still a significant amount of work and engagement required to implement this option with fidelity, but the “shock” to the system of making this change is not as dramatic as it would be when starting completely from scratch.

To understand the process for strengthening an existing evaluation system, see the Project Conception to Launch.

Pathway 2: Develop a New Locally Approved Evaluation System

There are a handful of districts that opt not to use the state T-TESS evaluation system. They have developed their own systems in-house and have had them approved by TEA. This pathway requires a district to meet the state minimum requirements as outlined in T-TESS, but it provides the flexibility to craft a system that is unique to that district, which may allow the district to take advantage of opportunities or address challenges the current system is simply not nimble enough to accommodate.

Districts choosing this pathway have many more decision points ahead of them, as well as required types of engagement (district action committees, local board of trustees, etc.), and since the system would not be created by the state, each of the tools and rubrics would need to be developed. Furthermore, the supporting technological systems need to be a focus, as they must support the new evaluation system. The time and resources required to develop a new evaluation system are significant and not available to most districts. Districts that created new strategic evaluation systems were supported through various channels: new leadership, internal teams dedicated to the work, external contractors, and grant funding.

Benefits and Challenges: There are benefits and challenges a district will experience when pursuing the route of developing a new evaluation system.


Benefits:

Clarity of District Vision – developing a new system allows a district to clearly define its vision of excellence in teaching and leadership while also clearly articulating its theory of change about educator development and retention.

Chance to reset expectations of performance – With a new system and new accompanying tools/rubrics, a district can start fresh on how varying levels of teacher performance manifest themselves in the classroom. Also, planning calibration training may be easier since each administrator is starting from the same place.


Challenges:

Strong messaging around WHY – there are not many districts in Texas with a local evaluation system, so it is imperative that the district have a clear communication plan for why this change is important and how it will benefit educators and students.

Timeline – developing a new evaluation system takes time and involves nearly every department within a school district.

Requires school board approval – new systems require passage by the Board of Trustees, which adds a layer of complexity since trustees hold elected political positions that could change hands over the course of the system's development.

Develop technology/systems to support - the supporting technological systems need to be a focus, as they must support the new evaluation system in place and be able to interface with existing systems that run in parallel.

Develop evaluator certification process – within the creation of an evaluation system, a district would also have to create a process/protocol for certifying that administrators are qualified to evaluate teachers under the new system – without this component, confidence in the system from teachers would be extremely low.

Develop training for new system – As with any new system or process created by a district, socializing that system with staff and training users on its functionality is critical. This is also not a short process, but one that is paramount to successful implementation and understanding.

When this pathway is the right choice: Creating a new strategic evaluation system is a heavy lift, but one that would be in the best interest of school districts that do not believe their vision for excellence can be clearly articulated through the existing state system (even with the enhancements described in the “strengthening” pathway) and that want to make a strong statement to their community and educators about how they plan to define excellence in the classroom. Furthermore, if a district is planning to make wholesale changes to other foundational systems, there is an opportunity to align a new evaluation system to those changes.

To understand the process for creating and implementing a new evaluation system, see the Project Conception to Launch.