Thursday, February 9, 2012

What does "student growth data" mean, anyway?

Education, 2/10/12 8:00 am

House Full Committee, House Hearing Rm A, John L. O'Brien Building, Olympia, WA
Work Session: Use of Student Growth Data in Educator Evaluations.


Dear advocates,

So just what DOES “multiple measures of student growth” mean?

It means evidence that a student is learning. Not just a single test score, from a single day. It means how much a child learned between two points in time. It can include student work, unit assessments, benchmark exams and state assessments, even AP and SAT scores.
Why is this being discussed? Because our new evaluation system is focused on growth: Are the kids learning? Is this particular educator helping them learn? How well are they doing it?

The House Education Committee is holding a work session Friday 2/10 on “student growth data” that should be worth watching (you can view live or tune in later) on TVW. It will review work of the Teacher and Principal Evaluation Pilot. A TPEP taskforce just wrapped up work around student growth data, perception survey data and evaluator training and support. The TPEP steering committee will meet Monday and decide on recommendations, based on what the task force has to say. This in turn may influence the evaluation bills.

Current law makes the use of student growth data optional; but without that data, you have a purely subjective evaluation. So one of the evaluation bills, SSB 5896, proposes requiring use of “multiple measures of student growth” … or in layman’s terms, evidence that a student is learning.

Government relations coordinator
Washington State PTA

COUNTDOWN TO FOCUS DAY: February 20, march and rally on the steps. REGISTER TODAY
Subscribe to Washington State PTA’s Action E-List


  1. To say that "student growth data" (from test scores) is objective is highly misleading. Research shows that teacher evaluations based on student test scores can be misclassified 40-60% of the time. Such ratings can vary from year to year, class to class, and test to test. They even change if you use a different statistical model to analyze them. The use of standardized tests is no better than tossing a coin. This is both unfair and damaging to teachers and students.

    Even if other factors are considered alongside test scores, these more "subjective" measures will fall by the wayside when people can simply focus on a so-called "hard" number.

    This, of course, does not mean we shouldn't strive to improve teacher evaluation and quality — for instance, by using proven tools such as structured observation, analysis of students' work, and constant feedback.

  2. Student growth data does not refer solely to test scores, and it is misleading to cast it in that light.

    State assessments only measure some core academic areas (reading, writing, math and science) and in some years. The conversation today is around how to use the range of student growth data available responsibly, with integrity, to help all educators better understand where they are and are not succeeding, and ultimately to ensure all students have the opportunity to master the standards they need to transition into the working world or into a training or post-high school academic program.

    If students have a struggling teacher, they may not have that opportunity. If they are in a school where educators do not get feedback and support, they may not get that opportunity.

    And while use of state data may not always be applicable to teacher evaluations, it is applicable to principal evaluations.
    The pilot sites are using a variety of data points; that is what state law calls for and what both evaluation bills currently being considered support. Multiple measures is key.

    You can learn more about the Teacher and Principal evaluation pilots here:

    You can learn more about recommendations from the TPEP task force on use of student growth data here:

    - Ramona Hattendorf, WSPTA gov't relations

  3. From what I can gather on the TPEP website, according to 6686, student growth "means the change in student achievement between two points in time". This can include classroom, school, district, or state assessments. This seems to imply that such assessments will be some kind of standardized test. Members of the TPEP committee acknowledged that such tests are error-prone. They also made remarks indicating there are still issues to be worked out about how to combine quantitative measures/test scores with other measures in an evaluation system (issues that are generally not resolved yet, even in theory). TPEP has been a great first step in improving teacher/principal evaluation. 5895 encourages this work to continue. 5896 pushes us in a direction we're not ready for and may not ever want to go.

  4. "Student growth data does not refer solely to test scores and it is misleading to cast it in that light."

    ...but that's exactly what it's going to end up being. Everything that Senator Tom has said leads inescapably to that conclusion. You can try to spin this as a way to help teachers all you want, but you don't have to be a weather vane to know which way the wind is blowing.

  5. What is also concerning is that none of the TPEP programs are claiming to use VAM, which is the specific statistical technique that is ostensibly _designed_ to estimate a teacher's contribution to student growth on test scores. Never mind that VAM has its own major problems — it is at least designed to tell us a teacher's role in student growth. So what we're promoting with the now-passed evaluation bill is not even designed to infer a teacher's contribution to student growth. Effectively, this means we're just going to throw around numbers that look objective but have no statistically valid basis in reality. Considering such data, when available and within context, as a hint prompting more in-depth evaluation might be reasonable. Mandating it as a major part of evaluation is bad policy.

    Professor Bruce Baker goes into greater detail on this at