Can Someone Explain To Me Why We're Still Talking About Value Added Modeling?


Even under the best of circumstances, value-added modeling (VAM) is not a reliable tool for identifying teacher effectiveness.

Recently, in Washington, DC - a leader in using VAM to inform hiring and merit-pay decisions - the formula that attempts to quantify the added value was applied incorrectly. As a result, "nearly 10 percent of the teachers whose work is judged in part on annual city test results for their classrooms" were given inaccurate ratings. One of the affected teachers was fired based on the flawed results.

In the upcoming year, the new Common Core-aligned tests are coming. The tests aren't complete yet, but as we've seen in the few places that have already used Common Core-aligned exams, pass rates have been driven as much by the politics of setting the cut score as by any real gains or losses by learners. As we approach the go-live date, the likelihood of technological glitches marring the rollout seems high. In other words, scores on an untested set of tests are likely to be affected by the vagaries of refining a new evaluation instrument, the political calculations of determining who passes, and technology issues. And, as we saw in DC, sometimes the evaluators just screw it up and get the math wrong. Or, as we saw in Indiana, sometimes the formula gets adjusted for political reasons.

None of these issues has anything to do with student learning or teacher effectiveness, yet the results will still be used to assess student learning and teacher effectiveness in supporting that learning. Against a backdrop of poorly conceived policy, weak implementation of that policy, and political manipulation of the results, it's increasingly difficult to see how anyone can claim that VAM effectively pulls adequate signal from the noise.
