Well, that didn’t take long.
Last month, preliminary results from a study sponsored by the Gates Foundation were released that seemed to support the validity of using a “value added” system to evaluate teacher quality.
This month, someone else looking at the same data came to a very different conclusion.
Economics professor Jesse Rothstein of the University of California, Berkeley, reviewed the Kane-Cantrell report and said its analyses served to “undermine rather than validate” value-added-based measures of teacher evaluation.
“In other words,” he said in a statement, “teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations.”
“A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective.”
So, who’s right? This being education research, the answer is probably neither and both.
However, there’s one question that never seems to be addressed in all this research on teacher quality.
Are constant waves of standardized, mostly multiple choice exams – accompanied by a narrow school focus on test prep – the best way to improve student learning?
I’m gonna vote no.