Really Bad Vision

This is probably one of the most depressing ideas I've seen in a while. The Gates Foundation wants to spend up to $6 million to develop “literacy courseware”.

More specifically, it plans to use that small piece of Bill's pocket change “to entice publishers, developers, and entrepreneurs to propose the most innovative digital solutions for engaging, personalized software that helps students with reading and writing”.

Notice what's missing from that enticement list? No mention of educators.

The request for proposal says this is part of the Foundation's “vision” for education, something they call “personalized learning”.

My vision of their vision looks more like this:


Still Not Much Value Added

Well, that didn’t take long.

Last month preliminary results from a study sponsored by the Gates Foundation were released that seemed to support the validity of using a “value added” system to evaluate teacher quality.

This month, someone else looking at the same data came to a very different conclusion.

Economics Professor Jesse Rothstein at the University of California, Berkeley reviewed the Kane-Cantrell report and said that its analyses served to “undermine rather than validate” value-added-based measures of teacher evaluation.

“In other words,” he said in a statement, “teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations.”

“A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective.”

So, who’s right? This being education research, it probably means neither and both.

However, there’s one question that never seems to be addressed in all this research about teacher quality.

Are constant waves of standardized, mostly multiple choice exams – accompanied by a narrow school focus on test prep – the best way to improve student learning?

I’m gonna vote no.


Not Much Value Added

Preliminary results from a “$45 million study of teacher effectiveness” find that “growth in annual student test scores is a reliable sign of a good teacher”.

The central finding indicates that teachers with high “value-added” ratings are able to replicate those gains in multiple classrooms and in multiple years.

Other findings suggest that teachers with high “value-added” ratings are able to help students understand math concepts or demonstrate reading comprehension through writing.

The final report isn’t due for about a year but this small glimpse offers two major reasons to seriously question the research.

One is that the study was paid for by the Gates Foundation, “a prominent advocate of data-driven analysis”.

And two is that the study rests on a foundation of state standardized tests that produce an extremely narrow view of student learning.

Which would be fine if we want kids who read at a minimal level and are adept at performing basic arithmetic algorithms.