The Wrong Way to Evaluate Schools


It’s May. Time for spring flowers, Mother’s Day, summer vacation planning, and the infliction of standardized tests on most kids in this country.

Along with the time-honored (unfortunate) tradition of posting bogus “best of” high school rankings.

First out of the gate: the lists from US News and World Report, a publication that hasn't been relevant since the previous century. If even then.

So, how did their editors assemble these definitive lists? What factors do they consider most important in determining which high schools are “best”?

The methodology for identifying the top-ranked Best High Schools was developed with a core principle in mind: The top schools must serve all students well and must produce measurable academic outcomes that support this mission.

Except that those “academic outcomes” are overwhelmingly based on standardized tests, which certainly do not “serve all students well”. Test-based measures total 80% of a school’s score.

Thirty percent is derived from the number of 12th-graders who took at least one AP or IB course and who received a “qualifying score” on those tests. Another 10% comes from the number of seniors who had taken four or more of those classes.

Of course, this approach assumes that most students are planning to attend college, that college is the best choice for all students, and that the AP/IB curricula (with a standardized test that determines the score) best serve the students’ needs.

None of that is even close to being true for every student in any school. But rankings like this put pressure on states and schools to organize their programs around that idea, to the detriment of the kids who would be better served by something else.

Another 40% of US News’ data comes from the state standardized testing programs, with half of that based on the “difference between how students performed on state assessments and what U.S. News predicted based on a school’s student body”.

That “predicted” part is suspicious enough, but they also seem to be ignoring the differences among fifty-one separate testing systems (counting DC). The testing programs in California and New York are very different from those in Mississippi and Missouri, and thus not at all appropriate for any kind of comparison like this.

Finally, they toss in 10% for “underserved student performance”, which is very vaguely defined, and another 10% for “on-time” graduation rates.

There is much wrong with attempting to rank the “best” high schools in the country in the first place. But weighting standardized tests so heavily (or using them at all) means these lists can never be reflective of school quality.

Study after study has shown that standardized test scores, and thus the rankings that depend on them, correlate strongly with the socio-economic status of students’ families, not with academic achievement or ability. You could have produced almost the same ranking using real estate prices instead.

As anyone who has taught for any length of time can tell you, schools are complicated places, filled with students who have many different interests and skill sets. Evaluating both the kids and the school overall is a messy process, and not one that can be done without actually spending some time in each.

The best schools provide a wide variety of experiences for as many kids as possible, and they create that environment in collaboration with students, parents, and the wider community. That kind of actual quality cannot be reflected in these “best of” lists.

Ok, next up this spring is probably Jay Mathews’ “Challenge” Index. Which is worse than this crap from US News. If that’s possible.

The photo is a sign posted along a desert trail in Tucson, Arizona. I didn’t see it the first time and went the wrong way anyway. Which probably says something about me. :)
