Challenging Clickbait

Last week the education RSS feed from the Washington Post was spammed with at least seven stories about Jay Mathews’ “challenge” index. Of course, they were all written by Mathews, who never misses an opportunity to tell you how he created this annual list of the “most challenging” high schools in the US.

So, these posts were not so much news as general clickbait.

In one of the articles, Mathews lets us know that this year is the 30th anniversary of the day this idea first popped into his head. Next year will be the 20th year since the Post and the then print-only Newsweek magazine first published his list.

And I’ve been ranting about it in this space for almost three-quarters of that time. So I’m not sure what’s left to be said about this simplistic, headline-grabbing mess. But I’ll say it anyway.

For those not familiar with the “challenge” index, here’s how it works: for each high school that will send him the stats, Mathews adds up the number of Advanced Placement (AP) and International Baccalaureate (IB) tests taken and divides it by the number of seniors who graduate. Any school with a score of 1 or higher goes on the list.
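That whole formula fits in a couple of lines of code, which rather makes the point about how simplistic it is. Here is a minimal sketch of the arithmetic as described above; the school names and numbers are hypothetical examples, not real data.

```python
# The "challenge" index: AP/IB tests taken divided by graduating seniors.
def challenge_index(tests_taken, graduating_seniors):
    return tests_taken / graduating_seniors

# Hypothetical schools: (tests taken, graduating seniors)
schools = {
    "School A": (450, 300),
    "School B": (120, 250),
}

# Any school scoring 1.0 or higher makes the list.
# Note what's absent: pass rates, or anything else about the school.
listed = [name for name, (tests, grads) in schools.items()
          if challenge_index(tests, grads) >= 1.0]
print(listed)  # -> ['School A']
```

Note that a school where every senior takes (and fails) two exams would score a 2.0 and comfortably make the list.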

What? You were expecting more? Maybe like incorporating the number of students who actually passed the exams? Or other factors that go into making a successful high school beyond pushing kids to take more tests?

Mathews seems to think that his index has improved American education by pushing more schools into adopting the AP curriculum (after complaints a few years back, he grudgingly included the IB program). Which assumes that those very limited programs, largely dictated by colleges and framed around the idea that college is the only goal of learning in K12, are appropriate for every student. It also ties nicely into Mathews’ love of charter schools, especially KIPP, many of which tightly embrace AP.

Then there’s the general idea in the public mind that this is a ranked list of the “best” high schools. I know, both Mathews and the Post will say that’s not the intent. They simply want to spotlight the schools that are “working hardest to challenge students from all backgrounds”.

However, that’s not how it works in the real world. Since the start, schools (especially those near the top of the list), local media, public school critics, and others have trumpeted this “challenge” index as THE list of top US high schools.

For the Post, that also helps sell newspapers and magazines, and in the internet age, generates clicks.

The Case Against STEM

Listen to an education reformer for more than five minutes and you’re likely to hear about STEM: science, technology, engineering, and math. Students, we are told, must study more of these topics, otherwise they will be unable to compete in the world and our economy is doomed. Or something like that.

However, a columnist for the Washington Post says that our obsession with STEM education is not only based on a “fundamental misreading of the facts”, it “puts America on a dangerously narrow path for the future”.

Innovation is not simply a technical matter but rather one of understanding how people and societies work, what they need and want. America will not dominate the 21st century by making cheaper computer chips but instead by constantly reimagining how computers and other new technologies interact with human beings.

The current overemphasis on STEM is largely related to standardized tests, the core of most ed reform efforts. US students generally score behind many other countries on one particular international testing program, “trailing nations such as the Czech Republic, Poland, Slovenia and Estonia”. STEM advocates declare that our students must be immersed in math and science in order to return the country to the top of the world heap, where we belong.

Except that the US has never been at the top of that particular world heap.

In truth, though, the United States has never done well on international tests, and they are not good predictors of our national success. Since 1964, when the first such exam was administered to 13-year-olds in 12 countries, America has lagged behind its peers, rarely rising above the middle of the pack and doing particularly poorly in science and math. And yet over these past five decades, that same laggard country has dominated the world of science, technology, research and innovation.

Then there’s the matter that even the companies and organizations considered most innovative want their employees to come with “skills far beyond the offerings of a narrow STEM curriculum”.

Finally, the writer makes the case that a broad-based, liberal education – one that includes science and math in balance – would be better for both students and the country.

This doesn’t in any way detract from the need for training in technology, but it does suggest that as we work with computers, the most valuable skills will be the ones that are uniquely human, that computers cannot quite figure out – yet. And for those jobs, and that life, you could not do better than to follow your passion, engage with a breadth of material in both science and the humanities, and perhaps above all, study the human condition.

Collecting Dots

In a recent, very short post, Seth Godin observes that it’s very easy to collect dots but not so easy to make some meaning from them. Of course, in his analogy dots are data, and learning to connect them in meaningful ways takes a lot of work.

Here in our overly-large school district (and elsewhere I’m sure), teachers are spending an increasing amount of class time collecting dots, but what happens after that?

Why, then, do we spend so much time collecting dots instead? More facts, more tests, more need for data, even when we have no clue (and no practice) in doing anything with it.

And there’s one of the big problems with obsessing over data. It’s useless, and potentially harmful, unless someone has the skills to make meaning from it, skills that Godin says are “rare, prized and valuable”.

However, as with so many other parts of the American education system, we expect every teacher to either come to the table understanding how to connect dots, or learn it in their spare time.

Dots that represent some very complex and highly variable data: kids and their learning.

Slightly off topic: a good way to look at dots/data by the wonderful cartoonist (and visual philosopher) Hugh MacLeod.

Interpreting the Data

This past week the owner of the Tesla electric car company got into a fight with a reporter for the New York Times over a somewhat negative article about the reporter’s road test of the vehicle. To prove his point that the reporter had not conducted a fair test, the owner released all the telemetry data the car had collected during the trip.

Which might have been the end of things except that a writer for the Atlantic looked at the same data and came up with a different interpretation. And the Times’ own public editor weighed in with analysis looking at both sides and not necessarily supporting either of them.

Although I saw a little of this story pass by in my info stream, the larger point of all this didn’t really register until reading David Weinberger’s post yesterday.

But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk [Tesla owner] produced the numbers thinking that they’d bring us to accept his account. Greenfield [the Atlantic reporter] went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.

Electronic data tracking on a car – where it went, how fast it got there – yields very straightforward numbers and, in this case, still produces different interpretations of the meaning of that information.

Now I’m sure the Tesla is a very complex piece of technology. But it’s not nearly as complicated as understanding and managing the growth and learning processes of a human being, especially kids in K12 schools.

However, using much less precise measuring systems than those in the car, we collect far fewer data points on each student here in the overly-large school district during each year.

We then accept those numbers as a complete and accurate representation of what a student has learned and where they need to go. That very narrow information stream also leads to even more narrow judgements on schools (success/failure) and now we’re starting to use the same flawed data to assess the quality of teachers.

In his post, Weinberger is celebrating the open and public way in which the dispute between Tesla and the Times is being played out, with many different parties lending their voice to the discussion of how to interpret the data.

How often do we ask even the subjects of our testing to analyze the data we’ve gathered from them? Why are they not included in the development of the assessment instruments? When do we include at least a few of the thousands of other factors that affect student learning in our interpretations?

I’ve ranted before in this space about the increasing amount of resources being poured into data collection and analysis here in the overly-large school district (and elsewhere). But it’s the absolutist approach to the analysis of those numbers that may be an even larger disservice to our students than wasting their time.