This past week the owner of the Tesla electric car company got into a fight with a reporter for the New York Times over a somewhat negative article about the reporter's road test of the vehicle. To prove his point that the reporter had not conducted a fair test, the owner released all the telemetry data the car had collected during the trip.
Which might have been the end of things, except that a writer for the Atlantic looked at the same data and came up with a different interpretation. And the Times' own public editor weighed in with an analysis looking at both sides and not necessarily supporting either of them.
Although I saw a little of this story pass by in my info stream, the larger point of all this didn’t really register until reading David Weinberger’s post yesterday.
But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk [Tesla owner] produced the numbers thinking that they’d bring us to accept his account. Greenfield [the Atlantic reporter] went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.
Electronic data tracking on a car – where it went, how fast it got there – yields very straightforward numbers, and yet, in this case, it still produced different interpretations of what that information means.
Now I’m sure the Tesla is a very complex piece of technology. But it’s not nearly as complicated as understanding and managing the growth and learning processes of a human being, especially kids in K12 schools.
Yet, using much less precise measuring systems than those in the car, we collect far fewer data points on each student here in the overly-large school district each year.
We then accept those numbers as a complete and accurate representation of what a student has learned and where they need to go. That very narrow information stream also leads to even more narrow judgements on schools (success/failure) and now we’re starting to use the same flawed data to assess the quality of teachers.
In his post, Weinberger is celebrating the open and public way in which the dispute between Tesla and the Times is being played out, with many different parties lending their voice to the discussion of how to interpret the data.
How often do we ask even the subjects of our testing to analyze the data we've gathered from them? Why are they not included in the development of the assessment instruments? When do we include at least a few of the thousands of other factors that affect student learning in our interpretations?
I’ve ranted before in this space about the increasing amount of resources being poured into data collection and analysis here in the overly-large school district (and elsewhere). But it’s the absolutist approach to the analysis of those numbers that may be an even larger disservice to our students than wasting their time.