Interpreting the Data

This past week the owner of the Tesla electric car company got into a fight with a reporter for the New York Times over a somewhat negative article about the reporter's road test of the vehicle. To prove his point that the reporter had not conducted a fair test, the owner released all the telemetry data the car had collected during the trip.

That might have been the end of things, except that a writer for the Atlantic looked at the same data and came up with a different interpretation. And the Times' own public editor weighed in with an analysis that looked at both sides without necessarily supporting either of them.

Although I saw a little of this story pass by in my info stream, the larger point of all this didn't really register until I read David Weinberger's post yesterday.

But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk [Tesla owner] produced the numbers thinking that they’d bring us to accept his account. Greenfield [the Atlantic reporter] went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.

Electronic data tracking on a car – where it went, how fast it got there – yields very straightforward numbers and, in this case, still produces different interpretations of the meaning of that information.

Now I’m sure the Tesla is a very complex piece of technology. But it’s not nearly as complicated as understanding and managing the growth and learning processes of a human being, especially kids in K12 schools.

Yet, using much less precise measuring systems than those in the car, we collect far fewer data points on each student each year here in the overly-large school district.

We then accept those numbers as a complete and accurate representation of what a student has learned and where they need to go. That very narrow information stream also leads to even narrower judgments of schools (success/failure), and now we're starting to use the same flawed data to assess the quality of teachers.

In his post, Weinberger is celebrating the open and public way in which the dispute between Tesla and the Times is being played out, with many different parties lending their voice to the discussion of how to interpret the data.

How often do we ask even the subjects of our testing to analyze the data we've gathered from them? Why are they not included in the development of the assessment instruments? When do we include at least a few of the thousands of other factors that affect student learning in our interpretations?

I’ve ranted before in this space about the increasing amount of resources being poured into data collection and analysis here in the overly-large school district (and elsewhere). But it’s the absolutist approach to the analysis of those numbers that may be an even larger disservice to our students than wasting their time.

Comments


    Hello, Tim,

    RE: “We then accept those numbers as a complete and accurate representation of what a student has learned and where they need to go.”

    I’ve been struggling with this as well, both in the context of what data actually means for schools, and how data is used to drive what some people are calling “personalized learning” – IMO, a better name for the current flavor of personalized learning is algorithmically mediated learning, but that’s a separate conversation. I posted on that on the FM blog yesterday – I was unaware of the Tesla/Musk/NY Times flap until this morning.

    I would love to see both students and teachers have a voice in looking at their data and analyzing the various narratives that could be suggested by that data. If teachers were given access to their data, and had the ability to get more familiar with stats and analysis, what types of action research could arise from that?

    In today’s climate, the use of data feels more like a threat, and as you and others have pointed out, data bears an unfortunate resemblance to fact. There’s a lot of good that can come out of an intelligent use of data – complete with an awareness of the limitations of what data can tell us – but the most common use of data I see today is as a tool to support a preconceived notion.

    What are your thoughts on how we could get a student voice into the mix on the connections – or lack thereof – between the data being collected about students and the learning and growth that matters to the students? I suspect that there is a large gap between what the data measures and what the learners value; even if I’m completely wrong, it would be an interesting conversation.
