This year it’s a major topic of discussion here in the overly-large school district: how to acquire it, how to manage it, how to analyze it. A drive for generating data that’s veering very close to being an obsession.
We’ve become very good at generating data. At least certain types.
In the schools I’ve visited over the past few months, I’ve seen students in an unhealthy number of classrooms using pencils or trackpads to fill in the blanks on some kind of data-gathering instrument.
Around here, standardized tests are not just reserved for May anymore. We now have an “assessment resource tool”, a big database of questions that year-round spits out tests, sucks in student responses, and lines up all that data ready for… what?
We also spend a lot of time managing all that data: hours merging the locally generated stats with those from the many state and national exams into home-made spreadsheets and databases, to be sorted and queried and reported out in multiple variations.
Analyzing all those results, and determining what we should do about them, is an even more difficult problem.
So, what do all those numbers mean? Do they really tell us what students know about a particular subject (mostly reading and math, of course)?
I fully understand the need to regularly assess student learning, but is it possible to have too much data? Or too much of the wrong kind?
Good teachers learn very early in their relationship with their kids how to determine progress (or the lack thereof) through methods other than tests, like talking to them and observing their behavior.
But the database doesn’t seem to have any place for that information, and it certainly doesn’t carry as much weight as the test-generated numbers beyond the classroom walls.
Then there’s the matter of time. All those extra tests are replacing minutes and hours that could, and should, be used for activities that involve actual learning.
Beyond all that, one of my main concerns about swimming in all these tables and charts and graphs is that we start losing sight of where those numbers came from. The fact that, at the most basic level, those statistics represent kids.
Kids who are constantly growing and changing and who, like most of us, have their strengths and weaknesses, their good days and bad.
And the farther the data gets from the classroom, the less the people doing the analyzing seem to recognize that connection.
At some point on the way up the hierarchy, kids cease being real people and morph into simple blocks of statistics on which to build headlines and political positions.