This year it’s a major topic of discussion here in the overly-large school district: how to acquire it, how to manage it, how to analyze it. The drive for generating data is veering very close to obsession.
We’ve become very good at generating data. At least certain types.
In the schools I’ve visited over the past few months, I’ve seen students in an unhealthy number of classrooms using pencils or trackpads to fill in the blanks on some kind of data-gathering instrument.
Around here, standardized tests are not just reserved for May anymore. We now have an “assessment resource tool”, a big database of questions that year-round spits out tests, sucks in student responses, and lines up all that data ready for… what?
We also spend a lot of time managing all that data: hours merging the locally generated stats with those from the many state and national exams into home-made spreadsheets and databases, to be sorted, queried, and reported out in multiple variations.
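That merge-and-query workflow can be sketched in a few lines. This is a minimal illustration only; the student IDs, field names, and scores below are invented, and real districts would be pulling from far messier exports:

```python
# Sketch of merging locally generated assessment stats with state exam
# results, keyed by a hypothetical student ID. All names and numbers
# here are made up for illustration.
local_scores = {
    "S001": {"reading_local": 78, "math_local": 85},
    "S002": {"reading_local": 64, "math_local": 71},
}
state_scores = {
    "S001": {"reading_state": 81},
    "S002": {"reading_state": 50},
}

def merge_records(local, state):
    """Combine the two sources into one record per student."""
    merged = {}
    for sid in set(local) | set(state):
        record = {}
        record.update(local.get(sid, {}))
        record.update(state.get(sid, {}))
        merged[sid] = record
    return merged

merged = merge_records(local_scores, state_scores)

# A simple "query": flag students whose local and state reading scores
# diverge by more than 10 points -- the kind of report usually hand-built
# in a spreadsheet.
flagged = [sid for sid, r in merged.items()
           if abs(r["reading_local"] - r["reading_state"]) > 10]
```

Even this toy version makes the point: the merging is mechanical, but deciding what a flagged divergence actually means about a child is not.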
Analyzing all those results, and determining what we should do about them, is an even more difficult problem.
So, what do all those numbers mean? Do they really tell us what students know about a particular subject (mostly reading and math, of course)?
I fully understand the need to regularly assess student learning, but is it possible to have too much data? Or too much of the wrong kind?
Good teachers learn very early in their relationship with their kids how to gauge progress (or the lack of it) through methods other than tests, like talking to them and observing their behavior.
But the database doesn’t seem to have any place for that information, and it certainly doesn’t carry as much weight as the test-generated numbers beyond the classroom walls.
Then there’s the matter of time. All those extra tests are consuming minutes and hours that could, and should, be used for activities that involve actual learning.
Beyond all that, one of my main concerns about swimming in all these tables and charts and graphs is that we start losing sight of where the data came from: the fact that, at the most basic level, those statistics represent kids.
Kids who are constantly growing and changing and who, like most of us, have their strengths and weaknesses, their good days and bad.
It seems the farther the data gets from the classroom, the less the people doing the analyzing seem to recognize that connection.
At some point on the way up the hierarchy, kids cease being real people and morph into simple blocks of statistics on which to build headlines and political positions.
Photo: information overload by verbeeldingskr8 posted on Flickr and used under a Creative Commons license.
There is also a huge hole where qualitative data should be. The conversations we have with students, the observations we make in class: these need to have equal weight (at least) with the quantitative forms.
I’m right there with The Science Goddess. We’re redefining data to mean numbers, seemingly without putting up a fight. We have an obligation to push back and demand qualitative measures, evidence that the quantitative data are derived from quality assessments, and access to a variety of data management tools that enable educators to work with data in ways that go beyond charts and graphs.
Today was our second (and final) day of Semester One Exams. While I did not do an actual poll, I feel fairly confident that I was the only teacher who gave an essay-only exam. I know several teachers included essays as part of their exams, but EVERYONE I spoke with encouraged using Scantrons as much as possible. I know that there are certain things you can use multiple choice questions for; I also know many students prefer them because they are “easier.” However, I am an English teacher, and the main skill we worked on was reading something and then writing a coherent, persuasive essay about the text. The only way to assess this skill is through an essay; it seems to be exactly these skills that are being lost in the number-data obsession.
All of this data is generated to measure “progress”. I’m sure that n00b teacher (above) has trouble quantifying “coherency” and “persuasiveness”. The real metric (IMHO) is the increase (again, tough to measure) in coherency and persuasiveness, which should be measured on a per-student basis as well as at the class / school / district level.
After a few years of teaching, teachers have an intuitive feel for the velocity of progress for a particular class / year – that should be part of the equation.