Picturing 2015

In web years, Flickr is a relatively old service (it launched in 2004), and it has been far eclipsed by Facebook and Instagram in the number of pictures posted by users. However, I think it is still a great sharing community, especially for serious amateur photographers (like me) to showcase their work.

Each December, the editors at Flickr post a year in review highlighting the “photos, people and stories that captured our hearts, eyes and minds”, including their top 25 images as determined by the community. That post alone is a great collection and worth at least a few minutes to gather some inspiration for next year.

[Photo: Wedding Album]

I don’t know if this is my “best” photo of the year, but it is one of my favorites.

Along with the pictures, Flickr also posts a bunch of interesting data about the cameras photographers used to make those images. It probably won’t surprise anyone that various iPhones are the cameras of choice, their share growing steadily over the past five years. Apple devices take seven of the top 10 places (with the iPad coming in at 15 – unfortunately :-).

On the flip side of the rapid increase in smartphone pictures posted to Flickr, the charts show a steady drop in the popularity of dedicated point-and-shoot devices (down by 20% in five years), with a much smaller decline in the use of DSLRs.

Of course, this is just Flickr. It would be nice if Facebook and Instagram would release some similar stats on the devices used to post to their services. I’m willing to bet smartphones, especially iPhone models, are even more popular over there.

Because, in the end, it really is true that the best camera is the one you have with you. I just hope that in 2016 people will cut back on the selfies and do more exploring of their creative side.

More Statistical Crap

More on the statistical crap known as Value-Added Measure (VAM), the teacher evaluation system that’s supposed to incorporate the improvement in learning (aka standardized test scores) experienced by students in their classes.

This time some teachers and their local union are suing the Houston Independent School District over the way this process has been applied.

There’s not much new in their criticism of this “badly flawed method of evaluating teacher effectiveness”, one that’s already been challenged by people who actually understand the analysis of data, the American Statistical Association.

However, one highly inaccurate statement posted to the district website in support of using VAM jumped out at me:

Nothing matters more to student success than teachers.

While good teachers certainly can make a difference, there are many studies showing that a student’s socioeconomic status and parents are far more important factors in their success in school.

And, more important in this case, there’s very little credible research supporting VAM as a teacher evaluation tool or as a means of improving student learning (again, aka standardized test scores).

Statistical Crap

Speaking of data (as in the previous post), the members of the American Statistical Association (ASA) probably know something about that topic. And they recently released a statement about the value-added model (VAM) of teacher evaluation, a very popular reform among those top-level education experts we all love and respect.

In theory, VAM is supposed to measure how much value a teacher has added to the learning of their students, using standardized test scores and some complex mathematics that is supposed to exclude other “non-relevant” factors from the final numbers. Many districts and states (including Virginia, to a small degree) are using or planning to use some variation of VAM as a teacher evaluation tool and to determine continued employment, pay raises, tenure, even whether to close schools.
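To make the general idea concrete, here is a minimal sketch of the kind of regression that sits underneath most VAM variants: current test scores regressed on prior scores plus a dummy variable for each teacher, with the teacher coefficients read as “value added”. The data, column names, and effect sizes are all hypothetical, and this is nowhere near any district’s actual formula.

```python
# A minimal sketch of the general idea behind a value-added model (VAM),
# NOT any district's actual formula. All data here are simulated and the
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "prior_score": rng.normal(500, 50, n),   # prior-year test scores
    "teacher": rng.choice(["A", "B", "C"], n),
})

# The simulated "true" teacher effects are tiny relative to everything
# else in the model, in the spirit of the ASA's finding (quoted below)
# that teachers account for only a small share of score variability.
effects = {"A": 0.0, "B": 2.0, "C": -2.0}
df["score"] = (0.8 * df["prior_score"]
               + df["teacher"].map(effects)
               + rng.normal(0, 30, n))

# OLS with teacher fixed effects; C() builds the teacher dummies.
model = smf.ols("score ~ prior_score + C(teacher)", data=df).fit()
print(model.params)  # the C(teacher)[...] terms are the "value added"
print(model.bse)     # standard errors: the measures of precision the ASA demands
```

Even in this toy setup, the standard errors on the teacher terms are large relative to the effects themselves, which is exactly why the ASA insists the output be treated as estimates, not facts.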

The ASA statement is, as you might expect, very academic in its assessment of VAM, but they are still quite clear in their conclusion that this system is… how shall we put this? – statistical crap (my very non-academic interpretation of their seven-page report).

A few very relevant statements from the executive summary:

VAMs are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results.

Expertise that is quite lacking in most schools, not to mention in pronouncements from supporters of this concept.

Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. These limitations are particularly relevant if VAMs are used for high-stakes purposes.

Too many advocates of VAM treat the numbers as fact, not “estimates”, and are not open to any “possible limitations”.

And my favorites:

VAMs typically measure correlation, not causation: Effects — positive or negative — attributed to a teacher may actually be caused by other factors that are not captured in the model.

Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.

Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. [my emphasis]

In other words, teacher quality, while important, is only one factor in the very complex process of student learning, and in the far-less-than-perfect method of assessing that learning: standardized test scores.

That “majority of opportunities for quality improvement” will only come from making systemic changes to educational policies at the district, state, and national levels.
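That point above about VAM scores and rankings changing substantially under a different model is easy to demonstrate. Below is a toy simulation (hypothetical teachers and data, not any real district’s model) in which the same five teachers are ranked by two specifications that differ only in whether they control for a crude socioeconomic proxy; the orderings routinely disagree.

```python
# A toy demonstration (all data hypothetical) of the ASA's sensitivity
# warning: the same teachers, ranked by two slightly different model
# specifications, can come out in a different order.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for t in "ABCDE":
    prior = rng.normal(500, 50, 40)   # prior-year scores
    ses = rng.normal(0, 1, 40)        # crude socioeconomic proxy
    # No true teacher effect at all: scores depend only on prior
    # achievement, SES, and noise.
    score = 0.8 * prior + 10 * ses + rng.normal(0, 30, 40)
    rows.append(pd.DataFrame({"teacher": t, "prior": prior,
                              "ses": ses, "score": score}))
df = pd.concat(rows, ignore_index=True)

def ranking(formula):
    """Fit the model and rank teachers by their estimated 'value added'."""
    params = smf.ols(formula, data=df).fit().params
    effects = {k: v for k, v in params.items() if "teacher" in k}
    return sorted(effects, key=effects.get, reverse=True)

# Specification 1 ignores SES; specification 2 controls for it.
print(ranking("score ~ 0 + prior + C(teacher)"))
print(ranking("score ~ 0 + prior + ses + C(teacher)"))
```

Since the simulated teachers add nothing at all, any ranking the models produce is pure noise plus omitted-variable bias, yet each specification happily produces one.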

Howl About These Numbers Instead

In the article that triggered the previous rant, both the writer and the subject, Bill Gates, refer to the frequent howl of politicians and corporate types that students in US schools have fallen far behind their counterparts in other countries. The line has been repeated so many times that it has become accepted as fact.

Except that Alfie Kohn has some evidence-based arguments to use in response to those claims, which are far more clichéd talking points than truth.

As always, his essay is very good, well worth saving for your next discussion with someone from the all-testing, all-the-time fan club.

However, this is probably the most important point Kohn makes about improving student achievement in the US, no matter how you define that term.

4. Rich American kids do fine; poor American kids don’t. It’s ridiculous to offer a summary statistic for all children at a given grade level in light of the enormous variation in scores within this country. To do so is roughly analogous to proposing an average pollution statistic for the United States that tells us the cleanliness of “American air.” Test scores are largely a function of socioeconomic status. Our wealthier students perform very well when compared to other countries; our poorer students do not. And we have a lot more poor children than do other industrialized nations.

More than 20% of American children are living in poverty, a rate that puts the US 34th out of 35 industrialized countries, the same ones frequently used in test score comparisons.

That ranking should be far more upsetting to politicians and corporate types than the numbers generated from largely irrelevant multiple choice tests.

Racking Up The Big Numbers

This morning I tweeted in frustration about the number of online state standardized tests we give here in the overly-large school district.

[Screenshot of the tweet]

That comes from the daily report the testing managers in IT sent us today (and send every school day this time of year), along with the fact that we had completed more than 220,000 online tests so far in this testing season.

The really depressing part of those statistics is that we have “only” 180,000 students in K-12, and some of them don’t take the SOLs* or still take paper-and-pencil versions of the tests.

Now just imagine all the practice and non-SOL multiple choice tests taken during the rest of the school year.

Sad.


* SOL = Standards of Learning, Virginia’s infamously named state-wide standardized tests.