The Leadership Squash

Thanks to @science_goddess for pointing me to this NPR piece on The Myth of the Superstar Superintendent in which they report on a study showing no correlation between student achievement and who is leading their school district.

However, I think their conclusion is far too simple. It’s foolish to say that the leaders of a school system don’t matter. As in any other field, it all depends on their leadership style.

“A good superintendent empowers leading visionary principals and teacher leaders at the school,” she [education writer and author Dana Goldstein] says. But what actually happens too often is that superintendents “squash interesting ideas, so you’d have principals afraid to try something new, afraid to try something innovative.”

Unfortunately, with the many layers of super-level leadership we have here in the overly-large school district, there’s a lot of that squashing going on.

What Does Your “Research” Really Say?

An essay by an English teacher posted in the wonderful Post blog The Answer Sheet¹ offers “Seven things teachers are sick of hearing from school reformers”.

It’s all good, worth your time to read and pass along, and she probably could have added eight or ten more. But this is one that really stands out.

4. Don’t tell us “The research says…” unless you’re willing to talk about what it really says.

It’s not that we don’t care about research, but that most often when research is mentioned in a school context, it is used to end legitimate conversation rather than to begin it, as a cudgel to silence us rather than an opening to engage us constructively. Very often when confronted with a “research says” claim that I find dubious or irrelevant, I ask for a citation and get a blank or vaguely menacing stare, or some invented claim about the demands of the Common Core, or a single name, “Marzano,” as though he completed all instructional research.

Research on children and learning is difficult to do right, and the best you can say about almost all studies in this area is that they are incomplete. However, at the very least, those education “experts” pontificating on research should be required to read past the executive summary.

Oh, and I’m one more teacher who’s tired of “Marzano” being cited as the solution to everything.

  1. And there isn’t much wonderful in the Washington Post these days, on paper or the web.

Everybody’s Wild About Data

That’s especially true in the education business, which, if you look closely, probably produces the most unreliable data you could possibly get.

But that doesn’t stop politicians, media, and “experts” from latching onto polls, studies, and research and using them to sell their pet reform plans. Often without question and based only on a read of the executive summary.

For the rest of us who want to know a little more about the data before accepting the headlines, The Atlantic offers some advice on How to Read Education Data Without Jumping to Conclusions.

As readers and writers look for solutions to educational woes, here are some questions that can help lead to more informed decisions.

1. Does the study prove the right point? It’s remarkable how often far-reaching education policy is shaped by studies that don’t really prove the benefit of the policy being implemented. 

2. Could the finding be a fluke? Small studies are notoriously fluky, and should be read skeptically.

3. Does the study have enough scale and power? …the million-dollar question is whether the study was capable of detecting a difference in the first place.

4. Is it causation, or just correlation? Correlation … does not indicate causation. In fact, it often does not.
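The second point above is easy to demonstrate. Here’s a quick simulation (all numbers invented for illustration, plain standard-library Python) of running many small “studies” comparing two groups when the intervention has no real effect at all:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def fake_study(n):
    # Both groups are drawn from the SAME score distribution:
    # the "intervention" has no real effect whatsoever.
    control = [random.gauss(70, 10) for _ in range(n)]
    treatment = [random.gauss(70, 10) for _ in range(n)]
    return mean(treatment) - mean(control)

# Run 1,000 tiny studies with only 10 students per group.
trials = [fake_study(10) for _ in range(1000)]

# Count how many show a "gain" or "loss" of 5+ points by chance alone.
big_flukes = sum(1 for d in trials if abs(d) >= 5)
print(f"{big_flukes} of 1000 no-effect studies showed a 5+ point difference")
```

In runs like this, a sizable share of the no-effect studies show a five-point swing in one direction or the other purely by chance, which is also why question 3 matters: with bigger groups, that fluke rate drops sharply.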

The fact that too many people don’t know the difference between those two concepts in number 4 is a direct indictment of the K12 math curriculum. Doesn’t say much for those statistics courses that many educators are required to take during their advanced degree programs.
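And to make number 4 concrete: a hidden third factor can manufacture a strong correlation between two things that don’t cause each other. A minimal sketch, again with invented numbers and plain Python:

```python
import random

random.seed(42)

# Hypothetical confounder: family income influences BOTH the hours of
# private tutoring a student gets and their test scores. Tutoring and
# scores then correlate strongly even though, in this toy model,
# tutoring itself does nothing.
income = [random.gauss(0, 1) for _ in range(1000)]
tutoring = [x + random.gauss(0, 0.5) for x in income]
scores = [x + random.gauss(0, 0.5) for x in income]

def correlation(a, b):
    # Pearson correlation coefficient, computed from scratch.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = (sum((x - ma) ** 2 for x in a) / n) ** 0.5
    sb = (sum((y - mb) ** 2 for y in b) / n) ** 0.5
    return cov / (sa * sb)

print(f"correlation(tutoring, scores) = {correlation(tutoring, scores):.2f}")
```

The correlation comes out strongly positive, yet mandating more tutoring would change nothing here, because income, not tutoring, drives the scores.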

Anyway, these are all good recommendations. I would only suggest adding one more: Who paid for this particular research?

Just because a particular organization (like the Gates Foundation) funds a study that ends up supporting their existing point of view (as has happened more than once), doesn’t mean the research is flawed.

Only that it should require even closer scrutiny before using it to make educational policy and spending millions of dollars to implement it.

Potential for a Bi-Literate Brain

The web was born around 25 years ago, and I’ll bet that not long after that researchers began studying how being online changes the human mind. With reports that often included dire warnings.

This recent study is no exception.

To cognitive neuroscientists, Handscombe’s experience is the subject of great fascination and growing alarm. Humans, they warn, seem to be developing digital brains with new circuits for skimming through the torrent of information online. This alternative way of reading is competing with traditional deep reading circuitry developed over several millennia.

Is that something to be alarmed about? Is a “digital brain”, one that has adapted to manage a “torrent” of online information, really all that bad?

I can accept that the process of reading material in analog form is very different from reading a hyperlinked document on a screen. But is one format better than the other? If the “brain is constantly adapting” can’t it learn techniques to do both well?

Our history seems to indicate we can.

The brain was not designed for reading. There are no genes for reading like there are for language or vision. But spurred by the emergence of Egyptian hieroglyphics, the Phoenician alphabet, Chinese paper and, finally, the Gutenberg press, the brain has adapted to read.

I’ll bet those first examples of written language were not Moby Dick-length novels. Probably more like Twitter-length messages. In fact, it’s only been within the past few centuries that a majority of people in western cultures could even read at all. Prior to that, printed materials of any length were usually consumed only by certain educated classes.

Anyway, I’m not sure the work of one researcher with a forthcoming book that “will look at what the digital world is doing to the brain” is reason to panic. In fact, the writer of this article ends with exactly the right approach.

Researchers say that the differences between text and screen reading should be studied more thoroughly and that the differences should be dealt with in education, particularly with school-aged children. There are advantages to both ways of reading. There is potential for a bi-literate brain.

Blame the Technology… Again

I really hate when popular media report research findings with headlines like this: “Students’ use of laptops in class found to lower grades”. Too many people won’t get past that blanket statement, never questioning the kind of superficial research behind it.

For the study, published earlier this year in the journal Computers & Education, research subjects in two experiments were asked to attend a university-level lecture and then complete a multiple-choice quiz based on what they learned.

The results were pretty much what you might expect.

Those students using laptops to take notes who were also asked to “complete a series of unrelated tasks on their computers when they felt they could spare some time”, such as searching for information, did worse on the quiz than those who didn’t do any of that stuff.

In a second part of the experiment, those who took paper and pencil notes while surrounded by other students working on computers did even worse.

Of course, the implicit assumption here is that lectures are an important vehicle for learning, not to mention that a multiple-choice quiz is a valid assessment of that learning. And that use of the technology was the primary factor in the low scores.

I wonder how the results would have differed if the researchers had divided the subjects into two groups: those who were interested in the subject matter, and those who couldn’t care less and were only participating for the twenty bucks.

Ok, without any kind of research to back it, I’m going to hypothesize that the single biggest factor in student learning is some kind of connection to the material. With or without a laptop.

Read Past the Headline

Between Twitter, my RSS feeds, and email, at least a dozen people in the past few days have pointed me to an item with the breathless headline “1-Year Educational iPad Pilot Complete: Students Writing Markedly Improved”.

Very exciting. We certainly could use more solid research on the effectiveness of technology like the iPad for instruction. 

However, this ain’t it.

The post is nothing more than the complete text of a press release.

A narrative of how iPads were used in the classes of one teacher at a relatively exclusive private school for boys.

With very few details to validate the “markedly improved” metric.

And written by the company that publishes the $10 app used in the “study”.

I wonder how often this PR piece was passed around in an effort to justify iPad purchases without reading past the headline or questioning the source.

Learning From Everyone

Mimi Ito’s specialty is “researching how young people are learning differently because of the abundance of knowledge and social connections in today’s networked world”.

She has heard the calls, from the president and others, for colleges to put more of their courses online and says that’s far from all we should be doing.

While I would be the last one to argue against getting more good educational material online and accessible, I do question whether our focus should be exclusively on classroom instruction.

Young people are desperate for learning that is relevant and part of the fabric of their social lives, where they are making choices about how, when, and what to learn, without it all being mapped for them in advance. Learning on the Internet is about posting a burning question on a forum like Quora or Stack Exchange, searching for a how to video on YouTube or Vimeo, or browsing a site like Instructables, Skillshare, and Mentormob for a new project to pick up. It’s not just professors who have something to share, but everyone who has knowledge and skills.

So, what are the implications for what we do in K12, especially high school? Should our focus continue to be exclusively on classroom instruction? Or the online clones of a traditional classroom found in most “virtual” schools?

Unnecessary Evil

Alfie Kohn, one of the smartest voices in the education reform discussion, has an interesting article about new research into the value of homework, one that includes a reminder of the importance of reading studies carefully “rather than relying on summaries by journalists or even by the researchers themselves”.

Kohn, who literally wrote the book on the subject, the wonderful The Homework Myth: Why Our Kids Get Too Much of a Bad Thing, starts by noting the significant lack of support for the instructional value of homework found in previous studies.

First, no research has ever found a benefit to assigning homework (of any kind or in any amount) in elementary school.

Second, even at the high school level, the research supporting homework hasn’t been particularly persuasive.

Third, when homework is related to test scores, the connection tends to be strongest — or, actually, least tenuous — with math.

This latest study focuses on math and science homework in high school, an area that Kohn says is one “where you’d be most likely to find a positive effect if one was there to be found”.

And the result of this fine-tuned investigation? There was no relationship whatsoever between time spent on homework and course grade, and “no substantive difference in grades between students who complete homework and those who do not.”

This result clearly caught the researchers off-guard. Frankly, it surprised me, too. When you measure “achievement” in terms of grades, you expect to see a positive result — not because homework is academically beneficial but because the same teacher who gives the assignments evaluates the students who complete them, and the final grade is often based at least partly on whether, and to what extent, students did the homework. Even if homework were a complete waste of time, how could it not be positively related to course grades?

Beyond the value of homework, or the lack thereof, Kohn’s discussion of the research process itself, and especially of how the researchers “reframe these results to minimize the stunning implications”, makes the whole article, footnotes and all, well worth your time.

Blame the Technology. Or the Students.

From the New York Times:

There is a widespread belief among teachers that students’ constant use of digital technology is hampering their attention spans and ability to persevere in the face of challenging tasks, according to two surveys of teachers being released on Thursday.

An English teacher quoted in the story complained, “I’m an entertainer. I have to do a song and dance to capture their attention,” and later asked, “What’s going to happen when they don’t have constant entertainment?”

However, is technology the problem? Or what it’s “doing to” kids?

Although I can sympathize to some degree, the English teacher’s statement and the opinions of a majority in the survey are a little disturbing. The whole foundation on which these studies are based* assumes that whatever is being done in the classroom is right and the kids are “wrong” in some way, due, of course, to their “constant use of digital technology”.

I wonder if anyone – researchers or subjects – seriously questioned whether what the students were asked to learn, the assignments they were given, or the instructional methods might, just might, be a major factor in their “shorter attention spans”.

Is technology to blame?

Or is a large part of the problem that our education system is largely unwilling to take a reflective look at itself, to reevaluate what today’s students need to know and how to best help them learn it?

*Admittedly I haven’t read either report so it’s possible I’m completely wrong. Wouldn’t be the first time.

No Evidence for Learning Styles

Over the past couple of decades that I’ve been involved in educational professional development, one of the key concepts pushed has been learning styles: the idea that some kids are verbal learners while others are visual types and still others kinesthetic, and that we need to adjust our instruction specifically to reach each of those groups.

However, in a short segment on NPR’s Morning Edition today, several psychologists who have examined the research behind the theory of learning styles report finding no basis for saying that teachers should tailor their instruction to different kinds of learners.

When he reviewed studies of learning styles, he found no scientific evidence backing up the idea. “We have not found evidence from a randomized control trial supporting any of these,” he says, “and until such evidence exists, we don’t recommend that they be used.”

While the research may or may not be valid (always read any research, especially involving humans, with a large dose of skepticism), I still believe that both kids and adults have styles of learning they prefer and are most comfortable with. It doesn’t mean they can’t learn any other way, just that they would rather not, given the option.

However, that doesn’t mean we should specifically adapt instruction for each group of learners. Instead we should be teaching our students how to adapt their learning abilities to the different situations that they’re likely to encounter throughout their lives. Certainly how to read a book, but also how to consume and understand other types of media, as well as how to create them.

Late in the piece, one speaker notes that, while there’s no research to back up the concept of learning styles, there is plenty of evidence showing that using a variety of approaches and regularly changing instructional styles does benefit all students.

Which only makes sense since I learned early in my career that teaching the same way all the time is boring, both for the kids and for me.

Going Against The Rules

Although it was published three years ago, I’m finally getting around to reading Brain Rules by John Medina.

Credit a combination of the book being highly recommended by several friends, Amazon selling the Kindle edition for only three bucks, and Medina being the opening keynote speaker at the ISTE conference later this month.

Anyway, I’m still in the very early parts of the book but I thought this quote from the introduction was worth passing along.

What do these studies [brain research] show, viewed as a whole? Mostly this: If you wanted to create an education environment that was directly opposed to what the brain was good at doing, you probably would design something like a classroom. If you wanted to create a business environment that was directly opposed to what the brain was good at doing, you probably would design something like a cubicle. And if you wanted to change things, you might have to tear down both and start over. [emphasis mine]

Certainly something to remember as I read on.

Do We Have Enough Evidence Yet?

According to a new study, “The tests that are typically used to measure performance in education fall short of providing a complete measure of desired educational outcomes in many ways.”

Beyond being ineffective at measuring student learning, these standardized testing programs (normally administered by states) have done little or nothing to improve scores on the national and international evaluations, the holy grail of education reformers.

The panelists — who include experts in assessment, education law and the sciences — examined over the past decade 15 incentive programs, which are designed to link rewards or sanctions for schools, students and teachers to students’ test results. The programs studied included high-school exit exams and those that give teachers incentives (such as bonus pay) for improved test scores.

The panel studied the effects of incentives, not by tracking changes in scores on high-stakes tests connected to incentive programs, but by looking at the results of “low-stakes” tests, such as the well-regarded National Assessment of Educational Progress, which aren’t linked to the incentives and are taken by the same cohorts of students.

The researchers concluded not only that incentive programs have not raised student achievement in the United States to the level achieved in the highest-performing countries but also that incentives/sanctions can give a false view of exactly how well students are doing. (The U.S. reform movement doesn’t follow the same principles that have been adopted by the other countries that policymakers often cite.)

No study is conclusive proof of anything, especially when it comes to matters of teaching and learning.

However, this is just one of a growing body of studies showing that our all-testing-all-the-time approach to American education, along with charters, value-add teacher evaluation, merit pay, and other favorite “reforms” of politicians and billionaires, is ineffective at improving student learning and a major waste of money and other resources.

So, are we ready yet to work on creating a genuinely new approach to public education, instead of ignoring the evidence and recycling old ideas that all the smart, rich people are sure will work?

Still Not Finding Merit in These Pay Plans

Last fall, the results of the “first scientifically rigorous review of merit pay in the United States” were released and the researchers found the financial incentives “produced no discernible difference in academic performance” (aka test scores).

Now a new, larger study, conducted by a Harvard economist who is responsible for designing some of these schemes, “examines the effects of pay-for-performance in the New York City public schools”.

And what did he find*?

Providing incentives to teachers based on school’s performance on metrics involving student achievement, improvement, and the learning environment did not increase student achievement in any statistically meaningful way. If anything, student achievement declined. [my emphasis]

The impact of teacher incentives on student attendance, behavioral incidences, and alternative achievement outcomes such as predictive state assessments, course grades, Regents exam scores, and high school graduation rates are all negligible. Furthermore, we find no evidence that teacher incentives affect teacher behavior, measured by retention in district or in school, number of personal absences, and teacher responses to the learning environment survey, which partly determined whether a school received the performance bonus.

When it comes to research, especially dealing with human behavior, the results of any one study should not be taken as definitive proof one way or another on the issue being studied.

Two showing the exact same results, however, should at least cause thoughtful people to question their beliefs and assumptions.

Now we just need to find some thoughtful people in leadership positions at the DOE and in Congress. States like Florida could use a few as well.

*Link to pdf of the study results.