Potential for a Bi-Literate Brain

The web was born around 25 years ago, and I’ll bet that not long after that researchers began studying how being online changes the human mind, often in reports that included dire warnings.

This recent study is no exception.

To cognitive neuroscientists, Handscombe’s experience is the subject of great fascination and growing alarm. Humans, they warn, seem to be developing digital brains with new circuits for skimming through the torrent of information online. This alternative way of reading is competing with traditional deep reading circuitry developed over several millennia.

Is that something to be alarmed about? Is a “digital brain”, one that has adapted to manage a “torrent” of online information, really all that bad?

I can accept that the process of reading material in analog form is very different from reading a hyperlinked document on a screen. But is one format better than the other? If the “brain is constantly adapting”, can’t it learn techniques to do both well?

Our history seems to indicate we can.

The brain was not designed for reading. There are no genes for reading like there are for language or vision. But spurred by the emergence of Egyptian hieroglyphics, the Phoenician alphabet, Chinese paper and, finally, the Gutenberg press, the brain has adapted to read.

I’ll bet those first examples of written language were not Moby Dick-length novels. Probably more like Twitter-length messages. In fact, it’s only been within the past few centuries that a majority of people in western cultures could read at all. Prior to that, printed materials of any length were usually consumed only by certain educated classes.

Anyway, I’m not sure the work of one researcher with a forthcoming book that “will look at what the digital world is doing to the brain” is reason to panic. In fact, the writer of this article ends with exactly the right approach.

Researchers say that the differences between text and screen reading should be studied more thoroughly and that the differences should be dealt with in education, particularly with school-aged children. There are advantages to both ways of reading. There is potential for a bi-literate brain.

Blame the Technology… Again

I really hate when popular media report research findings with headlines like this: “Students’ use of laptops in class found to lower grades”. Too many people won’t get past that blanket statement, never questioning the kind of superficial research behind it.

For the study, published earlier this year in the journal Computers & Education, research subjects in two experiments were asked to attend a university-level lecture and then complete a multiple-choice quiz based on what they learned.

The results were pretty much what you might expect.

Those students using laptops to take notes who were also asked to “complete a series of unrelated tasks on their computers when they felt they could spare some time”, such as searching for information, did worse on the quiz than those who didn’t do any of that stuff.

In the second experiment, those who took paper-and-pencil notes while surrounded by other students working on computers did even worse.

Of course, the implicit assumption here is that lectures are an important vehicle for learning, not to mention that a multiple-choice quiz is a valid assessment of that learning. And that use of the technology was the primary factor in the low scores.

I wonder how the results would have differed if the researchers had divided the subjects into two groups: those who were interested in the subject matter, and those who couldn’t care less and were only participating for the twenty bucks.

Ok, without any kind of research to back it up, I’m going to hypothesize that the single biggest factor in student learning is some kind of connection to the material. With or without a laptop.

Read Past the Headline

Between Twitter, my RSS feeds, and email, at least a dozen people in the past few days have pointed me to an item with the breathless headline “1-Year Educational iPad Pilot Complete: Students Writing Markedly Improved”.

Very exciting. We certainly could use more solid research on the effectiveness of technology like the iPad for instruction. 

However, this ain’t it.

The post is nothing more than the complete text of a press release.

A narrative of how iPads were used in the classes of one teacher at a relatively exclusive private school for boys.

With very few details to validate the “markedly improved” metric.

And written by the company that publishes the $10 app used in the “study”.

I wonder how often this PR piece was passed around in an effort to justify iPad purchases without reading past the headline or questioning the source.

Learning From Everyone

Mimi Ito’s specialty is “researching how young people are learning differently because of the abundance of knowledge and social connections in today’s networked world”.

She has heard the calls, from the president and others, for colleges to put more of their courses online and says that’s far from all we should be doing.

While I would be the last one to argue against getting more good educational material online and accessible, I do question whether our focus should be exclusively on classroom instruction.

Young people are desperate for learning that is relevant and part of the fabric of their social lives, where they are making choices about how, when, and what to learn, without it all being mapped for them in advance. Learning on the Internet is about posting a burning question on a forum like Quora or Stack Exchange, searching for a how-to video on YouTube or Vimeo, or browsing sites like Instructables, Skillshare, and Mentormob for a new project to pick up. It’s not just professors who have something to share, but everyone who has knowledge and skills.

So, what are the implications for what we do in K12, especially high school? Should our focus continue to be exclusively on classroom instruction? Or the online clones of a traditional classroom found in most “virtual” schools?

Unnecessary Evil

Alfie Kohn, one of the smartest voices in the education reform discussion, has an interesting article about new research into the value of homework, one that includes a reminder of the importance of reading studies carefully “rather than relying on summaries by journalists or even by the researchers themselves”.

Kohn, who literally wrote the book on the subject, the wonderful The Homework Myth: Why Our Kids Get Too Much of a Bad Thing, starts by noting the significant lack of support for the instructional value of homework found in previous studies.

First, no research has ever found a benefit to assigning homework (of any kind or in any amount) in elementary school.

Second, even at the high school level, the research supporting homework hasn’t been particularly persuasive.

Third, when homework is related to test scores, the connection tends to be strongest — or, actually, least tenuous — with math.

This latest study focuses on math and science homework in high school, an area that Kohn says is one “where you’d be most likely to find a positive effect if one was there to be found”.

And the result of this fine-tuned investigation? There was no relationship whatsoever between time spent on homework and course grade, and “no substantive difference in grades between students who complete homework and those who do not.”

This result clearly caught the researchers off-guard. Frankly, it surprised me, too. When you measure “achievement” in terms of grades, you expect to see a positive result — not because homework is academically beneficial but because the same teacher who gives the assignments evaluates the students who complete them, and the final grade is often based at least partly on whether, and to what extent, students did the homework. Even if homework were a complete waste of time, how could it not be positively related to course grades?

Beyond the value of homework, or the lack thereof, Kohn’s discussion of the research process itself, and especially of how the researchers “reframe these results to minimize the stunning implications”, makes the whole article, footnotes and all, well worth your time.

Blame the Technology. Or the Students.

From the New York Times:

There is a widespread belief among teachers that students’ constant use of digital technology is hampering their attention spans and ability to persevere in the face of challenging tasks, according to two surveys of teachers being released on Thursday.

An English teacher quoted in the story complained, “I’m an entertainer. I have to do a song and dance to capture their attention,” and later asked, “What’s going to happen when they don’t have constant entertainment?”

However, is technology the problem? Or what it’s “doing to” kids?

Although I can sympathize to some degree, the English teacher’s statement and the opinions of a majority in the survey are a little disturbing. The whole foundation on which these studies are based* assumes that whatever is being done in the classroom is right and the kids are “wrong” in some way, due, of course, to their “constant use of digital technology”.

I wonder if anyone – researchers or subjects – seriously questioned whether what the students were asked to learn, the assignments they were given, or the instructional methods used might, just might, be a major factor in their “shorter attention spans”.

Is technology to blame?

Or is a large part of the problem that our education system is largely unwilling to take a reflective look at itself, to reevaluate what today’s students need to know and how to best help them learn it?


*Admittedly I haven’t read either report so it’s possible I’m completely wrong. Wouldn’t be the first time.

No Evidence for Learning Styles

Over the past couple of decades that I’ve been involved in educational professional development, one of the key concepts being pushed has been learning styles. This is the idea that some kids are verbal learners while others are visual types and still others kinesthetic, and that we need to adjust our instruction specifically to reach each of those groups.

However, according to a short segment on NPR’s Morning Edition today, several psychologists have looked at the research behind the theory of learning styles and found no basis for saying that teachers should tailor their instruction to different kinds of learners. As the story reports about one of those researchers:

When he reviewed studies of learning styles, he found no scientific evidence backing up the idea. “We have not found evidence from a randomized control trial supporting any of these,” he says, “and until such evidence exists, we don’t recommend that they be used.”

While the research may or may not be valid (always read any research, especially involving humans, with a large dose of skepticism), I still believe that both kids and adults have styles of learning they prefer and are most comfortable with. It doesn’t mean they can’t learn any other way, just that they would rather not, given the option.

However, that doesn’t mean we should specifically adapt instruction for each group of learners. Instead we should be teaching our students how to adapt their learning abilities to the different situations that they’re likely to encounter throughout their lives. Certainly how to read a book, but also how to consume and understand other types of media, as well as how to create them.

Late in the piece, one speaker notes that, while there’s no research to back up the concept of learning styles, there is plenty of evidence showing that using a variety of approaches and regularly changing instructional styles does benefit all students.

Which only makes sense since I learned early in my career that teaching the same way all the time is boring, both for the kids and for me.

Going Against The Rules

Although it was published three years ago, I’m finally getting around to reading Brain Rules by John Medina.

What finally pushed me to it was a combination of the book being highly recommended by several friends, Amazon selling the Kindle edition for only three bucks, and Medina being the opening keynote speaker at the ISTE conference later this month.

Anyway, I’m still in the very early parts of the book but I thought this quote from the introduction was worth passing along.

What do these studies [brain research] show, viewed as a whole? Mostly this: If you wanted to create an education environment that was directly opposed to what the brain was good at doing, you probably would design something like a classroom. If you wanted to create a business environment that was directly opposed to what the brain was good at doing, you probably would design something like a cubicle. And if you wanted to change things, you might have to tear down both and start over. [emphasis mine]

Certainly something to remember as I read on.

Do We Have Enough Evidence Yet?

According to a new study, “The tests that are typically used to measure performance in education fall short of providing a complete measure of desired educational outcomes in many ways.”

Beyond being ineffective at measuring student learning, these standardized testing programs (normally administered by states) have done little or nothing to improve scores on the national and international evaluations, the holy grail of education reformers.

The panelists — who include experts in assessment, education law and the sciences — examined 15 incentive programs from the past decade, which are designed to link rewards or sanctions for schools, students and teachers to students’ test results. The programs studied included high-school exit exams and those that give teachers incentives (such as bonus pay) for improved test scores.

The panel studied the effects of incentives, not by tracking changes in scores on high-stakes tests connected to incentive programs, but by looking at the results of “low-stakes” tests, such as the well-regarded National Assessment of Educational Progress, which aren’t linked to the incentives and are taken by the same cohorts of students.

The researchers concluded not only that incentive programs have not raised student achievement in the United States to the level achieved in the highest-performing countries but also that incentives/sanctions can give a false view of exactly how well students are doing. (The U.S. reform movement doesn’t follow the same principles that have been adopted by the other countries policymakers often cite.)

No study is conclusive proof of anything, especially when it comes to matters of teaching and learning.

However, this is just one piece of a growing body of research showing that our all-testing-all-the-time approach to American education, along with charters, value-added teacher evaluation, merit pay, and other favorite “reforms” of politicians and billionaires, is ineffective at improving student learning and a major waste of money and other resources.

So, are we ready yet to work on creating a genuinely new approach to public education, instead of ignoring the evidence and recycling old ideas that all the smart, rich people are sure will work?

Still Not Finding Merit in These Pay Plans

Last fall, the results of the “first scientifically rigorous review of merit pay in the United States” were released and the researchers found the financial incentives “produced no discernible difference in academic performance” (aka test scores).

Now a new, larger study, conducted by a Harvard economist who is responsible for designing some of these schemes, “examines the effects of pay-for-performance in the New York City public schools”.

And what did he find*?

Providing incentives to teachers based on school’s performance on metrics involving student achievement, improvement, and the learning environment did not increase student achievement in any statistically meaningful way. If anything, student achievement declined. [my emphasis]

The impact of teacher incentives on student attendance, behavioral incidences, and alternative achievement outcomes such as predictive state assessments, course grades, Regents exam scores, and high school graduation rates are all negligible. Furthermore, we find no evidence that teacher incentives affect teacher behavior, measured by retention in district or in school, number of personal absences, and teacher responses to the learning environment survey, which partly determined whether a school received the performance bonus.

When it comes to research, especially dealing with human behavior, the results of any one study should not be taken as definitive proof one way or another on the issue being studied.

Two studies showing the exact same results, however, should at least cause thoughtful people to question their beliefs and assumptions.

Now we just need to find some thoughtful people in leadership positions at the DOE and in Congress. States like Florida could use a few as well.


*Link to pdf of the study results.


Still Not Much Value Added

Well, that didn’t take long.

Last month preliminary results from a study sponsored by the Gates Foundation were released that seemed to support the validity of using a “value added” system to evaluate teacher quality.

This month, someone else looking at the same data came to a very different conclusion.

But Economics Professor Jesse Rothstein at the University of California at Berkeley reviewed the Kane-Cantrell report and said that the analyses in it served to “undermine rather than validate” value-added-based measures of teacher evaluation.

“In other words,” he said in a statement, “teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations.”

“A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective.”

So, who’s right? This being education research, it probably means neither and both.

However, there’s one question that never seems to be addressed in all this research about teacher quality.

Are constant waves of standardized, mostly multiple choice exams – accompanied by a narrow school focus on test prep – the best way to improve student learning?

I’m gonna vote no.


Not Much Value Added

Preliminary results from a “$45 million study of teacher effectiveness” find that “growth in annual student test scores is a reliable sign of a good teacher”.

The central finding indicates that teachers with high “value-added” ratings are able to replicate that feat in multiple classrooms and in multiple years.

Other findings suggest that teachers with high “value-added” ratings are able to help students understand math concepts or demonstrate reading comprehension through writing.

The final report isn’t due for about a year but this small glimpse offers two major reasons to seriously question the research.

One, the study was paid for by the Gates Foundation, “a prominent advocate of data-driven analysis”.

And two, the study rests on a foundation of state standardized tests that produce an extremely narrow view of student learning.

Which would be fine if we want kids who read at a minimal level and are adept at performing basic arithmetic algorithms.

Not Much Merit In These Pay Plans

In his Class Struggle column this week, Jay Mathews spotlights a study which concludes that districts don’t necessarily need to pay more in order to find and keep good teachers.

They just need to do a better job of selling the idea that teacher pay isn’t all that bad. Especially if you can get two teachers to marry. Or something like that.

A marketing campaign to show students that teachers made more than they thought they made “would induce a 7 percent increase in the number of top-third students entering teaching each year (or an equivalent nationally of 4,000 additional top-third students above an estimated baseline of roughly 55,000 who enter today),” the report [from McKinsey and Co., the giant management consulting firm] said.

Paid training increased the number going into teaching by 11 percent. A 20 percent performance bonus to the top-performing 10 percent of teachers would produce the same 11 percent gain in top-third students.

But providing training costs money, something that is usually the first thing to go when politicians start cutting school budgets.

And that idea of performance bonuses? It lost some credibility this week after the release of findings from the “first scientifically rigorous review of merit pay in the United States” showing that paying big money incentives “produced no discernible difference in academic performance”.

Which, of course, did nothing to stop the Secretary of Education from pushing the concept as a major part of the Race to the Top competition or from sending, also this week, $442 million to a bunch of RTTT lottery-winning school districts so they can set up merit pay plans.

So, whatever happened to the idea of only paying for “research-based” concepts that have been demonstrated to be effective in improving student learning?

You know, the concept that was one of the cornerstones of No Child Left Behind.

And which has been consistently ignored by politicians and education “experts” since long before the law’s inception, going back to W Bush’s “Texas miracle”, which also turned out to be based on gut feelings and no proof.