What Does “Tech Savvy Student” Mean?

OK, how many of you “older” folks out there are tired of the whole “digital natives” vs. “digital immigrants” concept? As someone who didn’t grow up using computers but who is now very comfortable with networks, social media, mobile devices and the rest, I know I am.

It’s hard, however, to convince some of my colleagues that kids are not born with some magic innate technological talent. I try to tell them that, more than anything, their students simply have more time to spend playing with various electronic devices and absorbing all the little tricks that seem like genius to those who don’t. Call it the 10,000-hour rule in action.

However, knowing how to trick out a smartphone or understanding the complexities of Facebook doesn’t mean those students have any clue how to use all that power to advance their learning, or even know the basics of common programs like PowerPoint.

A little research to back up this idea comes from a new study conducted by the Economic & Social Research Council (ESRC). The subjects were college students in the UK, so the findings may not directly apply to kids in US classrooms, but I doubt they are far off from what we’d see in our overly large school district and other similar parts of the country.

  • 97.8 percent of students owned a mobile phone.
  • Just over three quarters — 77.4 percent — owned a laptop, and 38.1 percent owned a desktop computer.
  • 70.1 percent felt their access to computers was sufficient to meet their computing needs.
  • The mobile phone was chosen by 83.2 percent as the device students would miss the most if it was taken away.
  • A small minority of students don’t use email or have access to mobile phones.

Students 20 years old or younger reported being more engaged in instant messaging, texting, social networks and downloading video media than students who were aged 25 years or more. Only 4.3 percent of those 20 or younger never used social networking sites, and for those 35 or older this rose to 78.5 percent.

In other words, they’re coming to your classroom already fluent in IM, texting, social networks, and video downloads, and carrying some powerful tools for reading, writing, and collaboration.

So, what are we doing to leverage those communications skills and the devices in their pockets to improve their learning?

Sorry, I forgot it’s May. Testing season. No time to worry about all that learning stuff.

Does Teaching Make You Smarter?

Maybe not (whatever “smarter” means), but the practice of teaching certainly helps a person understand their subject much better.

Which is essentially the conclusion of a new study summarized on a recent episode of the 60-Second Science podcast.

Now a study finds that grad students who also teach show significant improvement in written research proposals, compared with grad students with no teaching requirement.

Differences in overall written quality among the students could not account for the results, because only specific skills among those analyzed showed improvement as a function of the teaching experience. So teaching may make STEM grad students better scientists. Not to mention better teachers.

I wonder if asking kids in grades below graduate school (like high school) to do more teaching and less being taught at might make them smarter.

Or at least give them a better understanding of the math and science we want them to learn.

Do We Have Enough Evidence Yet?

According to a new study, “The tests that are typically used to measure performance in education fall short of providing a complete measure of desired educational outcomes in many ways.”

Beyond being ineffective at measuring student learning, these standardized testing programs (normally administered by states) have done little or nothing to improve scores on the national and international evaluations, the holy grail of education reformers.

The panelists — who include experts in assessment, education law and the sciences — examined 15 incentive programs from the past decade, all designed to link rewards or sanctions for schools, students and teachers to students’ test results. The programs studied included high-school exit exams and those that give teachers incentives (such as bonus pay) for improved test scores.

The panel studied the effects of incentives, not by tracking changes in scores on high-stakes tests connected to incentive programs, but by looking at the results of “low-stakes” tests, such as the well-regarded National Assessment of Educational Progress, which aren’t linked to the incentives and are taken by the same cohorts of students.

The researchers concluded not only that incentive programs have not raised student achievement in the United States to the level achieved in the highest-performing countries but also that incentives/sanctions can give a false view of exactly how well students are doing. (The U.S. reform movement doesn’t follow the same principles that have been adopted by the other countries policymakers often cite.)

No study is conclusive proof of anything, especially when it comes to matters of teaching and learning.

However, this is just one piece of a growing body of research showing that our all-testing-all-the-time approach to American education, along with charters, value-added teacher evaluation, merit pay, and other favorite “reforms” of politicians and billionaires, is ineffective at improving student learning and a major waste of money and other resources.

So, are we ready yet to work on creating a genuinely new approach to public education, instead of ignoring the evidence and recycling old ideas that all the smart, rich people are sure will work?

Still Not Finding Merit in These Pay Plans

Last fall, the results of the “first scientifically rigorous review of merit pay in the United States” were released and the researchers found the financial incentives “produced no discernible difference in academic performance” (aka test scores).

Now a new, larger study, conducted by a Harvard economist who is responsible for designing some of these schemes, “examines the effects of pay-for-performance in the New York City public schools”.

And what did he find*?

Providing incentives to teachers based on school’s performance on metrics involving student achievement, improvement, and the learning environment did not increase student achievement in any statistically meaningful way. If anything, student achievement declined. [my emphasis]

The impact of teacher incentives on student attendance, behavioral incidences, and alternative achievement outcomes such as predictive state assessments, course grades, Regents exam scores, and high school graduation rates are all negligible. Furthermore, we find no evidence that teacher incentives affect teacher behavior, measured by retention in district or in school, number of personal absences, and teacher responses to the learning environment survey, which partly determined whether a school received the performance bonus.

When it comes to research, especially dealing with human behavior, the results of any one study should not be taken as definitive proof one way or another on the issue being studied.

Two showing the exact same results, however, should at least cause thoughtful people to question their beliefs and assumptions.

Now we just need to find some thoughtful people in leadership positions at the DOE and in Congress. States like Florida could use a few as well.


*Link to pdf of the study results.


Repeating History

Jay Mathews offers an extremely weak defense of NCLB and other recent “reform” efforts based on his interpretation of the recent annual report on American education from the Brookings Institution.

This is not exactly good news, but context is important. If we have managed to be the world’s most powerful country, politically, economically and militarily, for the last 47 years despite our less than impressive math and science scores, maybe that flaw is not as important as film documentaries and political party platforms claim. And if, after so many decades of being shown up by much of the rest of the developed world, we are improving, it might be time to be more supportive of what we are already doing to fix our schools. [my emphasis, not his]

Crap!

The fact that test scores of American students are “flat to slightly up” on one international test and have “improved” since 1995 on another is hardly validation for converting most schools in this country into test prep academies.

Those small increases in scores on international assessments relative to other countries are more likely due to kids learning to be better test takers over the past fifteen years than to a better understanding of math and science.

The bottom line is that the efforts Mathews wants us to support are doing nothing to “fix” schools.

If “the data show that we have been mediocre all along, as far back as 1964”, and we still organize our schools, instructional methods, and curriculum pretty much as we did in 1964 (which we do), maybe it’s finally time to consider changes to the fundamental structure of our education system.