That’s especially true in the education business, which, if you look closely, probably produces the most unreliable data you could possibly get.

But that doesn’t stop politicians, the media, and “experts” from latching onto polls, studies, and research to sell their pet reform plans, often without question and based only on a read of the executive summary.

For the rest of us who want to know a little more about the data before accepting the headlines, The Atlantic offers some advice in “How to Read Education Data Without Jumping to Conclusions.”

As readers and writers look for solutions to educational woes, here are some questions that can help lead to more informed decisions.

1. Does the study prove the right point? It’s remarkable how often far-reaching education policy is shaped by studies that don’t really prove the benefit of the policy being implemented. 

2. Could the finding be a fluke? Small studies are notoriously fluky, and should be read skeptically.

3. Does the study have enough scale and power? …the million-dollar question is whether the study was capable of detecting a difference in the first place. (A quick simulation after this list shows what this point and the previous one look like in practice.)

4. Is it causation, or just correlation? Correlation … does not indicate causation. In fact, it often does not.
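Points 2 and 3 are easy to see for yourself. Here’s a toy Python sketch, with every number invented purely for illustration, that simulates many small “studies” comparing two groups of students:

```python
# Toy illustration of points 2 and 3: simulate many small "studies"
# comparing two groups. All parameters here are invented for illustration.
import random
import statistics

random.seed(1)

def run_study(n, true_effect):
    """Simulate one study: n students per group, scores on a standard scale."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Point 2: even with NO real effect, tiny studies produce big "findings".
fluke_effects = [run_study(n=10, true_effect=0.0) for _ in range(1000)]
big_flukes = sum(abs(e) > 0.5 for e in fluke_effects)
print(f"n=10 per group, zero real effect: {big_flukes}/1000 studies "
      "showed an 'effect' bigger than half a standard deviation")

# Point 3: with a modest REAL effect, a small study is all over the map,
# while a larger one lands close to the truth.
for n in (10, 400):
    estimates = [run_study(n=n, true_effect=0.2) for _ in range(1000)]
    print(f"n={n:4d} per group: estimates range roughly "
          f"{min(estimates):+.2f} to {max(estimates):+.2f} "
          "(true effect is +0.20)")
```

Run it and roughly a quarter of the zero-effect mini-studies show an “effect” of half a standard deviation or more, while the small studies of a real effect scatter from strongly negative to wildly positive. That’s what “fluky” and “underpowered” mean in practice.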

The fact that too many people don’t know the difference between those two concepts in number 4 is a direct indictment of the K-12 math curriculum. It also doesn’t say much for those statistics courses many educators are required to take during their advanced degree programs.
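For anyone who wants the distinction made concrete, here’s a minimal sketch of a confounder at work, again with every number invented. In this toy model, family income drives both tutoring hours and test scores, so the two correlate strongly even though tutoring has, by construction, zero causal effect:

```python
# Minimal sketch of correlation without causation via a hidden confounder.
# All numbers are made up; requires Python 3.10+ for statistics.correlation.
import random
import statistics

random.seed(2)

income   = [random.gauss(50, 15) for _ in range(5000)]     # hidden confounder
tutoring = [0.5 * x + random.gauss(0, 5) for x in income]  # driven by income
scores   = [0.8 * x + random.gauss(0, 5) for x in income]  # also driven by income

r = statistics.correlation(tutoring, scores)
print(f"Correlation between tutoring and scores: {r:.2f}")
# Prints a strong positive correlation (around 0.8 here), yet in this model
# adding tutoring hours would change no one's score.
```

A headline writer looking at that data would say tutoring boosts scores. The model says otherwise.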

Anyway, these are all good recommendations. I would only suggest adding one more: Who paid for this particular research?

Just because a particular organization (like the Gates Foundation) funds a study that ends up supporting its existing point of view (as has happened more than once) doesn’t mean the research is flawed.

Only that the findings deserve even closer scrutiny before anyone uses them to make educational policy and spends millions of dollars on implementation.