One of the most essential questions education research needs to answer is what makes some schools more effective than others after controlling for student characteristics. Research tells us that inputs like money, class size (for most students), and even teacher credentials (like certification or years of experience) have no consistent relationship with achievement, and yet our ed policy is designed around these very things. So what’s a policy wonk to do?

A brand-new, fascinating study (attached) from the Harvard duo Will Dobbie and Roland Fryer (yes, that Fryer of Freakonomics fame) claims to identify in-school practices of effective schools that explain about 50% of the variance in student achievement results. (See also the online appendix, which has more details about the data set.)

Let me try to explain the above sentence for the not-so-statistically-inclined ed wonks. If you still have questions about what the study means (especially after you read it), let me know.

One definition of school “effectiveness” is results on standardized achievement tests: the better the test scores, the more effective the school. Researchers call the spread in scores across schools, from high-performing to low-performing, the variance. Dobbie and Fryer claim to have identified practices that together explain at least half of the difference in results between high-performing and low-performing schools, once the researchers control for student demographic variables like race, class, gender, and prior achievement.
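For readers who want to see what “explaining variance” means in practice, here is a small sketch. The numbers below are made up for illustration (they are not from the study): we fit a simple line predicting hypothetical school scores from a hypothetical practice index, and compute R², the share of the score variance the index accounts for. A finding of “50% of the variance explained” corresponds to an R² of about 0.5.

```python
# Toy illustration (hypothetical data, not the study's) of "variance explained."
scores = [62, 70, 75, 81, 88, 93]  # hypothetical school mean test scores
index  = [1, 2, 2, 3, 4, 5]        # hypothetical five-practice index

n = len(scores)
mean_y = sum(scores) / n
mean_x = sum(index) / n

# ordinary least squares slope and intercept
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(index, scores))
var_x = sum((x - mean_x) ** 2 for x in index)
slope = cov / var_x
intercept = mean_y - slope * mean_x

predicted = [intercept + slope * x for x in index]
ss_res = sum((y - p) ** 2 for y, p in zip(scores, predicted))
ss_tot = sum((y - mean_y) ** 2 for y in scores)
r_squared = 1 - ss_res / ss_tot  # share of the variance "explained" by the index
print(round(r_squared, 2))       # → 0.96
```

In this toy data the index explains about 96% of the variance; Dobbie and Fryer’s claim is the same kind of statement, with a figure of roughly half.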

These five practices are

  1. frequent teacher feedback, (pp. 7-8 from study, pp. 5-6 from online appendix)
  2. data-driven instruction, (p. 8 from study, pp. 6-7 from online appendix)
  3. high-dosage tutoring, (pp. 8-9 from study, pp. 9-10 from online appendix)
  4. increased instructional time, (p. 9 from study, and pp. 8-9 from online appendix) and
  5. a relentless focus on academic achievement (pp. 9-10 from study, and pp. 10-12 from online appendix).

*Page numbers indicate where to find information in the study about how they defined these things.

The researchers also looked at whether each of these practices had an effect individually, as opposed to only as part of this “index” of five practices. The first four did, which indicates that each of those four practices contributes something “real” and “distinct” to the index result, rather than needing the combination to get any noticeable effect. The fact that #5 is left out could mean that a “relentless focus on academic achievement” alone doesn’t really matter if you don’t also have, for example, “frequent teacher feedback.”

A few other important findings of the study:

  • The researchers check whether their five practices are better at explaining achievement in their dataset than “traditional inputs.” They find that their five practices are much better at explaining results.
  • The researchers also check three other “models” of schooling (the “whole child” approach, the teacher quality approach, and the “No Excuses” approach) to see whether these approaches explain the results, or whether any results from the approaches can really be attributed to the presence of the five practices. They find that none of these models explains the variance like their five practices. The “whole child” approach appears to have a slight negative impact when examined alone, and when combined with the five practices it has no statistical impact at all. Both the teacher quality approach and the “No Excuses” approach have a positive impact, but when combined with the five practices, the impact of these two models is significantly diminished (teacher quality) or disappears (No Excuses). This may indicate that the “No Excuses” school philosophy is not what drives the success; rather, the five practices that each of these schools also uses are responsible for the result.

It is important to note that this is not an experimental study. In other words, these results should not be described as these five things CAUSING achievement to rise. Rather, these five things are observable in schools that do well and explain the results. With any observational study, there is always the risk that something the researchers did not account for is actually causing the results we see and is closely correlated with these five practices. Roland Fryer and Will Dobbie would now have to introduce these practices in schools as part of a randomized experiment to be able to say that they CAUSE achievement to rise.
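To make the omitted-variable worry concrete, here is a small made-up simulation (nothing to do with the study’s actual data). An unmeasured factor, say school leadership quality, drives both adoption of a practice and achievement. The practice has zero causal effect on scores, yet it still correlates strongly with them, which is exactly why an observational study can’t prove causation on its own.

```python
# Toy simulation of omitted-variable bias (hypothetical, not from the study).
import random

random.seed(0)

schools = []
for _ in range(500):
    hidden = random.gauss(0, 1)               # unmeasured factor, e.g. leadership quality
    practice = hidden + random.gauss(0, 1)    # practice adoption driven by the hidden factor
    score = 2 * hidden + random.gauss(0, 1)   # achievement driven ONLY by the hidden factor
    schools.append((practice, score))

# Pearson correlation between practice and score
n = len(schools)
mx = sum(p for p, _ in schools) / n
my = sum(s for _, s in schools) / n
cov = sum((p - mx) * (s - my) for p, s in schools) / n
sx = (sum((p - mx) ** 2 for p, _ in schools) / n) ** 0.5
sy = (sum((s - my) ** 2 for _, s in schools) / n) ** 0.5
corr = cov / (sx * sy)
print(round(corr, 2))  # strong positive correlation despite zero causal effect
```

A randomized experiment breaks this problem by assigning the practice independently of any hidden factor, which is why the authors would need one to claim causation.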

I hope you found this as interesting as I did. Mississippi First has been trying to do a study in Mississippi that has a very similar methodology of using site visits and interviews to detail actual in-school practices of effective schools in Mississippi as a way of better understanding what our ed policy for turning around low-performing schools should look like. Special shout-out to Erika Berry who helped us think about this study this summer and was kind enough to use her Vandy library access to get me a PDF copy of this study!


One Comment on “For Ed Wonks: New Study on the Characteristics of High Performing Schools”

  1. Hi there! I’m very interested in the “Whole Child” model so I was interested in the following finding:
    “The ‘whole child’ approach appears to have a slight negative impact when examined alone, and when combined with the five practices it has no statistical impact at all.”

    Is this saying that schools which utilize the “Whole Child” approach and the five practices produce no increased impact on achievement, as in the “Whole Child” approach negates the impact potential of the five practices?
