The Proof is in the Pudding: But what if the pudding is rancid?

Evidence-based teaching, evidence-based decisions, data-driven instruction and school improvement: these are the terms driving the school reform movement as we seek to make our nation’s students competitive with the rest of the world.  Standardized curricula and assessments purport to be the silver bullets that will help students achieve meaningful outcomes, and they tout data gathered from research as evidence of their effectiveness.

There is no doubt that teaching strategies and educational theories with little evidence to support them, and even some that have been shown to be invalid, continue to be passed along through tradition or myth. New teachers often rely on mentors and more experienced colleagues to find what works with students, perpetuating methods that may or may not lead to student improvement.  Strategies promising research-backed evidence of success, therefore, can look very attractive, especially to those outside the education system who may lack the knowledge and expertise required to run a successful classroom or school.

But what if this evidence is based on bad science?

Challenges of prescribed curricula: Let’s start with the assumption that any one practice or set of practices will be adequate to meet the needs of this country’s diverse student populations.  Adopting such practices wholesale can lead to the failure of students who need alternate or multiple approaches. Teachers who are trained to cover a curriculum rather than to find the best ways to reach individual students may adopt a kind of learned helplessness as they are discouraged from finding innovative and creative solutions.

What are we measuring? Educational outcomes aren’t always as clear-cut as curriculum developers would have us believe.  Scientific research involves randomization and the isolation of variables that might confound results.  How can researchers separate issues of poverty, health, welfare, family structure and other social influences from the impact of the tested ‘treatment’ in such a way that any of these studies could be replicated?  Current measurements also seem to be missing data on how students persist when challenged and what strategies they use to gain deeper understanding.  This information would give parents, students and teachers a much deeper knowledge of how children learn.

Who are we measuring? When evaluating a researched curriculum, it is important to know where it was tested.  Curricula are often promoted for use with groups on which they have never been tested. A phonics curriculum aimed at struggling readers may end up in classrooms with children who learn quickly, with no evidence that the method benefits them and no assurance that it does not actually discourage their learning.  Affluent schools with fewer social problems, if used as test sites, may skew results because their students may already perform better on measurement tests created from a pedagogy relevant to their culture. School culture may also dictate how well certain methods can be delivered, and those results may not translate to poorly funded or poorly managed schools.

How are we measuring? External validity (the extent to which a study’s results can be generalized to other situations) is another critical feature of sound research.  The teaching used to test a curriculum is often done by researchers or research educators and may not resemble the kind of teaching that goes on in actual classrooms or school buildings.  Additionally, when classroom teachers who are inadequately prepared to deliver a new curriculum are part of the test group, this validity is again threatened.

The pitfalls of basing teaching on data: Since the explosion of standardized testing, educators may have access to a plethora of data, so much of it that acting on all of it in a thoughtful and analytical way can be impossible.  Data collection and processing require a tremendous amount of expertise and resources that schools and local education agencies simply don’t have, leaving them in the position of accepting presented results as ‘facts’ and answers, devoid of critical analysis. And in the face of so much data, other critical data about school problems may be ignored if no ‘solution’ is offered along with it, leaving the problem unaddressed and unstudied.

Teachers are the experts in what makes learning work: Data-driven decisions about instruction don’t have to leave teachers behind.  In fact, this movement may provide teachers with the opportunity to regain control of their profession from those who know very little about the complexities of the student/teacher relationship. Professional development focused on how to evaluate education research, along with the creation of learning communities within schools, can give teachers the opportunity and knowledge to discuss emerging research and modify accepted techniques in ways that work best for their students.

Great teachers combine personal experiences and intuition with thoughtful and well-researched evidence and, if given the proper support, can make data work for schools and for student success.

Other Resources:

http://www.alfiekohn.org/article/one-size-fits-education-doesnt-work/

http://media.education.gov.uk/assets/files/pdf/b/ben%20goldacre%20paper.pdf

http://www.readingonline.org/articles/art_index.asp?HREF=/articles/rt/9-03_column/