
The Reading Crisis Lawsuit: When Evidence-Based Research Takes a Back Seat

The recent lawsuit filed by Massachusetts families against the state and reading curriculum publishers highlights a critical issue in education: the disconnect between evidence-based reading instruction and actual classroom practice. The families point to multiple stakeholders responsible for how their children were left behind (see one article on the lawsuit below). But there’s a deeper problem at play – one that stems from how we evaluate educational research and use it to make curriculum decisions.

How MA Evaluates Curriculum

When you go to the Massachusetts literacy curriculum review page, the state does not present any evidence aligned with the Every Student Succeeds Act (ESSA, 2015) standards for evaluating educational product research. Instead, each review shows an hourglass icon and cites a 2018 policy brief explaining that the educational research needed to make curriculum decisions is hard to gather (image from a product review below). The most commonly used products may not have any research on them; see this list of MA curricula and district adoption rates.

Every curriculum reviewed in CURATE has this rationale for the “Impact on Learning” section.

While that was true in 2018, dozens of research studies on literacy products have been published since 2021 that could be evaluated and reported to districts. Over 100 products have been reviewed by Johns Hopkins’ Evidence for ESSA, with citations for each study on each product. The Arizona Department of Education maintains a publicly available list of every reading product, with links to their studies, on its Move On When Reading (MOWR) website, and it is now much more common for products to publish their research on ERIC.Ed.gov.

Why Are There So Few Longitudinal Studies?

The 2018 brief also points out that we need more longitudinal studies to see how programs used in grades K-2 affect students’ reading abilities in grades 3-5. Part of the problem is our research incentive structure: the top ratings from the What Works Clearinghouse (WWC) and Johns Hopkins’ Evidence for ESSA don’t accept longitudinal research. They require that the treatment (the program being studied) be withheld from at least a portion of the students in the district. A school leader will agree to do this for a short time to study the program, but if it’s working, they want to roll it out to everyone right away. Currently, longitudinal research is considered ESSA Tier 3 (Promising), while short-term treatment/control or comparison studies qualify for Tiers 1 and 2 (Strong and Moderate). We need to create a value proposition for longitudinal research in the ESSA evidence system to incentivize publishers to partner with researchers to produce these reports.

Another misalignment with real consequences is that the size of an intervention’s impact is not taken into account in the rating. Take the case of Fountas and Pinnell’s Leveled Literacy Intervention (LLI). Despite receiving a strong rating from WWC and Johns Hopkins’ Evidence for ESSA, the underlying study is from 2009 and covers only the intervention version of the program, not the Tier 1 core instruction used by most schools. The WWC review shows a less than 1.5-point difference between the treatment and control groups in one study. It’s unfortunate that the Strong rating has nothing to do with the size of the impact, only the study design. In fact, there are studies with Promising ratings whose measured effect size is 0.00. This kind of inconsistency in evaluation standards leaves educators, districts, and families in limbo, unsure of what truly constitutes “evidence-based” instruction.
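
For readers unfamiliar with the metric, here is a sketch of how reviewers typically quantify impact: the standardized mean difference (Cohen’s d; WWC reports the closely related Hedges’ g), which divides the raw treatment-control score difference by the pooled standard deviation:

$$
d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
$$

On a hypothetical assessment with a 15-point standard deviation, the 1.5-point difference above would work out to d = 1.5/15 = 0.10, a small effect; a design-only rating treats that study the same as one showing d = 0.50.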

We need a three-pronged solution:

  1. Standardized evaluation criteria that recognize various forms of research, including longitudinal studies. The International Centre for EdTech Impact, WiKIT, and the International Certification of Evidence of Impact in Education (ICEIE) are working on this!
  2. Better alignment between state requirements and major research evaluators’ standards
  3. A commitment to improving access to the research behind schools’ reading instruction methods, ensuring those methods are backed by consistent research validation

In the meantime, there are numerous organizations that review research and products. Each has a different purpose, so reviewing product research is not a one-stop shop: The Reading League, the EdSurge & ISTE EdTech Index, Digital Promise certifications, and the International Certification of Evidence of Impact in Education (ICEIE), which maintains a global database.

The current situation in Massachusetts serves as a warning: when we ignore or dismiss research-based evidence in education, our students pay the price. As these families’ lawsuit shows, the cost of inaction is too high, both for states that fail to require research and for companies that fail to conduct it.

The path forward requires acknowledging that while no research is perfect, dismissing all product research creates a dangerous vacuum in educational decision-making. We must work to bridge the gap between research, policy, and practice – our students’ literacy depends on it.