As evidence-based studies increasingly assess the effectiveness of educational products in schools, it is important to understand the tools and resources used to evaluate the products meant to give students lifelong classroom skills. These evaluations uphold rigorous standards, aiming to provide students in every school with the best possible education.
Who runs Evidence for ESSA?
One organization at the forefront is Evidence for ESSA, part of Johns Hopkins University’s Center for Research and Reform in Education. It assesses educational products and certifies them across three tiers: Strong, Moderate, and Promising. Certification by this ESSA research review clearinghouse signifies that a product has undergone a rigorous study in a school-based setting. (Check out this article to learn more about other research clearinghouses and organizations.) These high standards bridge the gap between the education and research realms, equipping educators to evaluate the products used in their classrooms.
Evidence for ESSA recently published its updated 2.0 standards for evaluating education products. Staying informed about these updates, and understanding the reasons behind them, will help you navigate the evolving landscape effectively.
More opportunities to have rigorous research reviewed
The change with the most considerable impact is the acceptance of post-hoc, or retrospective, research studies. Previously, a researcher had to be hired before students began using a product or program. Now, success stories shared with a company can be explored by “looking back” at learning, using a rigorous design and analysis process to provide credible evidence of effectiveness. These studies are cheaper and much faster than “real-time” learning studies.
Need a refresher on Evidence for ESSA or the ESSA Tiers of Evidence? Click to review the articles.
Below, “1.0 standards” refers to the first version of Evidence for ESSA’s standards, and “2.0 standards” refers to the updated version.
New Study Designs Allowed
- 1.0 standards: Studies had to compare experimental and control groups using specific methods like random or quasi-experimental assignments.
- 2.0 standards: Studies still require comparison groups, but the accepted methods now include regression discontinuity designs (a simplified sketch follows this list). Non-experimental comparisons and single-case design studies are no longer accepted.
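For the data-minded reader, here is a minimal sketch of the regression discontinuity idea in Python. The cutoff score, sample size, and effect are all hypothetical, and a real review would use local regression and robustness checks, but the core logic is the same: students on either side of a cutoff are compared, and the jump at the cutoff estimates the program’s effect.

```python
# Minimal sketch of a regression discontinuity design (RDD), one of the
# comparison methods now accepted under the 2.0 standards. All data and
# numbers here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: students below a screening-score cutoff receive
# the program; those above it serve as the comparison group.
cutoff = 50.0
screening = rng.uniform(0, 100, 500)   # assignment variable
treated = screening < cutoff           # program eligibility rule
outcome = 0.4 * screening + 8.0 * treated + rng.normal(0, 5, 500)

# Fit a separate linear trend on each side of the cutoff, then compare
# the two predictions at the cutoff itself. The jump estimates the
# program effect (about 8 points in this simulated example).
left = np.polyfit(screening[treated], outcome[treated], 1)
right = np.polyfit(screening[~treated], outcome[~treated], 1)
effect = np.polyval(left, cutoff) - np.polyval(right, cutoff)
print(f"Estimated effect at the cutoff: {effect:.2f}")
```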
Updated Time Frame
- 1.0 standards: Studies had to be conducted from 1990 (or 2000 for technology approaches) to the present.
- 2.0 standards: Studies must have been conducted from 2000 to the present.
Retrospective or Post-hoc Studies Allowed
- 1.0 standards: Post-hoc studies were not considered.
- 2.0 standards: The new version allows for post-hoc studies but considers them Promising (Tier 3).
Updated Attrition Rules
- 1.0 standards: Differential attrition of more than 15 percentage points between experimental and control groups led to rejection.
- 2.0 standards: Attrition must be similar between groups; studies with differential attrition of more than 15 percentage points are rejected (a worked sketch follows this list). Attrition is not assessed for post-hoc studies.
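The differential-attrition check is simple arithmetic, and a short sketch may make it concrete. The enrollment and completion counts below are hypothetical:

```python
# Minimal sketch of the differential-attrition check described above.
# All counts are hypothetical, for illustration only.
def differential_attrition(start_exp, end_exp, start_ctrl, end_ctrl):
    """Return the gap between group attrition rates, in percentage points."""
    attrition_exp = 100 * (start_exp - end_exp) / start_exp
    attrition_ctrl = 100 * (start_ctrl - end_ctrl) / start_ctrl
    return abs(attrition_exp - attrition_ctrl)

# Example: 200 students start in each group; 150 finish in the
# experimental group (25% attrition) and 190 in the control group (5%).
gap = differential_attrition(200, 150, 200, 190)
print(f"Differential attrition: {gap:.0f} percentage points")
print("Rejected" if gap > 15 else "Accepted")  # 20 points -> rejected
```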
SEL and Attendance Measures Revised
- 1.0 standards: There were no specific criteria or guidelines for evaluating Social-Emotional Learning (SEL) and attendance measures. The primary focus was on academic achievement, and SEL and attendance were not considered separately.
- 2.0 standards: The new version introduces specific categories and variables for Social-Emotional Learning (SEL) and attendance studies. They have separate guidelines and color-coded markings for interpreting results.
Reporting and Summary Pages Revised
- 2.0 standards: Reporting for SEL and attendance studies is more detailed. Summary pages show outcomes and average effect sizes for each category, considering all variables measured within that category (a brief sketch follows this list).
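To illustrate how a summary page might average effect sizes within a category, here is a minimal sketch. The category names and effect sizes are hypothetical; Evidence for ESSA’s actual computations may weight or group studies differently.

```python
# Minimal sketch of averaging per-variable effect sizes by category.
# Category names and effect sizes are hypothetical.
from statistics import mean

results = {
    "SEL": {"self-regulation": 0.21, "social skills": 0.15},
    "Attendance": {"daily attendance": 0.08, "chronic absence": 0.12},
}

for category, variables in results.items():
    avg = mean(variables.values())
    print(f"{category}: average effect size = {avg:.2f} "
          f"across {len(variables)} measured variables")
```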
These changes aim to improve the evaluation process and raise the quality of evidence for educational programs. Requiring comparison groups and specific study designs, and excluding weaker analysis methods, gives us a clearer picture of which programs have a positive impact.
The release of version 2.0 of the ESSA guidelines presents exciting opportunities for expanded research in the field. Among the areas gaining enhanced attention is social-emotional learning (SEL). The updated criteria introduce specific categories and variables for SEL studies, providing a well-structured framework. This creates an exciting opportunity for researchers to examine the effectiveness of SEL interventions, develop reliable tools for measuring SEL outcomes, and investigate the impact of SEL on key domains such as academic achievement, problem behaviors, social relationships, and emotional well-being.
Another area of increased significance is attendance. Its inclusion as an additional outcome of interest in the new criteria highlights the pivotal role attendance plays in educational contexts. Researchers now have a valuable opportunity to examine the relationship between attendance and academic achievement, explore interventions to improve attendance rates, and rigorously evaluate the effectiveness of programs targeting attendance.
When evaluating educational programs and choosing the most effective ones for your students, it’s crucial to consider these changes. Stay updated, continue learning, and make well-informed decisions to ensure the best possible education!