The Gap-Bridger
Establish critical validity and reliability milestones for your assessment tool.
Bridging the Gap: Introducing Research-Validated Measure Expert Reviews
In the complex world of educational assessment, timing is everything. Many companies develop innovative assessment tools but find themselves in a challenging waiting game when it comes to formal state or national reviews. Today, we’re excited to introduce a game-changing service that helps EdTech companies navigate this critical path: Research-Validated Measure Expert Reviews.
The Assessment Validation Challenge
Educational assessment tools face an increasingly rigorous landscape. Formal reviews by bodies like the National Center on Intensive Intervention (NCII) are highly competitive and infrequent. Companies can spend years developing exceptional tools, only to find themselves stuck in a validation bottleneck.
What Are Research-Validated Measure Expert Reviews?
Our new service provides a crucial intermediate milestone for assessment tools. It’s not just a preliminary review—it’s a comprehensive expert evaluation that:

🏅 Confirms your tool’s effectiveness for users
🔍 Provides in-depth technical analysis
📊 Offers actionable insights for improvement
🚀 Prepares your tool for future formal reviews
Why Should an Assessment Be Validated?
For Assessment Developers
- Get expert guidance before formal review cycles
- Identify and address potential validation gaps
- Build a robust evidence base incrementally
For Educational Institutions
- Gain confidence in emerging assessment tools
- Access more innovative measurement approaches
- Support the development of cutting-edge educational technologies
What Does It Look Like to Validate Your Assessment Measure?

A Bridge, Not a Destination
It’s important to understand that these expert reviews are not a replacement for formal certification. Instead, they’re a strategic stepping stone: a way to build momentum and evidence while preparing for more comprehensive reviews. Each step of the validation process supports sales, helps recruit additional research sites, and improves implementation for users.
Who Can Benefit?
- EdTech startups developing assessment tools
- Established educational technology companies
- Curriculum developers creating new measurement approaches
- Researchers exploring innovative assessment methodologies
Case in Point
Consider the journey of a recent client. Their reading assessment tool was created to accompany their curriculum, but with a few adjustments, it was validated as a diagnostic tool. Results showed that the 95 Phonics Core Program Unit Assessments were positively correlated with iReady Reading and STAAR (see the validity brief summarized below). Our expert review helped them:

- Identify ways to improve measurement consistency
- Strengthen their statistical evidence
- Develop a clear roadmap for future validation
Getting Started
Ready to take your assessment tool to the next level? Our team of Ph.D. researchers, led by Dr. Rachel Schechter, brings decades of experience from leading educational publishers and research institutions.
Conclusion
In educational assessment, standing still means falling behind. Our Research-Validated Measure Expert Reviews offer a proactive approach to tool development, helping innovative companies turn promising ideas into robust, trusted assessments.

95-PCP-Unit-Assessment-Validity-Brief.pdf
This study found sufficient evidence indicating that 95 Phonics Core Program Unit Assessments are valid literacy measures in the context of 95 PCP program implementation. This finding is based on indicators of internal and external validity in a relevant, representative sample of U.S. students. First, the 95 PCP Unit Assessments showed strong correlations across the various checkpoints throughout the year, especially between unit tests in close proximity and with overlapping domains. The relative strength of these internal correlations demonstrated convergent validity and supported the stationarity assumption of repeated measures within the same domain. In sum, the analysis of internal validity showed that the assessments reliably measured the literacy skills relevant to the 95 Phonics Core Program.
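To make the internal-validity check concrete, here is a minimal sketch of the kind of correlation analysis described above. The column names and scores are entirely hypothetical and are not drawn from the actual 95 PCP data; the point is only to show how pairwise correlations between repeated unit measures can be inspected.

```python
import pandas as pd

# Hypothetical per-student scores on three consecutive unit
# assessments; values are invented for illustration only.
scores = pd.DataFrame({
    "unit_3": [78, 85, 62, 91, 70, 88, 74, 96],
    "unit_4": [80, 83, 65, 89, 72, 90, 71, 94],
    "unit_5": [75, 88, 60, 93, 69, 85, 77, 97],
})

# Pairwise Pearson correlations between units. Strong correlations,
# especially between adjacent units with overlapping domains, are one
# indicator of convergent validity for repeated measures.
internal_corr = scores.corr(method="pearson")
print(internal_corr.round(2))
```

In a real analysis, each column would hold scores for hundreds of students, and the resulting matrix would be examined for the pattern the brief describes: stronger correlations between checkpoints that are close in time and content.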

Edpuzzle Tier III Efficacy Brief
This study found sufficient evidence indicating that Edpuzzle’s teacher-created and product-created quizzes are valid measures for understanding student math and reading performance. Star Math and Star Reading are computer-adaptive assessments that measure K-12 students’ growth in literacy and math skills deemed appropriate for their grade level. Nineteen pairs of scores were compared across the two assessment systems. Ninety-five percent of these correlations were of large or medium strength, indicating that students who scored higher on Edpuzzle quizzes also tended to score higher on the Spring 2023 Star assessments. The relationships were particularly strong in grades 5 and 6 for the Edpuzzle Originals. Across the math Edpuzzles, correlations were large in half of the grades measured.
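For readers curious about the mechanics, the sketch below shows how a single quiz-to-benchmark correlation might be computed and labeled, using SciPy and Cohen’s conventional thresholds (0.1 small, 0.3 medium, 0.5 large), which is the usual basis for “medium” and “large” labels in briefs like these. The scores are invented for illustration and do not come from the Edpuzzle study.

```python
from scipy.stats import pearsonr

# Hypothetical paired scores for one grade: quiz percentages and
# scaled benchmark scores. A study like the one above would repeat
# this for each of its score pairs (19 pairs in the Edpuzzle brief).
quiz = [62, 75, 80, 55, 90, 68, 72, 85, 78, 60]
benchmark = [480, 510, 530, 450, 560, 495, 505, 545, 520, 470]

r, p = pearsonr(quiz, benchmark)

# Classify the correlation magnitude using Cohen's (1988) conventions.
if abs(r) >= 0.5:
    strength = "large"
elif abs(r) >= 0.3:
    strength = "medium"
elif abs(r) >= 0.1:
    strength = "small"
else:
    strength = "negligible"

print(f"r = {r:.2f} (p = {p:.3f}), {strength} correlation")
```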