
Last month, I had the privilege of presenting at EdMedia 2024 in Barcelona, where I shared two distinct but complementary pieces of research that speak to the evolving landscape of educational technology and literacy instruction. The conference provided a fascinating backdrop for examining both the current state of evidence-based practice and the emerging challenges we face as artificial intelligence reshapes educational possibilities.
Two Studies, One Vision: Evidence-Based Educational Technology
I presented a paper session on our Lexercise study (full text preprint), which examined how technology-enhanced reading interventions can support struggling readers in grades 2-6. This research demonstrated a clear correlation between program usage and decoding mastery: students who used the hybrid intervention program for 25 or more hours showed significantly better outcomes than those with less usage. The study also revealed three distinct learner profiles—Early Learners, Consistent Progressors, and Complex Learners—each requiring different types of support and intervention strategies.
Our poster presentation focused on our study of the 95 Phonics Core Program (95 PCP), a comprehensive evaluation of technology-enhanced Tier 1 phonics instruction in fourth and fifth grades. The quasi-experimental study showed that students using 95 PCP significantly outperformed their peers on both formative assessments (Istation) and summative state assessments (STAAR), particularly in the comprehension and fluency domains. What struck me most was how the program addressed a critical gap in teacher preparation—providing systematic support for educators who may lack deep training in advanced phonics and morphology instruction.
Both studies underscore a fundamental principle that has guided much of my work: technology’s value lies not in replacing human instruction but in enhancing it systematically and sustainably. The structured approach of both programs allowed teachers to deliver high-quality, evidence-based instruction while maintaining the human connection essential for effective learning.
The AI Paradox: Promise and Peril in Educational Research
The conversations at EdMedia revealed a striking paradox in our field. While we celebrate the transformative potential of artificial intelligence in education, particularly in writing tools and personalized learning, we’re simultaneously grappling with a research crisis that threatens our ability to understand and harness these innovations responsibly.
Multiple presenters highlighted the concerning lack of high-quality research on AI in K-12 and higher education settings. The research that does exist often suffers from methodological limitations that make it unsuitable for meta-analyses or systematic reviews. This creates a dangerous cycle: without robust evidence, we struggle to make informed decisions about AI implementation, yet the rapid pace of AI development makes traditional research approaches feel inadequate.
The challenge is particularly acute in writing instruction, where AI tools are being adopted faster than we can study their effects. Teachers and administrators are making decisions about AI writing tools based on limited evidence, often relying on vendor claims rather than rigorous, independent evaluations. This echoes familiar patterns in educational technology adoption, where enthusiasm outpaces evidence.

The Innovation-Implementation Gap
Perhaps the most thought-provoking discussions led me to what I’m calling the “innovation-implementation gap.” AI technologies offer unprecedented opportunities for the transformative redesign of educational experiences—personalized learning pathways, real-time feedback systems, and adaptive content delivery that responds to individual student needs in ways we’ve never before imagined.
Yet our educational systems, with their established structures, policies, and cultures, cannot adapt as quickly as AI capabilities evolve. Schools operate within frameworks designed for stability and consistency, while AI demands experimentation and rapid iteration. This mismatch creates tension between what’s technically possible and what’s practically achievable.
The gap extends beyond technical challenges to fundamental questions about pedagogy and purpose. If AI can provide instant feedback on student writing, how can we maximize educators’ role in the teaching of writing? If algorithms can identify learning gaps in real time, how do we preserve the human judgment necessary for educational decision-making over the long term? These questions require not just technical solutions but thoughtful consideration of educational values and goals.

Bridging Research and Practice
The contrast between my research presentations and the AI discussions illuminated an important truth: we need both rigorous evaluation of current technologies and innovative approaches to studying emerging ones. The structured, evidence-based approach that proved effective for reading interventions can inform how we evaluate AI tools, even as we develop new methodologies suited to AI’s unique characteristics.
The learner profiles we identified in the Lexercise study—Early Learners, Consistent Progressors, and Complex Learners—remind us that technology’s impact varies significantly across different student populations. Consistent Progressors, for example, were the students who used the program four days a week during most weeks of the school year. By understanding the characteristics of each profile, educators can better tailor communication as part of relationship building and differentiate support. This insight becomes even more crucial as we consider AI implementations, where one-size-fits-all approaches are unlikely to succeed.
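To make the usage pattern behind the Consistent Progressor profile concrete, here is a minimal, hypothetical Python sketch of how weekly usage logs might be bucketed into the three profiles. The thresholds, the `classify_usage` helper, and the rules for the other two profiles are illustrative assumptions only; the study’s actual profiles were derived from richer usage and progress data than this toy example models.

```python
from dataclasses import dataclass

# Illustrative thresholds drawn from the description of Consistent
# Progressors ("4 days a week and most weeks during the school year");
# the study's actual profile definitions also drew on progress data
# that this sketch does not model.
DAYS_PER_WEEK_THRESHOLD = 4
WEEKLY_COVERAGE_THRESHOLD = 0.75  # assumption for "most weeks"

@dataclass
class UsageRecord:
    student_id: str
    days_per_week: list[int]  # days of program use logged for each school week

def classify_usage(record: UsageRecord, total_weeks: int) -> str:
    """Bucket one student's usage log into a rough profile label."""
    active = [d for d in record.days_per_week if d > 0]
    coverage = len(active) / total_weeks
    avg_days = sum(active) / len(active) if active else 0.0

    if coverage >= WEEKLY_COVERAGE_THRESHOLD and avg_days >= DAYS_PER_WEEK_THRESHOLD:
        return "Consistent Progressor"
    if coverage < 0.25:
        # Placeholder rule: the post does not define Early Learners by usage.
        return "Early Learner"
    return "Complex Learner"  # fallback bucket in this toy sketch

# Example: a student active 4-5 days per week in 30 of 36 school weeks
example = UsageRecord("s01", [4, 5, 4, 0, 4] * 6 + [4] * 6)
print(classify_usage(example, total_weeks=36))  # -> Consistent Progressor
```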
Looking Forward
Barcelona provided a valuable reminder that educational technology research must balance innovation with rigor. While AI offers exciting possibilities for transforming education, we cannot abandon the careful, systematic approach that has proven effective in evaluating educational interventions. Instead, we need to refine our definition of high-quality evaluation and research to accommodate AI’s rapid evolution while maintaining the standards of evidence that protect students and guide sound educational practice.
The path forward requires collaboration between technologists, educators, and researchers. We need studies that move beyond simple pilot programs to examine long-term impacts across diverse student populations. We need research methodologies that can keep pace with technological change without sacrificing rigor. Most importantly, we need continued commitment to evidence-based practice, even as we explore the transformative potential of artificial intelligence.
The conversations in Barcelona reminded me why this work matters: every educational technology decision affects real students in real classrooms. Whether we’re implementing structured reading interventions or exploring AI-powered writing tools, our responsibility remains the same—to ensure that innovation serves learning and that evidence guides implementation.
As I returned from Barcelona, I carried with me both the satisfaction of sharing solid research findings and the challenge of addressing the research gaps that could shape the future of educational technology. The work continues, with renewed urgency and purpose.