
Webinar Reflection – Edtech Insiders’ The Future of AI Tutoring: Building What Actually Works

Co-authored by Talia Patt, Ed.M., and Rachel Schechter, Ph.D.


As AI tutoring continues to gain traction, the conversation has started to shift from what’s possible to what actually works for learning. That’s a meaningful transition — and it’s one worth taking seriously.

In a recent webinar on the future of AI tutoring hosted by EdTech Insiders, researchers and product leaders highlighted a key shift: effective AI tutoring isn’t just about powerful models — it’s about intentional design, strong pedagogy, and thoughtful implementation. Speakers included James Donovan (Head of Learning & Cognitive Outcomes Research at OpenAI), Irina Jurenka (Research Director, Learning, at Google DeepMind), Kristen DiCerbo (Chief Learning Officer at Khan Academy), and founders from Kira and SuperNova. The field is maturing, and with that maturity comes both new clarity and new responsibility.

Start With What Students Need, Not What Technology Can Do

A consistent theme across all the webinar speakers was the importance of grounding AI teaching tools in learning science — from the very beginning of the design process. Rather than asking what AI is capable of, teams are increasingly asking what students need to learn and how AI can support them in that process.

This shift in framing matters more than it might seem. It requires starting with a theory of action: if students have access to high-quality tutoring interactions, they should become more cognitively engaged, which in turn should lead to stronger learning outcomes. That logic chain needs to be explicit, testable, and central to the design — not an afterthought.

The core design question isn’t “what can AI do?” — it’s “what do students need, and how can AI reliably deliver it?” The difference in framing shapes everything that follows.

Effective Tutoring Lives or Dies in the Details

One of the clearest takeaways from the conversation was that effective tutoring depends on context — and, within each context, getting the details right. Tutoring is made up of micro-interactions, each of which has the potential to help or harm learning. Delays in response, incorrect answers, or poorly calibrated feedback can lead to distraction and disengagement before a student has had a chance to do any real thinking.

Strong AI tutoring tools, then, have to be designed with the student experience at the center: tailored to specific learning contexts and use cases, and genuinely capable of meeting students where they are — not where the tool assumes they should be. That requires both technical rigor and deep knowledge of how learners actually behave in educational settings.

The Problem Isn’t Capability — It’s Engagement

Current data points to a troubling gap. In many implementations, students are passively interacting with AI tools rather than engaging in the kind of active thinking that actually drives learning. Clicking through prompts, receiving answers without wrestling with them, consuming without producing: these patterns are easy to build into a system and hard to design out of one.

This highlights the central challenge for the field: how do we ensure AI tools promote active engagement, not passive consumption? The distinction isn’t a minor UX detail. Passive engagement can look like learning from the outside while producing little of lasting value for the student. It can inflate usage metrics while understating the depth of cognitive work happening — or not happening.

Passive interaction is consumption without cognition: students receive responses and move on. High session counts, low retention. The tool does the thinking; the student watches.

Active engagement is interaction that demands thinking: students are prompted to reason, retrieve, and apply. The tool scaffolds the process without completing it. Learning actually happens.

Closing this gap requires design teams to treat learning science not as a validation step at the end of development, but as the foundation from which every product decision flows.

Watch the full webinar

The EdTech Insiders panel — featuring researchers from OpenAI, Google DeepMind, and Khan Academy — goes deeper on what current evidence actually shows about AI tutoring effectiveness and how to design for active engagement. Watch "The Future of AI Tutoring: Building What Actually Works" on YouTube.

The Best AI Tutors Learn From Humans — They Don’t Replace Them

Another key takeaway from the conversation was that effective AI tutors should not try to replicate human instructors wholesale — but neither should they ignore them. The most promising approaches study effective human tutoring interactions, start from human-vetted instructional materials, and involve teachers meaningfully in the creation process.

This is a different framing than the “AI as substitute teacher” narrative that still circulates in parts of the edtech industry. When AI is positioned as an extension of effective teaching practice rather than a replacement for it, the design space opens up considerably. The most valuable tools find ways to combine both, rather than pit them against each other. Two programs in the ELA space illustrate what that looks like in practice.

Case Study: Once (Early Reading)

Once pairs school support staff with kindergarten and first-grade students for daily one-on-one tutoring grounded in the Science of Reading. AI functions not as the tutor but as a learning layer on top: the platform reviews session recordings, generates highlight reels for coaches, and surfaces patterns to sharpen instructor quality over time. The AI learns from the human tutors. LXD Research’s 2024–25 quasi-experimental study found an effect size of 0.46 SD for kindergarteners below the 50th percentile nationally who received 80 or more sessions — a statistically significant result that compares favorably to other high-quality tutoring programs.

Case Study: Coursemojo (ELA)

Coursemojo’s AI teaching assistant, Mojo, is designed to integrate into existing classroom routines rather than replace them — supporting individual students, small-group facilitation, and writing feedback while the teacher leads instruction. Founded by two former teachers-turned-principals, the product was built on the premise that AI should handle the support work so teachers can focus on the irreplaceable parts of their role. Mojo has demonstrated ESSA Tier II impact in Aldine, TX and Sumner County, and shows particularly strong gains for emerging bilingual students.

The most promising AI tutoring approaches are not all-AI or all-human — they are a thoughtful combination of both. The question is not whether to involve teachers, but how to design that involvement in a way that actually improves outcomes.

We’re Still Early — and That’s Worth Taking Seriously

Across the entire webinar conversation, there was a shared recognition that the field is still in its early stages. The promise is real. But so is the need for continued iteration, testing, and honest learning over time. There are no mature playbooks here yet — and the decisions made now about how to design and evaluate these tools will shape the trajectory of the field for years.

That’s worth taking seriously rather than papering over with optimism. Being early creates genuine opportunity: the chance to build AI tutoring tools the right way, grounded in pedagogy, informed by teachers, and designed for meaningful student engagement. But early-stage fields are also susceptible to the pressure to scale before there’s sufficient evidence — to let usage metrics substitute for learning outcomes.

The future of AI tutoring is not about replacing teachers or automating learning. It's about designing tools that support how learning actually happens — and being willing to do the hard work of finding out whether they do. When built intentionally, AI tutoring has the potential to engage students in deeper thinking, promote personalized learning experiences, and strengthen, rather than substitute for, the role of teachers.

The most effective AI tutoring solutions will be the ones that start with learning and with humans, and build technology in service of both.


Does Your EdTech Program Have the Evidence to Back Its Claims?

LXD Research specializes in designing and conducting research studies on edtech programs, with a focus on providing meaningful evidence that helps educators understand how programs work and under what conditions. Visit lxdresearch.com to learn more about our work or to access our research on educator decision-making.
