
85% of EdTech Products Have No Research Evidence. How Can We Change That?


Evidence for edtech products exists — but it lives in a narrow, well-funded sliver of the market. Across the 2,135 products indexed by ISTE, only 15% hold any evidence-based certification. Narrow the lens to the ~100 most-used tools in U.S. K–12 schools, tracked annually by Instructure’s LearnPlatform, and that figure rises to 40%. Evidence exists where the money and the attention are concentrated; outside that sliver, documentation is the exception.

And even inside the sliver, the picture is fragile. Three years of LearnPlatform data show modest progress, a 2025 spike driven by subsidized research, and a 2026 list that churned — new products displaced old ones, and the new entrants hadn’t done the research.

Finding 1 · Market contrast (2026)
Evidence exists in a small commercial sliver — not across the broader market.
[Chart] Top 100 most-used tools (LearnPlatform 2026, ~4% of the full market): 40% have ESSA evidence · Full market (ISTE EdTech Index 2026, 2,135 products): 15% have any certification
The LearnPlatform figure counts products with any of the four ESSA evidence tiers. The ISTE Index figure counts products with any formal evidence-based certification across multiple frameworks. Sources: LearnPlatform 2026, LXD analysis of ISTE EdTech Index.
Finding 2 · Three-year trend
The 2025 spike didn’t hold — not because certifications expired, but because the list churned and new products without research took their place.
[Chart] Share of the top ~100 tools with any ESSA evidence, by report: 36% (2024 report, fall 2023 data) · 49% (2025 report, fall 2024 data) · 40% (2026 report, fall 2025 data)
Tier legend: Level I (RCT) · Level II (QED) · Level III (Correlational) · Level IV (Logic Model)
Labeled by publication year; each report draws on fall data from the preceding year. Figures are the share of the top ~100 edtech tools in each report. The 2026 denominator also includes newly categorized SIS, safety, and parent communication platforms. Sources: LearnPlatform 2024, 2025, 2026.
The Bottom Line

  1. 85% of indexed edtech products have no formal evidence of any kind. Only 15% of the 2,135 products in the ISTE EdTech Index hold any evidence-based certification.
  2. Incentives work. The Jacobs Foundation’s LEIF program helped drive Level IV evidence from 9% to 22% in a single year.
  3. The 2025 spike didn’t stick, but not the way it looks. Certifications don’t expire — the list churns. New products entered the top 100 in 2026 without research, diluting the overall evidence rate back to 40%.
  4. The bar is lower than both buyers and founders assume. Level IV is documentation, not a study. Listing and application fees for the major public directories range from free (ISTE Index) to $750 (Digital Promise).

What the Reports Actually Measure

The starting point for each report is a list of the 100 to 150 most-accessed tools on the LearnPlatform browser integration during the fall. That selection methodology is worth sitting with. The average K–12 district uses around 2,500 to 3,000 digital tools in a given year. These reports examine roughly the top 4% — and only the tools that surface during a window when schools are running diagnostic benchmarks, onboarding new platforms, and administering beginning-of-year screeners. Fall is assessment season, and the lists reflect that.

More importantly, “most accessed” measures touchpoints, not instructional engagement. A student who logs into i-Ready three or more times over multiple days to complete a mandatory fall diagnostic — as every student in a district typically does — generates a larger usage signal than a student who uses a carefully chosen instructional tool twice a week all year. That diagnostic session is a measurement event, not a learning interaction, and the methodology cannot tell the difference. The practical consequence is that high-frequency mandatory assessments structurally crowd out instructional tools in the rankings — which means the list may be telling us more about district compliance requirements than about what educators are choosing for learning.
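To make the crowding-out arithmetic concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption invented for illustration (district size, usage rates, window length), not data from the reports:

```python
# Hypothetical illustration of why "most accessed" favors mandatory
# diagnostics. All numbers are invented assumptions, not report data.

students = 10_000        # assumed district enrollment
fall_weeks = 6           # assumed length of the fall measurement window

# Mandatory fall diagnostic: every student logs in ~3 times, once a year.
diagnostic_fall = students * 3

# Instructional tool chosen by teachers: 20% of students, twice a week.
instructional_fall = int(students * 0.20) * 2 * fall_weeks

# The same instructional tool across a 36-week school year: usage the
# fall-window ranking never sees.
instructional_year = int(students * 0.20) * 2 * 36

print(f"Diagnostic, fall window:      {diagnostic_fall:>7,}")    #  30,000
print(f"Instructional, fall window:   {instructional_fall:>7,}") #  24,000
print(f"Instructional, full year:     {instructional_year:>7,}") # 144,000
```

Under these assumptions, the diagnostic outranks the instructional tool in the fall window even though the instructional tool generates several times more interactions over the full year.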

This points to a more fundamental design question: evidence standards should follow outcome categories, not a single unified ranking. ISTE’s own Seal program has already grappled with this — the redesigned Seal evaluates products across four distinct categories: curriculum, assessment tools, learning platforms, and professional development solutions. In a formal analysis of interim assessment products, ISTE concluded that they “don’t easily lend themselves to the current Seal framework” because evaluating them requires psychometric rigor, curricular alignment, and longitudinal data — not the same criteria as instructional tools.

The Instructure report would benefit from the same logic. Assessment tools like i-Ready, FastBridge, and MasteryConnect are valuable, but the right evidence question for them is whether they improve MTSS decision-making, predict student outcomes accurately, or drive actionable instructional responses — not whether they produce ESSA learning outcome research. A report that separated instructional tools, assessment tools, and operational platforms — and asked the right evidence question of each — would be far more meaningful than a single ranked list that mixes all three. The access-count problem compounds the category problem: a student who logged into Sora — a public library checkout app — to browse ebooks looks the same in the data as one who completed a structured reading assignment. Working with dozens of schools at any given time, we see this constantly: access events and instructional interactions are not the same thing, and this methodology cannot distinguish between them.

This isn’t a hypothetical redesign. The first Instructure report in this series — the 2024 edition, drawing on fall 2023 data — took exactly this approach. Its top 40 was organized across six solution categories (LMS, Courseware, Supplemental, Assessment, Study Tools, and Sites & Resources), with the top 10 surfaced in each. That structure acknowledged what the current unified ranking obscures: these products are doing fundamentally different jobs, and the right evidence question depends on the category. Subsequent editions collapsed those distinctions, and the interpretive cost of that choice has grown each year.

The 2026 list makes the heterogeneity problem more visible. To its credit, Instructure introduced a separate “consumer technology” category for tools like ChatGPT, Netflix, Reddit, and Roblox — a sensible distinction. But the consumer list also includes Google Docs, Zoom, Wikipedia, and Spotify, while the edtech list absorbed a new cluster of student information systems, safety monitoring tools, and parent communication platforms: Infinite Campus, PowerSchool, Skyward, Frontline Education, GoGuardian Teacher, Securlypass, SmartPass, ParentSquare, and Smore. These products serve real functions in schools and could in principle have meaningful evidence — a hall pass app showing reduced hallway disruptions, or a parent communication platform showing increased family engagement, would be legitimate research. The problem is evaluating them against an ESSA framework designed for learning outcomes. Their presence in that denominator depresses the evidence percentages in a way that says nothing useful about the companies actually building curriculum and intervention products.

Three Years of Numbers

The trajectory shown in the chart above — 36% in the 2024 report, 49% in 2025, 40% in 2026 — tells a particular story. The headline number moved, but it moved unevenly and then diluted. What drove most of that movement was a single tier.

Level I (the randomized controlled trial standard) barely registered across all three reports, hovering between 2% and 5%. Level II (quasi-experimental) was equally flat at 4–5%. Level III (correlational) actually declined slightly, from 18% to 17% to 14%. The action was almost entirely at Level IV — the logic model tier — which jumped from 9% to 22% between the 2024 and 2025 reports before settling back to 19% in 2026. That single segment accounts for nearly all of the movement in the “any evidence” total.

The 2025 Spike

Level IV more than doubled in one year — from 9% to 22% — then partially diluted as new products joined the list without research. That didn’t happen organically.

The 2025 Spike, and What Happened Next

The jump in Level IV — from 9% in 2024 to 22% in 2025 — is the most striking number in this sequence. It did not happen organically. The Jacobs Foundation’s LEIF program funded LeanLab, WestEd, and WiKIT (now the International Centre for EdTech Impact) to provide ESSA evidence services to edtech companies on a 50% matching-funds basis. The Foundation also commissioned Instructure/LearnPlatform to conduct a baseline evidence assessment of its portfolio, which found that only 21% of K–12 companies had conducted an independent study — a number LEIF was explicitly designed to move. Separately, AWS directly sponsored a batch of ESSA Tier IV logic models that LearnPlatform produced in time for the SXSW EDU exhibit hall in early 2024, paying for the documentation itself rather than just the visibility around it.

The 2025 report captured the industry during and just after that window. The Level IV spike reflects that climate. But the 2026 drop back to 40% is worth reading carefully, because it doesn’t mean companies lost certifications. Evidence credentials don’t expire. What happened is that the list itself churned: products that had been certified during the subsidy window weren’t necessarily the same products on the 2026 list. New entrants — including the SIS, safety monitoring, and parent communication platforms newly categorized into the edtech set — hadn’t done the research. The evidence rate didn’t retreat so much as it got diluted by newcomers.
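A toy calculation shows how churn alone can produce the 49-to-40 move: no certification is lost, the list’s membership simply changes. The specific churn counts below are invented for illustration, not taken from the reports:

```python
# Illustrative churn arithmetic: hypothetical counts, not the actual
# composition of the 2025 and 2026 lists.

list_size = 100
evidence_2025 = 49           # 49% of the 2025 list held some ESSA tier

churned_with_evidence = 15   # assume 15 certified products rotate off
new_with_evidence = 6        # assume only 6 newcomers arrive with research

evidence_2026 = evidence_2025 - churned_with_evidence + new_with_evidence
print(f"2026 evidence rate: {evidence_2026 / list_size:.0%}")  # 40%
```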

The lesson is not that 2025 was inflated. It is that evidence is a per-product investment — every new tool entering the market starts at zero — and incentive structures move the needle when they’re in place.

InnovateEDU stepped in as Instructure’s nonprofit partner for the 2026 edition after the Jacobs Foundation shifted its focus — a meaningful signal that institutional commitment to this work continues. InnovateEDU CEO Erin Mote is one of the genuine leaders in this space, and her case for outcomes-based contracting — where districts tie payment to demonstrated student results rather than purchasing on faith — points toward a more durable solution than grants alone. If the purchasing relationship itself demands evidence, companies respond without waiting for a subsidy.

The Evidence Ladder Is Meant to Be Climbed

There is a common misconception that “doing research” means commissioning a randomized controlled trial. It does not — at least not to start.

Level IV asks for a logic model: a clear articulation of what problem the product solves, for whom, how it works, and what outcomes to expect. This is documentation, not a study. Level III asks for correlational evidence — a credible study showing that students who use the product do better, without a control group or random assignment. That is a reasonable minimum for any product that has been in schools long enough to generate data. Level II — where the most-used tools in the market should be aiming — is a quasi-experimental design comparing outcomes between students who used the product and a matched comparison group. It does not require a randomized trial. It is achievable with existing data and the kinds of district partnerships that established products already have.
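For readers who want to see the core of that comparison, here is a minimal sketch of a Level II-style matched analysis: tool users paired with non-users on a prior score, then compared on an end-of-year outcome. The data, the effect size, and the single matching covariate are all invented assumptions; a real Tier II study would need richer covariates, baseline-equivalence checks, and an independent evaluator:

```python
# Minimal sketch of a quasi-experimental matched comparison. Hypothetical
# data throughout; a real ESSA Tier II study needs far more rigor.
import random
from statistics import mean

random.seed(1)

def student(used_tool):
    prior = random.gauss(500, 50)                # fall benchmark score
    gain = random.gauss(30, 15) + (8 if used_tool else 0)  # assumed +8 effect
    return {"used_tool": used_tool, "prior": prior, "post": prior + gain}

roster = [student(used_tool=(i % 4 == 0)) for i in range(2000)]
treated = [s for s in roster if s["used_tool"]]
pool = [s for s in roster if not s["used_tool"]]

# Pair each tool user with the comparison student closest on prior score,
# then average the outcome differences across pairs.
diffs = []
for t in treated:
    match = min(pool, key=lambda c: abs(c["prior"] - t["prior"]))
    diffs.append(t["post"] - match["post"])

print(f"Matched-pair estimate of the tool's effect: {mean(diffs):+.1f} points")
```

The point of the sketch is that the raw ingredients are existing district data, not a new experiment; nearest-neighbor matching on one covariate is simply the most stripped-down version of the design.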

This mismatch between perception and requirement runs in both directions. District buyers often assume “evidence-based” implies more rigor than a logic model actually requires — or assume rigorous evidence is out of reach for the products on their shortlists and don’t think to ask. A Tier IV claim is meaningful, but it is meaningfully different from a Tier II one, and districts have every right to know which they’re getting.

And here is the harder pattern the three reports reveal: the companies that reached Level IV haven’t moved up. The same commercially successful tools show up in the top rankings year after year. They have the data. They have the district relationships. They have the implementation history. What they don’t have, three years later, is a study — and it shows in the numbers. Level III held roughly flat (18% to 14%) across the three reports. Level II barely moved (4% to 5%). Level I actually collapsed, from 5% to 2%. The ladder isn’t being climbed, even by the products best positioned to climb it.

Not every product needs a randomized controlled trial. But there should be far more companies at Level II than there currently are — especially among the most-used tools, which have the implementation history, the data, and the district relationships to run a credible quasi-experimental design. A QED is achievable, defensible, and already within reach for most products with three years of real-world use. That trajectory isn’t happening on its own. Read our practical breakdown of what each ESSA tier actually requires →

The Broader Picture

The Instructure reports capture the most commercially successful slice of the market. To understand what the full market looks like, LXD Research analyzed the ISTE EdTech Index in April 2026 across its complete catalog of 2,135 products.

15% · any evidence-based certification (327 of 2,135 products)
3% · inclusive design certification (58 of 2,135 products)
85% · no formal evidence of any kind (the silent majority)

Only 58 products (under 3%) hold a certification in co-design, design for learning, or learner variability — inclusive-design standards that are generally less expensive to pursue than empirical research, yet rarer. For roughly 85% of indexed products, there is no formal evidence documentation of any kind.

The Instructure top-100 figures, modest as they are, describe a privileged corner of the market. Every product on that list has achieved real commercial traction. And yet a refrain runs through sales conversations with districts: rigorous evidence is too expensive to produce. Educators, without context to evaluate that claim, tend to accept it. They shouldn’t. The cost of being publicly recognized for whatever evidence a product already has is often less than a single conference booth.

What It Costs
Application and listing fees for the major public evidence directories
Directory listing: free
Application fee: $250
Certification fee: $650–750

The research work itself adds to those figures, but a Tier IV logic model is within reach of any product team that can articulate how their tool is supposed to work. The more honest barrier is usually expertise rather than money — and that is a solvable problem with the right research partners.

What Happens Next Is a Choice

Three years of data is enough to see a pattern: modest progress, incentive-driven spikes, and persistent gaps. That is not cause for despair. It is cause for more deliberate effort from every side of the market — the companies making claims, the funders shaping incentives, and the districts that ultimately decide which tools reach students. The students using these tools every day deserve to know whether they work.

For District Leaders
Use Evidence as Your First Filter

You can’t take every sales meeting, and you shouldn’t. Ask every vendor where they sit on the ESSA tier ladder and where they’re listed — the ISTE EdTech Index, EduEvidence, or Digital Promise. Companies with real evidence answer in a sentence; companies without it tend to caveat, improvise, or move on. The question sharpens your shortlist and protects your calendar — and it raises the bar for the industry at the same time.

For Edtech Companies
Start With a Logic Model

Then partner on a correlational study, and commission an independent evaluation before your next funding round. The districts buying your products are increasingly being asked to justify those purchases to boards, parents, and state agencies. Help them do that — the companies that get there first will have a durable competitive advantage.

For Investors & Foundations
Targeted Funding Moves the Needle

The 2025 data proved it. But the 2026 drop also reveals something subtler: evidence is a per-product investment, not a one-time industry achievement. Every new entrant starts at zero. Companies with genuine impact often can’t afford rigorous research; others can, but haven’t been given a reason to prioritize it. Research grants tied to evidence milestones, subsidized research partnerships, and recognition programs all have demonstrated track records. This is ongoing work, not a program with a finish line.


Where Does Your Product Sit on the Evidence Ladder?

LXD Research helps edtech companies build the evidence infrastructure that districts, investors, and funders require — starting with a logic model and growing from there. We offer a free consultation to help you assess the most practical next step for your product and stage.

Schedule a Free Consultation · View Our 2026 Services