During more than seven years leading investments across education innovation, ed tech, and R&D at a large national philanthropy, I reviewed hundreds of proposals, tracked dozens of pilots, and championed promising ideas as they moved (or failed to move) from concept to classroom. Across my work, the same pattern kept emerging.
Ideas were ambitious and backed by strong teams, compelling research, demonstrated demand, and promising early pilots. But specific, observable changes in teaching and learning were harder to find than they should have been. The gap between a compelling innovation narrative and a clear answer to the question "What will be different for students and teachers if this works?" was often where the work either held up or fell apart.
The projects that showed the most promise shared something specific: a set of disciplines that kept the work honest as it moved from idea to implementation and beyond. Those that struggled skipped these disciplines, because the pressures of fundraising, product development, market differentiation, and scale all pulled developers away from answering hard questions. The work required to translate an idea into classroom reality is deeper, slower, and more iterative than funding cycles and product roadmaps account for.
Drawing on the patterns I observed, here are five lessons I am taking with me from my years in philanthropy, and ones that I hope to continue to explore alongside the educators, researchers, and builders doing this hard work.
Lesson 1: Outcomes Discipline Drives Design
Too many innovation efforts begin with what a product or intervention will do, rather than what will change for learners. I consistently saw teams leading with market opportunity, differentiation, features, or demand signals. But few were able to answer with specificity what would actually change in a classroom as a result.
Filling a market gap is a reasonable starting point, but it is only a first step toward improving K-12 education. The most promising efforts I saw began with real outcomes: concrete, observable shifts in student understanding, teacher decision-making, or instructional practice, and built backward from there.
This discipline does two things. First, it drastically improves design. When outcomes are explicit, trade-offs become visible and teams can prioritize what drives learning, rather than what demonstrates well. Second, it forces alignment. Funders, developers, researchers, and educators can organize around shared definitions of success, rather than working toward different versions of what each believes success should look like. When outcomes are specific up front, measurement becomes part of design rather than an afterthought. And when measurement is built in from the start, the feedback loop between what you’re building and whether it’s working gets much shorter.
In a field under pressure to improve quickly, and in a time of significant technological acceleration, clarity around real outcomes must be the anchor. That clarity is what makes growth legible to funders, to partners, and to the team itself. Importantly, the teams that can answer the outcomes question precisely tend to be the ones ready to grow.
Lesson 2: Friction Is Information
Education systems are slow. Procurement cycles are long, student outcome data arrives later than promised, and building trust with educators takes time. You can’t speed up the system, but you can accelerate your learning within it.
Across investment portfolios, I watched promising efforts discover critical misalignments late, sometimes only after a full product was built, or worse, at the end of an entire pilot year. These failures were avoidable, not by speeding up the system, but by building more honest learning loops and iteration cycles within it.
This requires a shift in mentality for the entire field. Early friction, misalignment, and moments where something simply doesn’t work are not setbacks to manage or hide. They are actually some of the most valuable data that a team can collect and make actionable. It’s only a failure if you find out at the end of the school year.
Moreover, the entire field benefits from innovators treating misalignment not as failure, but as key data points, and sharing that learning openly rather than quietly absorbing it. A culture that rewards polished pilots over honest iteration is one that learns too slowly to meaningfully improve. The teams that learn fastest are rarely the ones with the most resources. They are the ones that decided early that friction was information.
Lesson 3: The Classroom Is Not A Neutral Environment
Innovation in K-12 does not land in a vacuum. It lands in a classroom: a dense, demanding environment with existing rhythms, constraints, and competing pressures. Time is scarce, cognitive load is heavy, and educators are managing dozens of priorities simultaneously. New tools and innovations enter a context already full of initiatives, mandates, expectations, and competing demands.
Given this context, tools often fail not because teachers resist them, but because the conditions of real teaching and learning were never fully accounted for during design. Usability issues that may seem minor in a demo or very small pilot become insurmountable in a real classroom.
The innovators and teams I saw make progress treated these constraints not as obstacles, but as their design context. They built genuine partnerships with educators as collaborators with real influence over how tools were shaped and refined. They brought users into the process early and kept going back. They understood that teacher expertise is essential technical knowledge about the environment their product has to survive in.
This kind of partnership is hard to build because it requires a willingness to slow down and to let classroom realities complicate a roadmap. But it is one of the most reliable ways to build something that actually works and that teachers will actually use.
Lesson 4: The Median Matters More Than The Peak
Education has a “bright spots” problem.
When an innovation works in one place, or a program produces striking gains in another, our instinct is to celebrate, document, and scale. This pattern shows up across education reform.
When national reading scores disappoint, our focus turns to the two states that bucked the trend. We study the peak, extract lessons to implement elsewhere, and set aside the question of what was happening in the other forty-eight states.
These bright spots may make for compelling case studies or persuasive pitch decks and fundable narratives. But bright spots are actually statistical outliers. In any rigorous analysis, outliers are handled carefully. Researchers examine them by trying to understand what drove the exceptional result, and then asking the harder question: does the effect hold across conditions? Researchers don’t build conclusions on outliers. They test whether findings replicate across settings, populations, and contexts. They look at distributions, not peaks, and they want to understand what is happening at the median, not just at the top of the range.
Education innovation deserves the same discipline. Bright spots aren't meaningless; they are just statistically insufficient. Scaling requires understanding average conditions, not exceptional ones.
The projects I saw that produced long-term evidence of impact were the ones willing to study the full range of ordinary conditions under which a tool or program was actually used. Not the champion teacher in the innovative school, but average teachers managing competing priorities, inconsistent access, and varying levels of support. Not the ideal dosage, but the dosage that got delivered in classrooms.
If an approach only works under exceptional conditions, that is not a finding to scale; it is a finding to interrogate. The question worth asking of every promising bright spot is not "how do we replicate this?" It is "what happens when we remove the exceptional conditions that made this possible?"
Effects that hold across settings, persist over time, and generalize beyond early adopters are the ones that are worthy of replication.
Lesson 5: No Single Person Carries Innovation Alone
K-12 is distinct from other sectors. Ed tech entrepreneurs often assume that if the product is strong enough, the market will move. In education, it rarely works that way because adoption is relational and reputational. Therefore, tools tend to move when trusted actors validate them, contextualize them, and help schools and systems integrate them into existing practice.
Across my portfolios, a pattern emerged. The efforts that gained real traction were the ones embedded in networks. They were connected to intermediaries that aggregated demand, to trusted research partners that translated evidence into practice, and to field-builders that understood states and districts. These organizations created the infrastructure and the conditions for innovation to succeed.
This ecosystem layer is essential, and it is often undervalued relative to the innovations it moves. Sustaining organizations that build trust, translate evidence, and connect developers to districts is not just background infrastructure. It is critical to improving outcomes for students and teachers.
For developers, participating in shared R&D efforts, aligning with established networks, contributing to common standards, and allowing partners to shape design are not compromises on autonomy. They are accelerants to achieving outcomes.
As exciting as technological advances are, the next chapter of education innovation will not be written by lone breakthroughs. It will be built by the networks that make those breakthroughs possible, and by a field willing to do the hard work of creating the conditions for impact.
