Over the past few years, artificial intelligence has moved from the margins of educational technology to being deeply embedded in it. Large language models have enabled adaptive tutors, automated feedback, and new ways for educators and learners to interact with content. These advances have already reshaped expectations for what ed tech can deliver.
But the next phase of AI in education is no longer just about generating text or automating routine tasks. It is about systems that can reason through problems, take action over time, and perceive learning signals beyond written input. Together, these frontier AI capabilities are beginning to shift how learning tools are designed, how instruction is supported, and how progress is understood.
Across the ed tech field, a growing number of tools are applying these advanced approaches to move beyond basic chat interfaces or static personalization. The result is a new generation of learning technologies that emphasize sustained thinking, adaptive guidance, and richer interaction between learners, teachers, and systems.
Models That Can Think Through Problems
Traditional large language models (LLMs) are designed to predict the most likely next word in a sequence. While this approach has proven remarkably powerful, it also comes with well-known limitations. Models can produce confident-sounding but incorrect answers, struggle with multi-step reasoning, and lack the structure needed for reliable performance on complex tasks.
Large reasoning models (LRMs) represent a meaningful shift. Rather than jumping directly to an answer, these systems generate intermediate steps, use those steps to catch errors and build more complex chains of thought, and work through problems more deliberately. For education, this matters because learning is rarely about arriving at an answer as quickly as possible. It is about understanding how to get there.
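The difference is easiest to see with a toy example. The sketch below is entirely illustrative: the functions and the worked problem are invented for this post, and a real LRM generates such steps internally rather than from hard-coded arithmetic. It contrasts a direct answer with one that exposes its intermediate steps:

```python
# Illustrative only: a traditional LLM-style "direct answer" vs. a
# reasoning trace with labeled intermediate steps. A real LRM produces
# such steps itself; here they are hard-coded to show why exposing the
# process, not just the result, matters for instruction.

def solve_direct(price: float, quantity: int, discount: float) -> float:
    """Jump straight to the final answer."""
    return price * quantity * (1 - discount)

def solve_with_trace(price: float, quantity: int, discount: float):
    """Return the answer together with each intermediate step."""
    steps = []
    subtotal = price * quantity
    steps.append(f"subtotal: {price} * {quantity} = {subtotal}")
    reduction = subtotal * discount
    steps.append(f"discount: {subtotal} * {discount} = {reduction}")
    total = subtotal - reduction
    steps.append(f"total: {subtotal} - {reduction} = {total}")
    return total, steps

answer, trace = solve_with_trace(4.0, 5, 0.1)
assert answer == solve_direct(4.0, 5, 0.1)  # same answer either way,
for step in trace:                          # but the trace is teachable
    print(step)
```

A tutor built on the direct version can only mark the final answer; one with access to the trace can point at the exact step where a learner's own working diverged.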
In classroom contexts, reasoning-oriented AI opens new possibilities for supporting student thinking. Beyond simply checking whether an answer is correct or incorrect, reasoning LLMs can more reliably be prompted to diagnose when and why a learner’s reasoning went off track (i.e., knowledge tracing), and to reason about higher-level strategies across a larger context, prompting reflection or modeling expert approaches. This is particularly valuable in mathematics, science, and writing, where process and structure are central to mastery.
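A minimal sketch of that kind of step-level diagnosis, assuming the learner's work and a reference solution have already been broken into comparable steps (the function name and the example equations are invented; a reasoning model would operate on free-form, unaligned student work):

```python
# Simplified stand-in for step-level diagnosis: find the first step
# where a learner's working diverges from a reference solution. All
# names and examples here are invented for illustration.

def first_divergence(learner_steps, reference_steps):
    """Return (index, learner_step, reference_step) at the first
    mismatch, or None if the steps agree."""
    for i, (got, want) in enumerate(zip(learner_steps, reference_steps)):
        if got != want:
            return i, got, want
    return None

reference = ["2x + 6 = 14", "2x = 8", "x = 4"]
learner = ["2x + 6 = 14", "2x = 20", "x = 10"]  # added 6 instead of subtracting

assert first_divergence(learner, reference) == (1, "2x = 20", "2x = 8")
```

Knowing that the error entered at the second step, not the last, is what lets feedback target the misconception (here, the wrong inverse operation) rather than the wrong final value.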
For teachers, reasoning models can support instructional planning and feedback in more nuanced ways. Rather than generating generic explanations or practice items, these systems can help design sequences of tasks that build conceptually, anticipate common misconceptions, and adapt explanations based on how students reason rather than how they perform.
At the system level, reasoning-focused AI challenges how assessment and evaluation are designed. Rather than requiring rubrics to be designed by hand, a tedious process, more autonomous reasoning LLMs can help identify rubric weaknesses, probe assumptions, and make it easier to develop formative assessments that are diagnostic and responsive. Still, this autonomy also raises questions about transparency, accountability, and how much of a model’s internal reasoning should be exposed to learners and educators.
The computational cost of reasoning models remains a constraint. The more deliberate “thinking” that LRMs perform requires more time and resources. But as these systems become more efficient, the tradeoff increasingly favors accuracy, explainability, and instructional value over speed alone.
Agentic AI in Learning Environments
Another major shift in frontier AI is the rise of agentic systems: models that can plan, make decisions, and use tools to accomplish goals. Instead of producing a single response, an AI agent can retrieve information, run calculations, verify its work, or sequence multiple actions to complete a task.
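In skeleton form, an agent of this kind pairs a registry of tools with a decision step. In the sketch below, all names are hypothetical, and a scripted plan stands in for the model-driven choice of action so the control flow is visible:

```python
# Skeleton of an agent loop (all names hypothetical): each step picks a
# tool from a registry, runs it, and logs the result. A real agent asks
# a model to choose the next action; here a scripted plan stands in for
# that decision-making.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda q: "area = length * width",  # stand-in for retrieval
    "calculate": lambda expr: str(eval(expr)),    # toy only; never eval untrusted input
    "verify": lambda ans: "ok" if ans == "28" else "retry",
}

def run_agent(plan):
    """Execute a sequence of (tool, argument) actions, logging each one."""
    log = []
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg)
        log.append(f"{tool_name}({arg!r}) -> {result}")
    return log

log = run_agent([
    ("lookup", "area of a rectangle"),
    ("calculate", "7 * 4"),
    ("verify", "28"),
])
```

Even in this toy form, the shape of the risk is visible: each entry in the plan is a decision point where the wrong tool, the wrong argument, or a skipped verification step would derail the whole chain.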
In education, this transforms AI from a reactive assistant into an active participant in the learning process. An agent can support a student through an extended writing assignment by helping outline ideas, checking sources, revising drafts, and reflecting on feedback over time. For teachers, agents can assist with lesson design, assessment creation, content adaptation, and alignment to standards, reducing preparation burden while maintaining instructional intent.
Many emerging ed tech tools are experimenting with agent-based designs that function as persistent learning companions. These agents retain context across sessions, reason through multi-step tasks, and adapt guidance based on learner actions and explanations. Rather than one-off interactions, they emphasize continuity, supporting learners as they revisit concepts, revise their thinking, and build understanding over time.
This persistence is essential for complex learning goals. Extended problem-solving, inquiry-based science, and project-based learning all benefit from tools that track progress, retain prior work, and dynamically adjust support. Agentic AI makes this possible in ways that static systems cannot.
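A minimal sketch of that cross-session continuity, assuming a simple JSON record per learner (the schema and function names are invented for illustration; a production system would use a proper datastore):

```python
# Hypothetical schema for a persistent learning companion: a small JSON
# record per learner, loaded at the start of a session and updated at
# the end, so later sessions resume rather than restart.

import json
import tempfile
from pathlib import Path

def load_state(path: Path) -> dict:
    """Resume a learner's record if one exists; otherwise start fresh."""
    if path.exists():
        return json.loads(path.read_text())
    return {"sessions": 0, "mastered": [], "open_goals": []}

def save_session(path: Path, state: dict, mastered: list, still_open: list) -> dict:
    """Fold one session's outcomes into the record and write it back."""
    state["sessions"] += 1
    state["mastered"].extend(mastered)
    state["open_goals"] = still_open
    path.write_text(json.dumps(state))
    return state

with tempfile.TemporaryDirectory() as workdir:
    store = Path(workdir) / "learner_state.json"
    state = load_state(store)  # first session: fresh record
    save_session(store, state,
                 mastered=["fractions: common denominators"],
                 still_open=["fractions: word problems"])
    resumed = load_state(store)  # a later session picks up here
    assert resumed["sessions"] == 1
```

The design choice that matters is not the storage format but what gets recorded: goals still open and concepts already mastered are what let the next session adjust support instead of repeating it.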
One applied example is ALTER-Math, a multi-agent learning system developed within the Learning Engineering Virtual Institute (LEVI) and built on a “learning by teaching” framework. Rather than positioning AI as a tutor, the platform uses agentic design to create teachable agents that students instruct. A coordinated set of agents, including a teachable agent, mentor agent, engagement agent, and knowledge agent, work together to guide interaction, provide formative feedback, track student engagement, and manage instructional flow over multiple turns. This persistent, multi-step orchestration allows the system to support extended reasoning and reflection rather than isolated question-answer exchanges.
Increased autonomy also introduces new risks. Each additional decision point creates opportunities for error: choosing the wrong tool, misinterpreting an instruction, or failing to complete a chain of actions. These failures can undermine trust or misalign support with learning goals.
Designing agentic systems for classrooms, therefore, requires careful constraints, clear role definition, and rigorous evaluation. The most effective agents are not those that do everything independently, but those that know when to prompt, when to step back, and how to remain aligned with pedagogical intent.
From Reading to Seeing and Hearing: Multimodal AI
Learning rarely happens through text alone. Students speak, draw, gesture, and work through problems in ways that reflect diverse strengths and needs. Advances in multimodal AI are beginning to reflect this reality, enabling systems that can interpret and generate across text, audio, images, and video.
For education, the implications are substantial. Multimodal tools can analyze handwritten math work, provide real-time speech feedback for language learners, or interpret diagrams and visual explanations. These capabilities enable more authentic assessment, capturing how learners think and express ideas, not just what they type.
Multimodality also plays a critical role in accessibility. Learners with reading difficulties, disabilities, or language barriers often benefit from alternative modes of interaction. AI systems that can flexibly move between modalities can support more inclusive learning experiences, offering multiple pathways to engage with content and demonstrate understanding.
For teachers, multimodal AI can surface richer signals of engagement and comprehension. Speech patterns, response timing, and visual work can all provide insight into how students are processing material. When used responsibly, these signals can inform instruction and enable more timely, targeted support.
At the same time, multimodal systems raise new challenges. They are resource-intensive to run, and evaluating accuracy, bias, and fairness becomes more complex when multiple input types are involved. Moderation and safety are also more difficult when systems must interpret images or audio alongside text. Voice recognition often degrades when processing less represented accents, dialects, or languages.
As multimodal AI becomes more prevalent, the field will need shared standards and evaluation frameworks to ensure these tools are reliable, equitable, and aligned with educational goals.
What This Means for the Field
Across reasoning models, agentic systems, and multimodal AI, a common pattern is emerging. The most promising tools are using advanced AI to support sustained thinking, personalization over time, and more natural interaction between learners, educators, and content.
Looking ahead, the impact of frontier AI in education will depend less on any single model and more on the infrastructure and principles guiding its use. Open datasets, shared evaluation methods, and interoperable systems can reduce duplication and lower barriers to entry. Strong alignment with learning science can ensure that technological advances translate into meaningful educational outcomes.
The future of ed tech will not be defined by whether AI is used, but by how thoughtfully it is integrated. Frontier AI offers powerful new capabilities for reasoning, acting, and perceiving. Realizing their potential will require careful design, public-minded infrastructure, and a clear focus on serving learners. If the field gets that balance right, the next generation of AI could deepen learning rather than simply accelerate it.
Interested in learning more? Check out this webinar, “What’s Next: Advanced AI for Ed Tech,” led by Ralph Abboud, Program Scientist at Renaissance Philanthropy.
