In recent years, Generative AI (genAI) has transitioned from a peripheral innovation to a core layer in the architecture of modern learning platforms. Large language models (LLMs) are embedded across tools like Khanmigo, Duolingo Max, and Quizlet Q-Chat. In these tools, LLMs are responding to student queries, offering instant feedback, summarizing content, and holding instructional conversations.
The key question, though, is how these tools can promote genuine learning through engagement, using techniques such as well-designed feedback and framing the AI as a thinking partner.
Many of these features follow a familiar format: the learner types a prompt, the bot responds, and a conversation unfolds in short turns. This chat-based interface has quickly become the default way learners interact with GenAI in educational settings.
The visible interface is only part of the experience. Behind it are layers of design decisions that shape how learning actually happens, choices about what to prompt, how to respond, when to pause, and what to ask next. These decisions are often invisible, but they shape the quality and depth of the learning experience that follows. Much like the base of an iceberg or the roots of a tree, this underlying structure is what holds the experience together.
Now that generative AI exists, tools can be given context and can respond in context, adapt, and enable learning in ways previously unimaginable. The question now is how to design learning experiences that support deeper understanding, invite reflection, and align with how people actually learn.
Consider a simple example. Two students are learning about photosynthesis, using a chat-based AI tool.
The first student begins their lesson on photosynthesis by reading a short AI-generated summary of the process – how plants use sunlight, water, and carbon dioxide to make food. After reading, the chatbot suggests watching a short video that visually demonstrates the steps.
Once the student has worked through the content, the chatbot presents a few multiple-choice questions. After each response, the AI offers quick feedback:
“Correct.” or “Try again.”
The feedback reinforces recall: the learner receives only binary right-or-wrong responses.
The second student is asked to teach an AI agent named Vidhya everything they understand about photosynthesis. As they explain, Vidhya occasionally interrupts with questions:
“Where does the oxygen go after it’s made?” or “What happens if there is no sunlight?”
The student pauses, rephrases, reconsiders. At the end of the session, it is Vidhya who is tested and the student’s performance is measured by how well their agent has learned.
Same topic, same technology, yet the two students encounter entirely different learning experiences. In the first, the student actions are at the level of recognition and recall, selecting answers and receiving binary feedback. In the second, the student is constructing explanations, anticipating questions, reflecting on cause and effect, engaging in deeper cognitive processes that align with analysis and synthesis.
The distinction lies not in the tool itself, but in the pedagogical choices that shape how the tool is designed and developed, how the learner engages with it, and what kind of thinking it enables.
Designing Beneath The Surface
While technologists continue to expand what AI tools can do, making them faster, smarter, and more responsive, it is equally important for learning designers and learning engineers to shape the learning experiences those tools deliver. Their role is to strengthen the pedagogical roots beneath the interface, ensuring that what feels seamless also supports meaningful learning.
This piece explores what becomes possible when generative AI is shaped with pedagogical intent. Rather than treating AI tools as neutral or one-size-fits-all, it looks at how subtle design choices, often invisible to the user, shape the kind of thinking, interaction, and depth that a learning experience invites.
The following three shifts offer a practical way to approach this work. Each one focuses on a specific design move that reimagines how learners interact with AI: not as passive users, but as active, reflective participants. Together, these shifts offer a lightweight framework to help educators, designers, and technologists ask sharper questions:
What kind of learner thinking can be enabled?
What will the learner actually be doing?
How do learning engineers/designers use genAI’s capabilities to support that?
These are the design decisions that determine whether a learning experience stops at the surface or leads to deeper understanding.
1. Designing AI As A Thinking Partner
GenAI chatbots can converse with learners. Instead of casting the chatbot in the role of a tutor, why not lean on this conversational affordance to make it a partner? The chatbot can act as a Socratic partner, posing questions to think about; this builds critical thinking abilities that benefit students far beyond any single subject area. The chatbot can also nudge the learner to reflect. By prompting reflection rather than providing answers, it develops metacognitive skills that transfer across all learning contexts.
On Khanmigo, when a student submits a math step or reasoning line, the AI agent (Khanmigo) doesn’t rush to correct. Instead, it asks: “What makes you say that?” “Could you explain it in a different way?” “Is that always true?”
These questions are deliberate pedagogical moves that cultivate metacognition. They don’t just tell students what’s right; they help students understand why they think it’s right. In this shift, AI becomes less of a tutor and more of a partner.
Unpacking the genAI capabilities, learner actions, and edtech product features leads to the table below. The idea is to view all three together as a thinking tool; as readers become seasoned learning designers, it can also serve as a design heuristic.
| AI can do... | I want the learner to... | So I design... |
|---|---|---|
| Conversational prompting | Explain thinking, justify responses, explore alternative approaches | Metacognitive development through Socratic partnership |
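As a rough illustration of this first shift, much of the "thinking partner" behavior can be enforced through prompt design. The system prompt and helper below are an invented sketch, not Khanmigo's actual implementation; the canned probes stand in for model output so the example runs offline.

```python
# Illustrative sketch: constraining a chatbot to act as a Socratic partner.
# Everything here (prompt text, probes, message shape) is hypothetical.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a thinking partner, not an answer machine. "
    "Never state whether the student is right or wrong. "
    "Reply only with one short question that probes their reasoning."
)

# Deterministic fallback probes, so the sketch works without a model call.
PROBES = [
    "What makes you say that?",
    "Could you explain it in a different way?",
    "Is that always true?",
]

def build_socratic_turn(student_step: str, turn_index: int) -> list[dict]:
    """Assemble one Socratic exchange in the common role-based message format.

    The system prompt constrains the assistant to probe rather than correct;
    the assistant turn here is filled with a canned probe for offline use.
    """
    probe = PROBES[turn_index % len(PROBES)]
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_step},
        {"role": "assistant", "content": probe},
    ]
```

The key design choice is that the constraint lives in the system prompt, not in per-question logic: the partner never evaluates, it only asks.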
2. Feedback That Reveals The How
Feedback is often provided as right/wrong or grows/glows. But when feedback targets the process, it reveals the learner’s thinking, not only their score. Process-focused feedback addresses the root cause of learning difficulties rather than just symptoms. By diagnosing where misconceptions occur and providing targeted remediation, it transforms mistakes from failures into learning opportunities.
In Knewton Alta, when a student solves an algebra problem incorrectly, the platform doesn’t just mark it wrong. It diagnoses the step where the misconception occurred and offers a micro-remediation video tied to that specific error pattern.
In the above scenario, the learner is guided back into the problem, with just enough help to move forward.
| AI can do... | I want the learner to... | So I design... |
|---|---|---|
| Analysis of student work | Revisit misconceptions with support | Process-focused learning where mistakes become learning opportunities |
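A minimal sketch of this kind of step-level diagnosis follows. Knewton Alta's actual algorithm is not public; the error patterns, the remediation mapping, and the sign-error heuristic below are all invented for illustration.

```python
# Illustrative sketch: locate the first wrong step in a worked solution
# and map it to a targeted micro-remediation. All names are hypothetical.

REMEDIATIONS = {
    "sign_error": "micro-video: sign changes when moving terms across the equals sign",
    "arithmetic_error": "micro-video: checking the arithmetic in each step",
}

def classify_error(got: str, want: str) -> str:
    """Rough heuristic: if the steps match once minus signs are ignored,
    the slip was probably a sign error; otherwise treat it as arithmetic."""
    def normalize(s: str) -> str:
        return s.replace("-", "").replace(" ", "")
    if normalize(got) == normalize(want):
        return "sign_error"
    return "arithmetic_error"

def diagnose(student_steps, expected_steps):
    """Return (step_index, remediation) for the first incorrect step,
    or None when every step matches the expected solution."""
    for i, (got, want) in enumerate(zip(student_steps, expected_steps)):
        if got.strip() != want.strip():
            return i, REMEDIATIONS[classify_error(got, want)]
    return None
```

Rather than marking the whole answer wrong, the diagnosis points at the exact step where reasoning broke down, which is what makes targeted remediation possible.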
3. Following The Learner’s Curiosity, Not Only Their Learning Level
Most genAI tools today personalize based on pace and accuracy. But what if they could personalize for interest?
In Duolingo Max, for instance, the system adapts lesson contexts to user interests – offering storylines and examples that resonate personally. Similarly, Prodigy Math weaves fantasy narratives into math challenges, keeping learners engaged not through gamification alone, but through immersion.
When AI tools adapt not just to what learners struggle with, but to what they care about, they begin to unlock intrinsic motivation. The learning feels worth doing not because it’s easy or gamified, but because it connects to the learner’s world.
| AI can do... | I want the learner to... | So I design... |
|---|---|---|
| Personalization of content based on learner interests, not just skill level | Engage with personally relevant stories and examples | Intrinsic motivation through meaningful connections |
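In code terms, interest-based personalization can start as simply as selecting content templates by declared interest, separate from any difficulty adaptation. The template bank and function below are a hypothetical sketch, not how Duolingo Max or Prodigy Math actually work.

```python
# Illustrative sketch: wrap the same math task in a context the learner
# cares about. The interests and templates here are invented examples.

TEMPLATES = {
    "soccer": "A striker scores {a} goals in {b} matches. What is the rate per match?",
    "music": "A playlist has {a} songs across {b} albums. How many songs per album on average?",
    "default": "Compute {a} divided by {b}.",
}

def personalize(interests: list[str], a: int, b: int) -> str:
    """Pick the first template matching a declared interest; fall back to
    neutral wording. Pace and accuracy adaptation would happen separately."""
    for interest in interests:
        if interest in TEMPLATES:
            return TEMPLATES[interest].format(a=a, b=b)
    return TEMPLATES["default"].format(a=a, b=b)
```

The underlying skill being practiced is identical in every branch; only the framing changes to connect the task to the learner's world.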
Conclusion
These three shifts offer a glimpse of what becomes possible when generative AI is shaped with pedagogical intent. They reflect a simple, actionable framework for anyone building in this space:
AI can do…
I want the learner to…
So I design…
Learning designers and educators often begin with the learner – what they should be doing, thinking, or exploring. Technologists often begin with the tool – what it can generate, simulate, or adapt. But what turns a functional tool into a meaningful learning experience is when both these perspectives meet.
This is a call for co-design. The starting points might differ, but the destination should be shared: learning that is deep, intentional, and human-centered.
As generative AI becomes core infrastructure in education, the questions will no longer be whether it can answer faster or summarize better. The more urgent question will be: does it support deeper learning? Because when systems are designed around how learners think, reflect, and grow, there is less need to worry about cheating, disengagement, or shallow use. The design itself invites deeper learning.
So don’t just design around what the AI can do. Design the learning experience around the learner. Don’t just build the visible layer. Shape what lies beneath.
When the learner’s experience becomes the design canvas, AI stops being just a chatbot. It becomes a facilitator, a guide, an orchestrator of learning – not just in one-on-one exchanges, but in shared, social contexts. Could it guide a small group of students through a task? Could it observe how a learner moves through a space, listens to a spoken hypothesis, or responds to a drawing?
Perhaps this opens up a wider canvas, the chatbot is just one form. It’s time to imagine others.

