For Fair Writing Feedback, AI Needs a Teacher’s Touch

The Cutting Ed
July 28, 2025
Peter Gault

When AI can’t tell good writing from bad, students suffer. But there’s a fix, and it starts with putting teachers at the center of the algorithm. 

A recent analysis found that when asked to grade student writing, ChatGPT clustered most essays around a C, failing to reward excellence or flag real issues. That kind of “safe” grading may seem neutral, but for students who already fight to be seen, especially those from historically marginalized groups, it reinforces invisibility.

Bias in AI isn't inevitable, but fixing it takes serious work. That work must be grounded in real student writing and led by real educators, not just engineers and algorithms.

Lessons from Six Years of Building AI for Writing Feedback

At Quill.org, a non-profit I founded that provides free literacy tools for writing instruction, we’ve spent six years building our own AI models and learned one key lesson: if we want AI to support real learning, teachers must train it.

When my team and I set out to build an AI tool to support student writing, we weren’t just trying to make feedback faster; we wanted it to be authentic. But early testing made clear just how quickly algorithms can veer off course. 

A student might write a sharp, well-argued paragraph only to be penalized because the AI didn’t recognize a culturally specific reference. Another might receive praise for a rambling response that missed the point entirely. The data told us something simple but urgent: if educators weren’t involved at every step, bias would creep in.

So instead of relying on generic large-language-model output, we build a “thick wrapper” around each prompt: 50–100 authentic student responses paired with teacher-written feedback give the AI a crystal-clear picture of what success and common misconceptions look like, in context.
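
As a rough illustration of the idea (not Quill’s actual implementation), here is a minimal Python sketch of such a “thick wrapper”: the model never sees a student response in isolation, only alongside teacher-labeled exemplars. All names and the exemplar format are assumptions.

```python
# Hypothetical sketch of a "thick wrapper": the model never sees a
# student response alone; it sees teacher-labeled exemplars first.
# Names and formats are assumptions, not Quill's implementation.
from dataclasses import dataclass

@dataclass
class Exemplar:
    response: str     # authentic student response
    feedback: str     # feedback a teacher actually wrote for it
    is_strong: bool   # does it demonstrate the target skill?

def build_prompt(question: str, exemplars: list[Exemplar], new_response: str) -> str:
    """Wrap a new student response in teacher-authored context."""
    parts = [f"Writing prompt: {question}", "", "Teacher-labeled examples:"]
    for ex in exemplars:  # in practice, 50-100 per prompt
        label = "strong" if ex.is_strong else "needs revision"
        parts.append(f'- Response ({label}): "{ex.response}"')
        parts.append(f"  Teacher feedback: {ex.feedback}")
    parts += ["", f'New student response: "{new_response}"',
              "Give feedback consistent with the teacher examples above."]
    return "\n".join(parts)
```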

This approach has led our team to create more than 300 bespoke AI models continually refined through tight human oversight. Ten former teachers on staff manually review 100,000+ student responses each year and run rapid A/B tests to keep the feedback accurate, equitable, and aligned with veteran‑educator standards.

Crucially, a 300‑member Teacher Advisory Council (drawn mainly from Title I schools) co‑designs every prompt and reviews multiple iterations before release, ensuring the guidance reflects real classroom needs rather than abstract algorithms. The result is specific, evidence‑based, and actionable feedback — powerful enough to guide students through several revision cycles in a single session and turn writing practice into meaningful learning.

A Four-Part Framework for Reducing Bias in AI Writing Tools

This experience taught us not just how to build better AI for Quill, but how any educational organization can reduce bias and improve instructional quality with the right data practices. 

As a mission‑based organization, we want to lead the way in ensuring AI supports equity, not just efficiency. Our newly released Ethical AI Playbook shares our proprietary process for building AI models that coach more than 11 million students.

The following framework outlines a practical, tool-agnostic approach for using data to reduce bias in AI writing tools.

1. Define success before coding

An engineer’s idea of “good writing” is not a substitute for a learning objective. Before building anything, teams should work with educators to clearly define what success looks like. If a seventh-grade argument essay should include a clear claim and at least two pieces of supporting evidence, that standard should guide every data collection stage, from human scoring rubrics to prompt design. This definition acts as a north star for development, ensuring the AI isn’t guessing what “good” means — it’s following a target set by teachers.
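
One way to make that definition concrete is to write the rubric down as data before any model work begins. A minimal Python sketch; the structure and field names are illustrative, not a standard:

```python
# Illustrative rubric, written down before any model work begins so
# "good writing" is teacher-defined, not guessed by an engineer.
SEVENTH_GRADE_ARGUMENT_RUBRIC = {
    "grade_level": 7,
    "genre": "argument essay",
    "criteria": [
        {"name": "claim",
         "description": "States a clear, arguable claim"},
        {"name": "evidence", "minimum_count": 2,
         "description": "Supports the claim with at least two pieces of evidence"},
    ],
}
```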

2. Train the model with teacher-scored writing samples

Large language models are trained on internet text, which rarely resembles student prose. A more reliable approach is to gather actual student essays, have experienced teachers score them using well-defined rubrics, and use that data to train the model. Even a few dozen well‑annotated writing samples can give an AI richer context than thousands of scraped web pages. The goal is to ensure the model’s concept of “excellent,” “adequate,” or “needs work” is teacher‑authored, not internet‑averaged, reducing the risk of biased feedback.
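
In practice, a teacher-scored dataset can be as simple as a file of annotated records. A hypothetical sketch of what one record might look like (the format and field names are assumptions, not Quill’s):

```python
# Hypothetical training record: one teacher-scored writing sample,
# appended to a JSONL file. A few dozen of these, scored against the
# rubric above, anchor the model's notion of "excellent" vs. "needs work".
import json

sample = {
    "essay": "School uniforms limit self-expression. First, ...",
    "rubric": "grade7-argument-v1",
    "scores": {"claim": 2, "evidence": 1},  # teacher-assigned, 0-2 scale
    "teacher_comment": "Clear claim; add a second piece of evidence.",
}

with open("teacher_scored_samples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample) + "\n")
```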

3. Continually audit for bias

AI that performs well in testing can stumble once real students enter the chat. Before releasing any new prompt or rubric, run the model against a benchmark set of teacher‑graded responses. Compare its judgments to teachers’ and look for patterns: Is it harsher on multilingual writers? Does it over‑reward length? Post‑launch, keep auditing. Bias reduction isn’t a one-time fix; it’s an ongoing process.
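
A basic version of that audit can be automated. The sketch below assumes a benchmark set where each response already carries a model score, a teacher score, and a subgroup tag (e.g., multilingual writers); the field names are hypothetical:

```python
# Hypothetical bias audit: compare model scores to teacher scores on a
# benchmark set and check whether the gap differs by subgroup.
from statistics import mean

def score_gap_by_group(benchmark: list[dict]) -> dict[str, float]:
    """Mean (model - teacher) score per subgroup; a consistently
    negative gap for one group suggests the model is harsher on it."""
    gaps: dict[str, list[float]] = {}
    for item in benchmark:
        gap = item["model_score"] - item["teacher_score"]
        gaps.setdefault(item["group"], []).append(gap)
    return {group: mean(vals) for group, vals in gaps.items()}

def length_score_correlation(benchmark: list[dict]) -> float:
    """Crude check for over-rewarding length: Pearson correlation
    between word count and model score."""
    xs = [len(item["response"].split()) for item in benchmark]
    ys = [item["model_score"] for item in benchmark]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5 if varx and vary else 0.0
```

A consistently negative gap for one subgroup, or a strong length-to-score correlation, is a signal to revise the prompt or retrain before release.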

4. Keep teachers in the loop

The best safeguards are human. Teacher feedback can surface tone issues, confusing phrasing, or cultural mismatches that algorithms miss. Whether through focus groups or embedded roles, developers should build systems that give educators veto power over the algorithm. 
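
In system terms, “veto power” can be as simple as a release gate: AI-drafted feedback is held for review, and nothing reaches a student until an educator approves it. A minimal, illustrative sketch:

```python
# Illustrative release gate: AI-drafted feedback is only shown to a
# student after a teacher approves it; a veto blocks release entirely.
from dataclasses import dataclass

@dataclass
class FeedbackDraft:
    response_id: str
    ai_feedback: str
    status: str = "pending_review"   # pending_review | approved | vetoed
    teacher_note: str = ""

def teacher_review(draft: FeedbackDraft, approve: bool, note: str = "") -> None:
    """Record an educator's decision on an AI-drafted feedback message."""
    draft.status = "approved" if approve else "vetoed"
    draft.teacher_note = note

def release(draft: FeedbackDraft) -> str:
    """Only approved feedback ever reaches a student."""
    if draft.status != "approved":
        raise PermissionError("Not released: awaiting teacher approval.")
    return draft.ai_feedback
```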

Why This AI Framework Matters

Research like the grading analysis above is a timely reminder that, if we’re not careful, AI tools could reinforce biases and miss the brilliance in some students’ work. But it doesn’t have to play out that way. By defining success from the start, infusing teacher expertise into AI training, rigorously testing for fairness, and keeping teachers in the loop, we can build AI that amplifies equitable teaching practices rather than undermining them.

This work doesn’t require cutting-edge models or massive budgets. It requires humility, recognizing that instructional quality lives in classrooms, not codebases. With the right data practices and the right people in the loop, AI in education doesn’t just become more efficient—it becomes more fair.

Peter Gault

Founder and Executive Director, Quill.org

Articles by guest or contributing authors do not necessarily reflect the views of The Learning Agency, our clients, or our funders.
