Navigating the High Costs of AI in EdTech

The Cutting Ed | July 3, 2024 | Sam Eastes

The introduction of ChatGPT led to a surge in the development of tools and platforms powered by artificial intelligence (AI) in education. Large language models (LLMs) offer exciting opportunities for personalized learning experiences, automated content generation, and scalability. However, the high cost of using these sophisticated models poses a significant financial challenge for edtech developers, particularly startups and smaller companies.

Understanding The Costs

Pricing for LLM APIs, or application programming interfaces, can vary significantly based on the provider, the specific model, and usage levels. For instance, OpenAI’s pricing structure is based on tokens, or units of text processed by an API, with 1,000 tokens equaling approximately 750 words.

As of June 2024, the company’s advanced GPT-4o model (released in May 2024) is offered at $5 per million input tokens and $15 per million output tokens. Surprisingly, this latest model is offered at half the cost of its predecessor, GPT-4, which is priced at $10 per million input tokens and $30 per million output tokens. Still, both models are significantly more expensive than the earlier GPT-3.5 Turbo API, at only $0.50 per million input tokens and $1.50 per million output tokens.
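To make these per-token prices concrete, here is a minimal cost-estimator sketch using the June 2024 figures cited above. The prices are taken from this article and will drift over time; real bills also depend on exact tokenization, so treat the numbers as rough estimates.

```python
# Approximate June 2024 prices cited above, as (input, output) dollars
# per million tokens. These change over time; check the provider's pricing page.
PRICES = {
    "gpt-4o":        (5.00, 15.00),
    "gpt-4":         (10.00, 30.00),
    "gpt-3.5-turbo": (0.50, 1.50),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the approximate dollar cost of one API call."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# ~1,000 tokens is roughly 750 words, so grading one 750-word essay with a
# ~150-word response costs on the order of:
print(estimate_cost("gpt-4o", 1_000, 200))  # → 0.008
```

Multiplied across millions of student submissions per year, fractions of a cent per call add up quickly, which is the scaling problem the next section describes.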

Peter Gault, Founder and Executive Director of the nonprofit education platform Quill.org, provides insight into the practical implications of these costs.

“We had 2.8 million students write 450 sentences this school year,” says Gault. “If we were to run all of those sentences through GPT-3.5, it would cost about $240k per year, while with GPT-4, it would cost about $2.4 million per year (as of June 2024). Our costs are much lower because we build our own AI models and feedback systems, but we are moving to Generative AI over the next year to benefit from the powerful new frontier models.”

While OpenAI is perhaps the best-known and among the most widely used providers of LLM APIs, it is far from the only one. With so many options available, developers need to understand the costs associated with the LLMs they choose. Perpetual Baffour, Research Director of The Learning Agency Lab, suggests using a comparison tool, such as YourGPT’s LLM Cost Calculator, to assist with the decision-making process.

Cost-Lowering Strategies

Despite these hurdles, experts in the field are identifying and sharing innovative solutions to make the development of AI-powered education apps more financially manageable. The effectiveness of any given strategy will depend on a developer's particular needs, however, and some approaches introduce new challenges that require careful consideration and management.

Fine-Tuning

A popular recommendation by those in the field is to fine-tune smaller, more cost-efficient AI models and tailor training to specific educational tasks to curb expenses.

“Fine-tuning of small models is very doable for a small cost,” says Christopher Brooks, Associate Professor at the University of Michigan’s School of Information. “Yet there seems to be a lack of knowledge and apprehension in using this approach.”

To take advantage of this method, Devan Walton, Assistant Professor of Computer Science at Northern Essex Community College, suggests starting with a powerful model like GPT-4, collecting and tracking high-quality interaction data, and thoroughly cleaning this data with user feedback to ensure it aligns with desired outcomes. This refined dataset can then be used to fine-tune a more compact and cost-effective model. Finally, the application can be switched from the initial GPT-4 to the new, more affordable model.
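The data-preparation step in Walton's workflow can be sketched as follows. This is an illustrative example, not Quill.org's or Walton's actual pipeline: the logged interactions, the feedback score, the system prompt, and the 0.8 quality threshold are all assumptions. The output format shown is the chat-style JSONL that OpenAI's fine-tuning endpoint accepts.

```python
import json

# Hypothetical interaction log from the GPT-4-powered app: the prompt sent,
# the reply received, and a user-feedback score (0-1) collected in the app.
logged = [
    {"prompt": "Give feedback on: 'Me and him went to the store.'",
     "reply": "Use subject pronouns: 'He and I went to the store.'",
     "score": 0.9},
    {"prompt": "Summarize this paragraph about photosynthesis.",
     "reply": "Plants eat sunlight.",  # poorly rated by users
     "score": 0.3},
]

def to_finetune_records(interactions, min_score=0.8):
    """Keep only well-rated examples and emit chat-format training records."""
    for item in interactions:
        if item["score"] >= min_score:
            yield {"messages": [
                {"role": "system", "content": "You are a writing tutor."},
                {"role": "user", "content": item["prompt"]},
                {"role": "assistant", "content": item["reply"]},
            ]}

# Write the cleaned dataset as JSONL, one record per line, ready to upload
# to a fine-tuning job for a smaller, cheaper model.
records = list(to_finetune_records(logged))
with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

The filtering step is where the user feedback Walton describes earns its keep: only interactions the app's users rated highly become training data for the compact replacement model.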

While fine-tuning smaller models is effective for many, it is not always a perfect solution.

“In our own testing, we’ve seen huge performance improvements in OpenAI’s GPT-4 and Google’s Gemini 1.5 Pro vs. the older GPT-3.5 and Gemini 1.0 models,” says Gault. “I think there is a real risk that edtech companies may prioritize lower-powered models to account for cost at the expense of AI model accuracy.”

Retrieval-Augmented Generation

In addition to fine-tuning AI models, another popular cost-management strategy is Retrieval-Augmented Generation (RAG). By integrating external data sources directly into a model’s response process, this approach reduces the burden on the LLM, allowing it to pull in information as needed rather than processing everything internally. Brooks notes that RAG can optimize expenses, but it also requires careful setup to ensure the retrieval component and the generator work together effectively.
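The retrieval half of a RAG pipeline can be sketched very simply. Real systems use vector embeddings and a vector database; the keyword-overlap scorer below is a deliberately minimal stand-in, and the sample passages and prompt template are invented for illustration.

```python
def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by shared words with the query (a toy stand-in for
    vector search) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved context so the LLM answers from it rather than
    from whatever it memorized in training."""
    context = "\n".join(retrieve(query, passages))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondria is the powerhouse of the cell.",
    "Essay rubrics score organization, evidence, and grammar.",
]
print(build_prompt("How do rubrics score an essay?", docs))
```

Because only the few most relevant passages travel to the model, the prompt stays short, and with per-token pricing a shorter prompt is directly a cheaper call.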

Self-Hosting Models

In certain instances, developers have managed to reduce costs by training and hosting their own models. Self-hosting allows developers to avoid ongoing costs related to API calls to external LLM services, which can accumulate quickly with intensive use. Gault notes that Quill.org has been training its self-hosted AI models for the past five years and has found it to be significantly cheaper than relying on cloud-based services to access pre-trained LLMs.

However, this can be a challenging and time-consuming process with higher upfront costs. Brooks explains that the risk of this strategy is that it requires purchasing hardware or cloud services and maintaining the software stack on an ongoing basis. That said, if low latency (near-instant query processing and response) is not required, developers can improve cost efficiency by utilizing idle capacity, such as processing tasks overnight. This method may be particularly suitable for educational tasks such as plagiarism detection, article summarization, and content creation.
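The idle-capacity idea can be sketched as a simple job gate: latency-sensitive requests run immediately, while batch work waits for an off-peak window when the self-hosted hardware would otherwise sit idle. The 10pm-6am window and the job names are illustrative assumptions, not anything Brooks or Quill.org specified.

```python
from collections import deque
from datetime import time

# Illustrative off-peak window for a self-hosted cluster; wraps past midnight.
OFF_PEAK_START, OFF_PEAK_END = time(22, 0), time(6, 0)

def is_off_peak(now: time) -> bool:
    """True when the hardware is assumed idle (10pm through 6am)."""
    return now >= OFF_PEAK_START or now < OFF_PEAK_END

queue: deque[str] = deque()

def submit(job: str, urgent: bool, now: time) -> str:
    """Run latency-sensitive jobs now; defer batch work to the idle window."""
    if urgent or is_off_peak(now):
        return f"running: {job}"
    queue.append(job)
    return f"queued for tonight: {job}"

# A live tutoring query runs at once; a summarization batch waits until night.
print(submit("grade essay 17", urgent=True, now=time(14, 30)))
print(submit("summarize article batch", urgent=False, now=time(14, 30)))
```

A scheduler (such as a nightly cron job) would then drain the queue during the off-peak window, which is how tasks like plagiarism checks and bulk summarization can soak up capacity that has already been paid for.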

Edge-Device Deployment

Another cost-saving strategy that has gained traction among developers is the deployment of AI models on edge devices, such as laptops, tablets, and smartphones. This approach enables the immediate processing of suitable queries directly on the device, minimizing reliance on cloud-based servers. Brooks notes that this localized processing not only lowers costs but also bolsters privacy for users.

Brooks also points to the rise of WebGPU APIs, which allow LLMs to run directly in a browser, as of particular interest to education.

“While WebGPU has been around for a year or so,” says Brooks, “the big sign I see coming for education here is that it will be turned on in Chrome OS by default – so all Chromebooks. That could be a pretty significant win for the cost of LLMs in education if these small language models turn out to be able to handle most queries a student would have, as once the model is on the system, no further network connection is needed either.”

Increased Field-Wide Collaboration

In addition to the above techniques, there is a consensus that greater collaboration among those working in edtech would benefit the field. Sharing innovative techniques and approaches, along with publicly available data and other open-source resources, can help developers lower costs, overcome financial constraints, and further advance education through the development and implementation of AI solutions.

Sam Eastes

Program Associate, The Learning Agency

