When ChatGPT was released to the public in November 2022, states and school districts were caught flat-footed. For some, their immediate reaction was to ban the groundbreaking tool in school settings, fearing it would be used for cheating and plagiarism. Since then, many states and districts have sought to respond more thoughtfully, establishing AI task forces, consulting resources like the TeachAI Toolkit, and taking their first pass at policies and guidance shaping the use of generative AI in classrooms.
Today, 34 states and Puerto Rico have issued such guidance, and unsurprisingly, they have yet to nail it. So far, state guidance has been heavy on frameworks and glossaries and light on clear, actionable policies and resources. No state guidance has emerged as a clear model to emulate.
The Learning Agency examined the existing state AI and education guidance, and noticed five things states are getting wrong. States can’t be expected to achieve perfection on their first attempt, but hopefully they will continue to refine their guidance as AI advances and school communities navigate this new terrain. Based on our experience in education innovation and public policy, here are the most common mistakes states are making and solutions to address each one.
States aren’t pointing their schools toward proven AI-powered ed tech
Most state AI and education guidance stresses the importance of carefully selecting AI products. Alaska’s guidance encourages districts to “establish a robust vetting process for AI tools.” Guidance from Louisiana and North Carolina, for example, lists AI tools but explicitly notes that inclusion does not constitute an endorsement.
A few states have created resources to support districts’ AI tool selection process. Massachusetts offers action steps districts should take for “procurement and vendor management.” Colorado offers an AI Resource Evaluation Tool, and North Carolina has produced both a GenAI Rubric for School Districts and a set of EdTech Tool Evaluation Criteria. Georgia’s guidance links to TrustEd Apps, “a catalog of applications and software vetted for use.”
However, those products are vetted only for security and data privacy, not for evidence of effectiveness in supporting teaching and learning outcomes. While helpful, this type of guidance still puts the onus on districts to wade through the sea of AI-powered tools – and security and privacy are only a part of the procurement process.
What’s the fix?
States should provide their schools with a Wirecutter for AI-powered educational tools – a guide that examines and recommends products for safety, privacy, and effectiveness. Instead of leaving the vetting up to each school district, states should take on this responsibility as a service to their districts.
According to a December 2025 report from Digital Promise, state-led evaluation of AI tools is critical because national evaluation efforts are lacking and local leaders tend to put more stock in state and local evaluation results.
Plus, this approach will promote efficiency and quality in product usage across the state. Local educational agencies (LEAs) would still have the freedom to choose products based on local needs, but the state would do the heavy lifting of evaluating products and curating lists of recommended tools.
Louisiana has done something similar for instructional materials, engaging in a rigorous vetting process on behalf of their school districts. The Louisiana Department of Education (LDOE) reviews instructional materials for quality and alignment with state content standards, and sorts them into three tiers: Exemplifies Quality, Approaching Quality, and Not Representing Quality. Each local school system gets to select their materials, making an informed decision based on LDOE’s evaluation and their local needs. Several other states, from California to South Carolina, vet instructional materials for their districts, too.
If states can help their LEAs select high-quality instructional materials, they should be able to do the same for AI tools. It will be important to evaluate products not only for safety and privacy, as TrustEd Apps does, but also for how effectively they improve student learning outcomes. Vendors make lots of claims about the value of their products, and State Educational Agencies (SEAs) should help their districts make sense of those claims so that the products districts invest in actually pay off for students.
States aren’t ensuring districts have either the training or the technical infrastructure to use AI successfully in classrooms
Through their AI guidance, states are asking a lot of their districts. LEAs are expected to deploy AI effectively to improve student learning outcomes, while protecting student data, mitigating bias, preventing plagiarism, and more. States are right to include these expectations in their guidance, but if districts lack AI-focused professional development (PD) or basic tech infrastructure, they won’t be able to deliver on them.
Inadequate training is frequently cited by educators as a major barrier to using AI safely and effectively in schools. According to a 2025 report from Gallup and the Walton Family Foundation, 68 percent of teachers report not having participated in any school- or district-provided training on how to use AI in their classroom. In fact, the same report reveals that teachers are much more likely to teach themselves how to use AI than to be trained on it by their school or district.
When it comes to deploying AI well in classrooms, tech infrastructure (e.g., updated equipment and devices, fast and reliable internet connectivity, adequate IT staff) is just as important as PD. Yet, this is another area where districts are coming up short. The latest Teaching for Tomorrow study from Gallup and the Walton Family Foundation shows that more than a quarter of all teachers do not have the equipment or materials they need to do their basic jobs, let alone to integrate AI into their teaching practice. Alarmingly, 26 percent of teachers surveyed reported not having enough computers or laptops for their students.
What’s the fix?
With resources and targeted funds, SEAs must support high-quality, AI-focused PD and tech upgrades for districts.
Just as states should vet AI tools for districts, they should vet and make available evidence-based PD that allows districts to use AI well. States should update their AI in education guidance to specify PD providers and training programs staffed by experts in AI and pedagogy, with a track record of equipping educators with the knowledge and skills needed to use AI effectively in classrooms. The training these providers offer should be ongoing and focused on how educators can integrate AI-powered technologies into their instructional practices in meaningful ways, not just how to use specific tools.
In addition to vetting PD for LEAs, SEAs should also make funds available to districts that cannot afford the recommended PD. The application process should be simple, straightforward, and targeted to support the districts with the greatest financial need.
To help close tech infrastructure gaps across the state, SEAs should also make tech modernization funding available to their districts. LEAs lacking updated equipment, reliable WiFi connectivity, or adequate IT staff could use these funds to bring their infrastructure up to speed and lay a strong foundation for AI deployment.
SEAs lack sufficient AI talent and capacity to meet LEA needs
Experts in AI and education informed the development of each SEA’s AI guidance, but SEAs do not have enough staff and capacity to provide the ongoing support districts need to implement the guidance well. One indication of this capacity issue is that nearly all of the guidance issued so far lacks any mention of a dedicated SEA team working on issues related to AI and education.
One exception is North Carolina, which provides email addresses for the two chief authors of its guidance. Even so, that’s just two people for a state with thousands of public schools, and it’s not a team dedicated to supporting the safe and effective integration of AI in schools.
What’s more, SEAs are often stuck using outdated technical tools. To cite one example, New York State’s education agency is still looking to hire people who have experience in COBOL, despite the fact that COBOL is an outdated programming language.
Inevitably, teachers and school leaders will face challenges as they aim to carry out SEA-developed guidance. Judging by the lack of contact information provided in most states’ guidance, it’s not clear SEAs are equipped to offer the sustained support LEAs will require to weave AI responsibly and effectively into their schools.
What’s the fix?
SEAs must build their own staff capacity dedicated to AI implementation in schools – and ensure those staff are responsive to challenges surfaced by school leaders and practitioners. This must be a sustained effort, as generative AI is rapidly evolving and one-off support is inadequate.
States are ranking AI among their top priorities, and a handful have hired AI specialists. To provide the AI support districts need, SEAs must allocate resources in alignment with those priorities and invest more heavily in AI talent and capacity-building.
Each SEA should have a research team, fluent in AI and education, that invites LEA staff to share challenges related to implementing the state’s AI in education guidance. As Kumar Garg of Renaissance Philanthropy has argued, this team would identify common themes and then engage in research projects exploring solutions to the most intractable problems. Challenges to address might include how to best leverage AI to serve English learners or how to modernize assessment in the age of AI. The team’s findings and recommendations would then be shared with district leaders, school leaders, educators, and school support staff.
As with product and PD vetting, SEAs are better positioned than individual LEAs to provide this kind of support efficiently and at scale. It’s unreasonable to expect every school district – especially in small, rural communities – to be able to hire experts in AI’s integration in schools; but a state department of education could attract and retain this kind of talent.
States aren’t taking a firm stance on how schools should address AI-enabled cheating and plagiarism
While states’ AI guidance generally touches on academic integrity, it puts the responsibility on districts to update their policies accordingly. Many states simply say that policies around AI-related dishonesty should be added to handbooks. They offer little to no guidance on:
- Optimal policies to adopt;
- Specific language to incorporate;
- Concrete ways educators should shift their instructional and assessment practices to minimize the likelihood of AI-enabled academic dishonesty.
In some cases, when states do provide details on how to mitigate plagiarism and cheating, the guidance isn’t practical. For example, North Carolina’s guidance states, “A link to AI chats can be shared on most major LLM platforms and this is a great way for teachers to see a student’s learning process and how the student relied on or partnered with the AI to complete the work.” If students actually had to submit all of their interactions with an LLM for an assignment, this would significantly increase the amount of student work teachers must review. Moreover, this extreme level of scrutiny would not prepare students to use generative AI responsibly beyond their schooling.
States are failing to take a clear, pragmatic stand on how LEAs should treat cheating and plagiarism – and how teaching and learning must change – in the age of AI. While it should be up to districts to set their specific policies, they would benefit from their SEA being more clear, direct, and practical about how to approach this challenge.
What’s the fix?
SEAs should require districts to define and prohibit AI-enabled academic dishonesty, as well as offer tangible resources to support the implementation of new policies. Just as important, states should help their school systems embrace a shift in instructional and assessment practices in direct response to the emergence of generative AI.
Even though districts typically set their own policies around academic dishonesty, states should require them to update their policies for AI, including clearly defining and banning AI-enabled cheating and plagiarism. SEAs would serve their districts well by not just directing them to update their academic integrity policies, but providing specifics on what those new policies should entail. Utah’s guidance, for example, is clear and straightforward:
Students and staff should not copy from any source, including generative AI, without prior approval and adequate documentation. Students should not submit AI-generated work as their original work. Staff and students will be taught how to properly cite or acknowledge the use of AI where applicable. Teachers will be clear about when and how AI tools may be used to complete assignments and restructure assignments to reduce opportunities for plagiarism by requiring personal context, original arguments, or original data collection. Existing procedures related to potential violations of our Academic Integrity Policy will continue to be applied.
Utah also provides a sample letter that can be tailored and sent to parents, outlining how AI is used in schools and addressing the issue of academic integrity. This resource would be even stronger if it included details on the steps schools will take to curb AI-enabled cheating and plagiarism, as well as each student’s responsibility to produce work honestly. It could even link to a Student AI Code of Conduct (a sample of which can be found in Washington’s guidance).
SEAs can help students navigate these new policies successfully by ensuring educators teach and enforce proper citations, disclosures, and attributions. States such as Washington, West Virginia, and Maine include in their guidance instructions on how to credit work informed by generative AI. Other states should follow their example.
Finally, states must dispel any illusion that teaching and learning can proceed as usual. The emergence of generative AI necessitates new approaches to instruction and assessment that capitalize on the benefits of AI while mitigating the risks it poses. States should encourage assessments and assignments to be designed to avoid reliance on generative AI.
What would that look like? More in-class writing, presentations, and hands-on projects that come with clear instructions about which, if any, AI tools are acceptable. Delaware’s generative AI guidance, for instance, encourages teachers to “restructure assignments to reduce opportunities for plagiarism” which may include “evaluating the artifact development process rather than just the final artifact and requiring personal context, original arguments, or original data collection.” This kind of pivot not only mitigates academic dishonesty but also opens up fresh ways for students to demonstrate their learning.
States are leaving it to schools to navigate serious and complex issues at the core of AI in education, like protecting the privacy of student data
Existing guidance tends to cover issues of privacy in a cursory way, stating that LEAs must abide by relevant federal and state laws (e.g., FERPA, COPPA). This leaves it up to individual school systems to navigate on their own how AI factors into laws and regulations that are already complex. An SEA “checks the box” by directing its LEAs to follow existing laws, but without offering further support, it’s not setting them up for success.
What’s the fix?
SEAs should provide reader-friendly resources that help teachers, school leaders, and LEA staff understand and abide by existing laws related to student privacy, and make legal support available to school staff who have questions or concerns.
A few states stand out for providing explainers and resources on privacy protection, though these are insufficient if not accompanied by legal support. For instance, Maine’s guidance includes video explainers for FERPA, COPPA, CIPA, and IDEA. Guidance from Washington and North Carolina summarizes the purpose of each relevant policy, rather than listing them devoid of context (as in most states’ guidance).
As an example, the North Carolina guidance explains that CIPA (the Children’s Internet Protection Act) “requires schools and libraries that receive federal funds for Internet access or internal connections to adopt and enforce policies to protect minors from harmful content online.” It goes on to explain that “schools must ensure AI content filters align with CIPA protections against harmful content.”
While LEAs have lawyers to help schools navigate the details of this and other laws, it would be ideal to have a state legal team well-versed in AI policy, available to consult with districts, especially those lacking administrative capacity.
Samples and templates are particularly important for helping school systems implement state AI guidance. Only a few states have created and shared useful templates, and even those states could offer more. Louisiana’s guidance, for instance, links to LDOE’s data sharing agreements and addenda and provides a contact for anyone who has questions about student privacy and the sharing of students’ personally identifiable information. Michigan’s guidance includes an addendum districts can tack onto their Acceptable Use Policy to promote the ethical and responsible use of AI in educational settings, including ensuring data privacy and security. North Carolina links to a sample letter and permission slip teachers can send to parents about how generative AI will be used in class, and the privacy precautions that must be taken.
Conclusion
The initial sets of state AI and education guidance are out. While most represent a thoughtful attempt to help LEAs navigate the dawn of generative AI, all have room to improve. Our recommendations are intended to help states, whether they are working on the next iteration of their guidance or still developing their first, give schools guidance and tangible support that go beyond stating the basics and checking boxes. Districts have an overwhelming list of steps to take in order to seize the promise of AI while minimizing its harms. SEAs must leverage their greater capacity to provide the expertise and resources LEAs need to thrive in an AI-rich world.

