Two and a half years after ChatGPT became a household name, only about half of the U.S. states have policies or guidance on the use of AI in classrooms. As teachers and students continue to grapple with how to use AI safely and effectively, states lacking policies must enact them before another school year goes by. Otherwise, they risk leaving their teachers unprepared to navigate AI and students unprepared for the future.
With the Trump administration’s drive to “return education to the states” and recent closure of the Education Department’s Office of Educational Technology, it’s unlikely federal AI guidance for schools will arrive any time soon. That’s why it’s incumbent on states to provide the guardrails and guidance teachers need to harness AI responsibly.
We can imagine some state leaders find the task of developing AI guidance daunting. AI can be an intimidating topic for those whose expertise lies in governing, not technology. But that’s no reason to avoid the task altogether. Students, teachers, and school leaders need state guidance to navigate AI successfully, and they can’t afford to wait.
To give state leaders a nudge, we’d like to offer some advice based on our experience at The Learning Agency, an organization that fosters research-based innovation in education. We don’t believe states are fully responsible for ensuring the safety and effectiveness of AI in classrooms, but they have an important role to play – and if they focus on a few key things, they can help set up their schools to implement AI successfully.
Be Use Case-Specific
Before diving into each recommendation, we want to make two overarching points. First, AI can be deployed in many different ways in educational settings, so one broad set of AI policies won’t cut it. States must craft policies specific to how the technology is commonly used. The more tailored the guidance is to the actual use of AI, whether it’s students using AI for homework or teachers using AI to grade assignments, the more helpful it will be. AI guidance from the Colorado Department of Education, for instance, is use case-specific.
Be Conscious of the End User
Second, AI is basically a fancy word for statistics, and most people lack statistical fluency. Practically speaking, lay people will have a hard time identifying issues like bias in AI algorithms, and they should not be expected to. Guidance, like that from the Oregon Department of Education, that puts the onus on teachers and students to identify bias in AI is short-sighted. State guidance should help teachers and students use AI safely and effectively, and leave bias identification to industry and regulators. (As noted below, if teachers and students are equipped with the skills to identify bias, great. But the responsibility to do so should not rest with them.)
Recommendations for States
As states develop or refine their AI in education policies, they should:
1. Rethink instruction and assessment
Generative AI isn’t going away. A 2024 survey commissioned by the Walton Family Foundation found that 49 percent of K-12 students use ChatGPT at least once a week. When asked about generative AI usage in McKinsey’s 2025 State of AI survey, 71 percent of respondents said their organization uses it regularly.
To start, states should be thinking about rolling out better formative, live assessments. Since cheating using generative AI is challenging to detect, teachers will need to adapt the way they assess student performance. In fact, the plagiarism challenge could open opportunities for richer, deeper assessments. Economist David Deming hopes teachers will make meaningful adjustments, such as requiring students to share what they learned via a class presentation rather than a written paper. Delaware’s AI guidance prompts teachers to restructure assignments in a way that allows them to evaluate the “artifact development process rather than just the final artifact” and to consider “requiring personal context, original arguments, or original data collection.”
Beyond formative assessment, states should also think about ways to reduce the overall burden of assessment. They should encourage teachers to use AI-based tools to more efficiently create tests and quizzes, as well as to provide more frequent feedback on student work, thereby decreasing the frequency of formal assessments. States should also reassess how they approach standardized testing. As generative AI gives rise to more performance-based assignments and assessments, state testing could give way to innovative, competency-based models like PACE in New Hampshire.
In general, states will need to determine what constitutes original student work. The AI guidance put out by the Hawaii Department of Education discourages teachers from relying on AI detection tools, given their limitations. Instead, it encourages students (and teachers) to cite the use of AI tools like ChatGPT and Grammarly, and instructs educators to clarify upfront for their students which uses of AI are acceptable and which aren’t. Delaware’s guidance elaborates on this idea by articulating three different levels of appropriate AI usage for assignments, which teachers should clarify for their students: permissive (e.g., you can use AI freely for this assignment), moderate (e.g., you can use AI for some but not all elements of this assignment), and restrictive (e.g., AI tools are prohibited for this assignment).
For teachers, there will be a shift in their job description. While the best teachers have always exhibited creativity and adaptability, those traits are now table stakes in the context of generative AI. School leaders will want to hire educators who can respond nimbly to the changing educational landscape, and aren’t afraid to develop their own fluency with AI. The most effective teachers will proactively discuss acceptable AI usage with their students, expand their instructional approach to leverage proven AI tools, and embrace a shift to performance-based assessment. States should encourage districts to seek out and hire teachers who possess these qualities.
2. Help schools and districts select proven AI approaches
With the proliferation of AI-powered ed tech, states should guide their districts to select products with a track record of improving student achievement. In their AI guidance, states should provide examples of tools with strong evidence of improving student outcomes, as well as resources (such as ISTE’s EdTech Index and Common Sense Media’s AI rating system) that help districts make informed decisions. The Louisiana Department of Education recommends some AI tools in their guidance, but the list would be more useful if it only included tools that have demonstrated effectiveness and provided evidence of their success.
The Learning Agency has partnered with Renaissance Philanthropy to build proven AI ed tech tools. Through the Learning Engineering Virtual Institute (LEVI), we are supporting seven teams to reach one moonshot goal: develop an AI-powered tool that doubles the rate of middle school math progress for low-income students. With two years left in the five-year program, LEVI teams are already demonstrating promising results. For instance, students using LEVI-supported PLUS Tutors are nearly doubling their rates of math improvement. Five of the seven teams are delivering their interventions at an annual cost of less than $50 per student. These are the types of effective, affordable AI-powered interventions states should recommend to districts.
In addition to providing districts with product recommendations, states can invest directly in evidence-based AI tools and platforms. Last year, Governor Wes Moore of Maryland announced a $10 million investment, matched by Arnold Ventures, to bring ASSISTments to Maryland students. ASSISTments is an AI-enhanced online tool that has consistently accelerated middle school math gains, especially for low-income students. More states should join Maryland in bringing proven AI tools to their districts.
3. Make data science central to the AI literacy skills taught to educators and students
Data is at the heart of AI, so educators and students must be literate in data and data science to use it effectively. AI models, and the data they are trained on, are imperfect. With data fluency, teachers and students are better equipped to identify AI’s biases and limitations.
How can data science help teachers and students interrogate the accuracy of AI? Let’s say a high school adopts an AI tool capable of grading student essays. A class could use data science to assess whether the AI is grading fairly. Their ambitious and data-savvy teacher could create a dataset of AI-assigned grades, grades for the same essays assigned by experienced teachers, and student demographic data. (Many teachers are not equipped to build such a dataset, which is why philanthropy and government should support the development of publicly available education datasets.) Then, the students could use statistical analysis to see if the AI scores are consistently lower or higher for specific demographic groups. Finally, the teacher could lead the class in a discussion of any bias they uncover.
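The classroom exercise above can be sketched in a few lines of code. This is a hypothetical illustration with synthetic data, not a real audit: the scores, group labels, and function names are all invented for the example, and a real analysis would need a much larger dataset and a proper statistical test.

```python
# Sketch of the bias-audit exercise: compare AI-assigned essay scores to
# experienced teachers' scores for the same essays, grouped by demographic.
# All data below is synthetic, for illustration only.
from statistics import mean

# Each record: (AI-assigned score, teacher-assigned score, demographic group)
essays = [
    (78, 85, "Group A"), (74, 82, "Group A"), (80, 84, "Group A"),
    (88, 86, "Group B"), (90, 88, "Group B"), (85, 84, "Group B"),
]

def avg_gap_by_group(records):
    """Average (AI score minus teacher score) for each demographic group."""
    gaps = {}
    for ai_score, teacher_score, group in records:
        gaps.setdefault(group, []).append(ai_score - teacher_score)
    return {group: mean(diffs) for group, diffs in gaps.items()}

print(avg_gap_by_group(essays))
```

If the average gap is consistently negative for one group and positive for another, as in this toy dataset, that pattern becomes the starting point for the class discussion of possible bias, and for the caveat that a handful of essays cannot establish it.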
States should provide teachers and students with regular AI literacy trainings that center data science. Data Science 4 Everyone offers an excellent set of resources, so states do not have to build from scratch.
4. Create standards for safeguarding student data, but the responsibility to protect it must be shared by industry
One of the most serious risks states must tackle in their AI policies is the privacy and protection of student data. This is challenging, as it requires an understanding of complex federal privacy laws and rules such as FERPA and COPPA. Broadly, states must put guardrails around the sharing of student data to ensure that it’s safely stored, encrypted, and not sold to third parties. Additionally, states should offer guidance to school districts to help them select AI tools and platforms that safeguard student data. AI guidance from the state departments of education in Washington and Georgia, for example, addresses considerations for selecting products that protect the privacy of student data.
However, the responsibility to protect student data should not solely rest with school districts. Ed tech developers must also know and follow federal (and if applicable, state and local) privacy laws to ensure their products support rather than exploit students. Here is some safeguarding guidance we have shared with developers involved in the Learning Engineering Tools Competition.
5. Support State-Level AI R&D
Our final recommendation is for states to support AI in education R&D, however they can. While this recommendation won’t translate into language that can be included in a state’s AI in education handbook, it’s important for the effective implementation of AI in schools. As the Trump administration cuts federal investments in education R&D, states would be wise to make R&D a priority. (Here is a helpful resource from Education Reimagined, Transcend, and the Alliance for Learning Innovation to help states make education R&D a priority.)
States have budget constraints, but to the extent possible, they should support research and development at the intersection of AI and education. This could take many forms, from forging Research Practice Partnerships with local universities to building a K-12, AI-focused version of California’s Learning Lab.
States could also get involved with future LEVI cohorts to test out AI-based tools in their school districts. If you are a state leader who is interested in learning more, please reach out to Ulrich Boser or Tasha Hensley.
Conclusion
Whether states are ready or not, AI is transforming teaching and learning. The time is now for states to ensure their districts have the guidance they need to deploy AI safely and effectively.
The most useful state guidance will be use case-specific and end-user conscious. It will help districts select proven AI tools and focus AI literacy efforts on data science fluency. It will acknowledge that student plagiarism is a real challenge while pushing teachers to adapt the way they assess student learning. Helpful state guidance will make the safeguarding of student privacy paramount.
In addition to providing AI guidance to their districts, states should make strategic investments in proven AI-based interventions, as well as in the R&D that will result in more AI tools moving the needle on student outcomes.
States aren’t expected to be experts in AI, but they have a responsibility to learn from experts, as well as the states that have already put out guidance. Their school districts want support, resources, and guardrails. Let 2025-26 be the school year when districts all across the country get the support they need to deploy AI responsibly and capably.

Ulrich Boser
