Navigating the complex and evolving landscape of AI in education, the EDSAFE AI Alliance works to shape policies to harness the potential of AI safely and equitably. Coordinated by InnovateEDU, the global initiative brings together a diverse coalition of stakeholders to address the impacts of AI across the education sector. In a 5 Questions interview, Erin Mote, CEO of InnovateEDU and the EDSAFE AI Alliance, discusses the Alliance’s strategic initiatives, its role in shaping policy and promoting AI literacy, and the vision propelling its mission forward.
What is the EDSAFE AI Alliance and what should people know about it?
The EDSAFE AI Alliance, founded in 2020, is a global initiative coordinated by InnovateEDU and powered by a coalition of organizations representing stakeholders across the education sector. It provides global leadership for developing a safer, more secure, more equitable, and more trusted AI education ecosystem through a focus on research, policy, and practice. We aim to build an ecosystem that reflects the best practices for AI use in education. By joining forces and complementing rather than competing with stakeholders in the space, we can address one of the most pressing educational policy challenges of our time.
The EDSAFE AI Alliance supports the sector through policy leadership and advocacy centered on the SAFE framework; builds capacity through a global fellowship of 60 diverse leaders from industry, government and ministries, school districts, and nonprofits; and leads a network of policy labs taking an open-science approach to developing district-level policy.
If you want to understand what is happening at the intersection of AI and education policy, find the latest resources from the Alliance, or learn which issues are driving industry in AI and education, you should know about the EDSAFE AI Alliance.
Why is the EDSAFE AI Alliance important?
When the Alliance was formed in 2020, no one could have anticipated the consumer breakthrough of generative AI. Even as someone who has been working in tech policy for over a decade, I find the pace at which generative AI technology is evolving stunning. Its quick evolution and the uncertainty about how AI will be used in everything from elections to job automation lead to fear, which means there is a pull toward regulation and enforcement on a very short time horizon in the policy space. This means it is more critical than ever for the education use case to be at the table when policymakers are developing policy, working on regulation, and considering AI issues. EDSAFE AI has brought together leaders in education and organizations who are “doers” to speak in one voice for the safe, accountable, fair, transparent, and effective use of AI in education.
What's been the biggest surprise for you so far in your leadership of the EDSAFE AI Alliance?
The first is the bravery of some district and state leaders working within the policy labs. It’s brave to say, “I am going to work on this incredibly fast-moving issue alongside my school board,” and even braver to share that process with the world. Districts across the policy lab network, from Santa Ana to Lynwood, have released resources that will accelerate progress, increase dialogue, and serve as a model for the sector.
The second is the embrace of National AI Literacy Day on April 19. I was shocked by the groundswell of activity at 60 national events helping educators answer the question, “What is AI?” In addition to events with over a thousand students in Silicon Valley and at the U.S. Department of Education with the National Science Foundation and the White House Office of Science and Technology Policy, thousands of educators accessed their first professional development on AI. This is essential right now. Systemic inequity in education manifests in who has access to knowledge and use of AI, so focusing on AI literacy for all communities must be a priority. I’m excited for next year and how much bigger it will be. Mark your calendars for March 28, 2025.
How do you see the landscape of AI in education changing over the next five years, and how will that impact the work of the EDSAFE AI Alliance?
At EDSAFE, our policy agenda, anchored in the SAFE framework, focuses on four issues that we believe will become even more critical as AI evolves over the next five years: global coherence in AI policy, public utility infrastructure to support safe AI use in education, support for AI literacy as a foundational literacy, and the need to develop supports for rights-impacting technologies. We believe that building knowledge, not fear, will continue to be a core goal no matter how much the technology evolves, including what will surely be an accelerating trajectory towards AGI. I imagine the Alliance will grow as more organizations lead in AI and education. We’ll be focused on what infrastructure (technical, human, and policy) the sector needs to work with AI ethically and safely while ensuring that we advocate for access and equity in the technology’s use and application. I am already elated to see the partnerships EDSAFE AI is forming with our EDSAFE AI Industry Council and how bringing industry, research, and user voices (including students and parents) together is accelerating our progress.
What else should people know?
Build knowledge, not fear: that is what you can do as an individual, as an organization, and as an educator. There is a lot of noise in AI and education right now. Many people have suddenly become self-proclaimed experts in AI despite only beginning to use this technology a year or 18 months ago. Be cautious of expertise that doesn’t bring together a rich diversity of knowledge, experience, and opinions. There are researchers, industry leaders, and nonprofits who have been doing this work for years, and there is a complexity, particularly here in the United States, with the regulatory undergirding of child privacy, civil rights, and data protection laws, that makes it important to understand who you are taking advice from. I’ll leave you with the advice I give most district and state leaders who ask me what to do about implementing AI tools in education: good AI implementation is just good edtech implementation. Fall back on principles of access and equity, data security, data privacy, and data interoperability as you implement these tools, and you’ll have a solid foundation.