As artificial intelligence (AI) spreads throughout K-12 education, guarding against biased data has never been more important – especially for the diverse student populations who are impacted by AI design but are not always represented by those developing it.
AI learns from past data that human designers select for input, thereby potentially mirroring or amplifying existing stereotypes and biases in classrooms. According to Stanford University’s 2021 AI Index Report, among new AI PhDs in 2019 who were U.S. residents, 45.6% were white while only 2.4% were Black – the lowest percentage among all racial groups in the field. Computer Research Association survey data in Stanford’s index report further revealed that the percentage of white (non-Hispanic) new PhDs changed little over the past 10 years, accounting for 62.7% on average. Yet the AI programs these PhDs develop will impact countless students of color in the United States.
The lack of representation in a field that is transforming the education space is notable, but steps can be taken to improve diversity and ensure greater equity in education-related AI. Among them are commitments to build a more diverse pipeline of AI developers, establish better frameworks and more representative datasets, and increase outreach efforts that help underrepresented students view AI as a viable career path.
Without such efforts, increased AI in education could unintentionally reinforce existing bias or exacerbate it by cloaking it in perceived objectivity.
AI that is diversity-blind – algorithms that fail to understand or account for cultural and equity differences in everything from large language models and schoolwork to testing, essay grading, and even disciplinary measures – can adversely impact the learning outcomes and academic success of some of the nation’s most vulnerable learners.
“Black students face unconscious bias without technology,” said educator and unconscious bias coach Misty Freeman in a discussion of the topic with the National Education Association. “So having developers of AI that do not further perpetuate that bias is important.”
Build A Framework
It’s human nature to start with what one knows – even when those humans are working in the rapidly expanding frontier of artificial intelligence.
“Oftentimes tech companies didn’t really seem to understand the experience of Black and Brown students in the classroom,” Nidhi Hebbar, who co-founded the EdTech Equity Project, said in a Hechinger Report review addressing the ways edtech can worsen racial inequalities.
“When tech companies build products for schools, they work with more affluent and predominantly white schools or rely on their own educational experiences. The result? Designers fail to consider or imagine the needs or experiences of under-resourced schools, resulting in less consideration for cultural differences in tutoring, tests, essays, and more.”
Stanford’s report also highlights affinity groups such as Women in Machine Learning (WiML), Black in AI (BAI), and Queer in AI (QAI), which give voice to underrepresented AI developers and can help drive more inclusive AI outputs.
Implementing a framework that welcomes new voices into the AI space is key, and such frameworks can emerge through non-traditional avenues.
In 2020, Jim Boerkoel, a computer science professor at Harvey Mudd College in Claremont, California, secured a two-year grant from the National Science Foundation to begin cultivating a diverse cohort of AI researchers that accurately reflects society. His project, “A Consortium for Cultivating Future Artificial Intelligence Researchers,” introduced a series of initiatives and found that two key factors in fostering diversity in AI are cohort building and mentoring.
Meanwhile, other avenues to inclusivity are also being explored, including edtech competitions that can yield the next generation of learning algorithms and AI program designers.
At The Learning Agency Lab, which routinely runs or collaborates on edtech competitions, intentional efforts are made to ensure dataset inclusivity and to follow principles that promote a diversity-minded process. By doing so, organizers can help competitors develop algorithms that promote positive programming and enhance outcomes for all learners.
For instance, The Learning Agency team recently completed the “Robust Algorithms Thorough Essay Rating,” or RATER, competition.
“The goal was to create an algorithm capable of identifying elements in student writing and predicting the effectiveness of arguments presented in their compositions,” said Jules King, program manager with The Learning Agency Lab. Algorithms from previous competitions on student writing had been shown to exhibit bias across certain student demographics, but RATER was designed to reward algorithms that perform consistently across demographic subsets and are therefore more fair and equitable when applied in the real world. Like a ripple in a pond, “the solutions from this competition will then be used for a secondary challenge that will provide the algorithm to organizations that help students with writing. The challenge will include at least four organizations with at least 10,000 users each, meaning this algorithm will help thousands of students.”
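To make “perform consistently across demographic subsets” concrete, the sketch below scores a hypothetical essay-grading model within each demographic group and reports the gap between the best- and worst-served groups, where a smaller gap means more consistent performance. This is a minimal illustration only – the column names, sample data, and choice of metric are assumptions, not RATER’s actual evaluation pipeline.

# A minimal sketch of per-subgroup evaluation. All names and data here
# are illustrative assumptions, not the RATER competition's pipeline.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def subgroup_score_gap(df: pd.DataFrame, group_col: str) -> float:
    """Score predictions within each demographic subgroup and return
    the gap between the best- and worst-served groups."""
    scores = {}
    for group, rows in df.groupby(group_col):
        # Quadratic weighted kappa is a common automated essay-scoring metric.
        scores[group] = cohen_kappa_score(
            rows["human_score"], rows["model_score"], weights="quadratic"
        )
    return max(scores.values()) - min(scores.values())

# Hypothetical human and model scores for two demographic groups:
df = pd.DataFrame({
    "human_score": [3, 2, 4, 3, 1, 4, 2, 3],
    "model_score": [3, 2, 3, 3, 1, 4, 2, 2],
    "demographic": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(subgroup_score_gap(df, "demographic"))  # smaller gap = more consistent

A competition can then rank entries not only on overall accuracy but also on keeping this gap small, which is one way to reward models that serve every group comparably.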
Being more thoughtful about datasets and welcoming different voices into new spaces can deliver a more comprehensive and collaborative approach to AI design. The result is truly representative data and frameworks that are more considerate of large institutional structures.
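As one concrete form that thoughtfulness can take, a minimal sketch of a dataset audit follows: before training, it checks how each demographic group is represented in the data and flags groups that fall below a minimum share. The column names and the 10% threshold are illustrative assumptions, not a prescribed standard.

# A minimal sketch of a dataset representation audit. Column names and
# the 10% threshold are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          min_share: float = 0.10) -> pd.DataFrame:
    """Flag demographic groups that fall below a minimum share of the data."""
    shares = df[group_col].value_counts(normalize=True).rename("share")
    report = shares.to_frame()
    report["underrepresented"] = report["share"] < min_share
    return report

# Hypothetical training set for an essay-grading model:
train = pd.DataFrame({"demographic": ["A"] * 70 + ["B"] * 25 + ["C"] * 5})
print(representation_report(train, "demographic"))  # group C is flagged at 5%

Audits like this do not fix bias on their own, but they surface gaps early enough for teams to collect more representative data before a model is trained.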
AI By Students – Not Just For Students
A diverse pipeline of AI developers sets the stage for more accurate and equitable edtech to flourish. Research suggests that building AI literacy early in the K-12 years is key to inspiring students’ future interest in the field, not just as consumers of AI but as creators of its future iterations. Yet studies show that many underrepresented students do not view AI or STEM as fields in which they would be successful. This is especially true among female students.
A 2023 Gallup/Walton Family Foundation survey found that “Gen Z females are nearly 20 points more likely than males to say they are not interested in a STEM career because they don’t think they would be good at it.” The survey further revealed that Gen Z females are exposed to fewer STEM topics in school than males.
Missed opportunities to engage students of color in AI development also carry serious implications, not only for educational achievement but for economic advancement as well. A 2023 report by McKinsey & Company found that by 2045, “generative AI could increase the wealth gap between Black and White households by $43 billion annually.”
Such a gulf need not occur, however. The report’s authors note the positive opportunities AI can bring to all students and communities but highlight that accessibility is essential and that “digital literacy must now include generative AI and be widespread across educational curricula for all students.”
As AI continues to transform education, now is the time to cultivate a broad pipeline of diverse developers who can bring their knowledge, insights, and multifaceted experiences into the AI ecosystem. In this way, edtech leaders can enable truly equitable, accessible, and accurate data, algorithms, and learning programs that benefit every student.