As a researcher deeply involved in artificial intelligence, I believe we are at a turning point in AI’s impact on education. Throughout this article, I will refer to “emerging professionals” or “early-career professionals”: individuals, like myself, in the first decade of their careers in educational technology, data science, and AI research. This cohort is uniquely positioned between the established leaders who pioneered educational technology and the students currently in our educational systems. The next generation – my generation – must lead by example, because AI will shape not only the direction of technology but also the ethics and values it reflects. Since data touches every facet of our lives, from social interactions to education, work, and civic engagement, the next generation of learning engineers must make ethical governance and responsible data practices a top priority.
As an AI researcher, I believe data offers unmatched potential to transform education, strengthen communities, and improve decision-making. AI systems can provide real-time feedback, personalize learning experiences, and surface new insights through data analytics. Thanks to this data-centric strategy, institutions are making well-informed decisions that can directly improve educational fairness and student outcomes. These prospects, however, carry substantial risk. Through my own research, I have observed how algorithmic biases rooted in past injustices can perpetuate and amplify prejudice. Misuse of data, particularly in education, threatens privacy and undermines the transparency and inclusivity of AI systems. Because AI algorithms can encode bias, underprivileged communities may continue to suffer, and existing social divides may be reinforced, in the absence of adequate oversight.
A Duty To Safeguard Children
As emerging professionals in the field, we must promote risk management that involves both awareness and action. To safeguard individual rights – especially those of vulnerable groups like children, who are increasingly the focus of data collection – responsible governance frameworks must be in place. Additionally, we need to promote data literacy programs to equip young people with the knowledge and skills to critically assess how their data is being used, as well as how it should be protected.
There is a growing disconnect between the amount of data being collected and how aware the people it describes are of its use. Data collection frequently occurs without people fully comprehending its extent or intent. For instance, in a recent study on AI literacy and educational technology implementation, a learning management system collected detailed behavioral data, including students’ reading speeds, time spent on specific problems, and even their mouse movement patterns – all without students or parents being aware of this granular tracking. Another example is the use of emotion recognition AI in some virtual classrooms, which analyzes students’ facial expressions during online sessions, often without explicit disclosure of this capability to families. AI-driven systems in schools collect large volumes of data about children, tracking attendance, academic achievement, and even emotional responses. Yet how much does the student or family know about how this data is used?
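To make the scale of such tracking concrete, here is a minimal sketch of what a single logged interaction might look like. The field names and values are hypothetical illustration, not drawn from any real product:

```python
# Hypothetical sketch of one behavioral event a learning platform
# might record; names and values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class InteractionEvent:
    student_id: str
    problem_id: str
    seconds_on_problem: float
    words_per_minute: float   # inferred reading speed
    mouse_path_points: int    # granularity of cursor tracking

event = InteractionEvent("s-1042", "alg2-07", 184.5, 212.0, 3391)

# A single interaction already yields several behavioral signals;
# multiplied across a school year, this becomes a detailed profile.
print(asdict(event))
```

Even this toy record captures reading speed and fine-grained cursor data, which is exactly the kind of collection families rarely know about.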
As a researcher, I advocate for greater transparency and education. By fostering data literacy in school curricula, we can help students become informed participants in this digital landscape. It’s not enough to tell students their data is collected – we need to explain the implications, from privacy concerns to the potential benefits of data-driven insights.
Interactive seminars, case studies, and real-world examples can help get young people talking about data. These programs should allow early-career professionals to critically evaluate how governments and institutions use data, in addition to focusing on protecting their personal data. A more knowledgeable generation will be better able to hold companies responsible for their use of data and to demand ethical practices.
Early-Career Professionals' Role In Data Policies
As early-career professionals in ed tech and AI, we occupy a unique position in the data governance landscape. We represent the first generation of learning engineers and ed tech specialists who have experienced the integration of AI-driven decision-making throughout both our educational and professional journeys. From AI-powered adaptive learning systems in elementary schools to algorithmic college admissions processes, our educational experiences have been uniquely shaped by these technologies. This gives us crucial firsthand insight into both the benefits and potential pitfalls of an AI-dominated sector.
Unlike previous generations of professionals who had to adapt to these technologies mid-career, emerging ed tech specialists like myself have developed an intuitive understanding of AI’s potential and limitations from our earliest professional experiences. For example, we understand firsthand how AI tutoring systems can both support and sometimes hinder learning, and how social media algorithms can shape our worldview. This native understanding makes us uniquely qualified to help shape policies that balance innovation with ethical considerations.
From education systems to social platforms, data governance frequently feels distant and unreachable, yet this is precisely why early-career professionals in the field need to engage. We can serve as liaisons between technologists, legislators, and the communities they serve. I recently participated in a year-long initiative at a large urban school district where we developed guidelines for AI-powered educational tools. Students highlighted privacy concerns about emotion-tracking features that teachers hadn’t considered, while educators emphasized the need for transparency in algorithmic grading systems. The resulting policy framework included mandatory disclosure requirements for AI features and regular student-teacher feedback sessions. This collaborative approach led to higher student engagement and better acceptance of educational technology tools, demonstrating the value of including all stakeholders in policy development. Youth participation in policy conversations is more than symbolic; young people have valuable perspectives to share, particularly regarding the ethical use of technology.
I encourage my peers to get involved in these conversations, whether through advocacy, research, or direct participation in policy development. When we understand the systems that govern our data, we can influence policies that protect rights and ensure that data and AI technologies are used to advance social good rather than further entrench inequality.
Data Governance And AI
A multifaceted strategy is necessary for responsible AI and data governance. In particular, the administration of AI systems in education must guarantee that these technologies are developed and applied with fairness, transparency, and accountability as their primary goals. As my research has shown, AI is not intrinsically neutral; rather, its effects are contingent upon the ethical frameworks that have informed its creation.
These frameworks can be significantly shaped by the involvement of emerging professionals in ed tech. While digital literacy focuses on understanding and using digital tools effectively, the skills needed for AI interaction go further. Students need to understand not just how to use AI systems, but how to critically evaluate their outputs, recognize potential biases, and actively participate in shaping their development. This includes understanding concepts like algorithmic fairness, data privacy implications, and the ethical considerations unique to AI systems – topics that go beyond traditional digital literacy education.
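To illustrate one of the algorithmic-fairness concepts mentioned above, here is a minimal sketch of a demographic-parity check. The decisions and group labels are hypothetical illustration data, not results from any real system, and a rate gap is a signal worth investigating rather than proof of unfairness:

```python
# Minimal demographic-parity sketch with hypothetical data.
def selection_rate(decisions, groups, group):
    """Fraction of members of `group` who received a positive decision."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

# Hypothetical outcomes of an automated admissions screen (1 = admitted).
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")  # 0.75
rate_b = selection_rate(decisions, groups, "B")  # 0.25
gap = abs(rate_a - rate_b)                       # 0.5

# Demographic parity compares selection rates across groups; a large
# gap flags the system for closer audit.
print(rate_a, rate_b, gap)
```

A check this simple is exactly the kind of critical evaluation of AI outputs that goes beyond traditional digital literacy: it requires knowing what to measure and how to interpret the result.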
Governments should enact laws that require AI systems to pass ethical audits, guarantee algorithmic transparency, and actively engage early-career professionals in conversations about governance. Young people can influence the direction of AI and data system development by pushing for a more inclusive, cooperative approach that prioritizes equity, inclusivity, and ethical responsibility.
It is the duty of emerging ed tech leaders to shape the future of data governance and AI ethics. Based on my work as an AI and ed tech researcher, I believe it is essential to support laws that uphold people’s rights while harnessing the enormous potential that data presents. In doing so, rising learning engineers and ed tech professionals can help create a digital future where equity, human dignity, and societal well-being are valued. Together, we can ensure that the next generation of inventors and leaders inherits a world where data and AI are used responsibly and for the greater good.
