Last January, just two months after its release, ChatGPT hit 100 million active users, breaking the record for the fastest-growing user base. That unprecedented adoption foreshadowed the technology’s swift infiltration into classrooms nationwide. Since then, the rapid spread of generative AI platforms into schools has been perceived as “intensely negative” by many teachers. Little wonder that so many teachers and schools have attempted to restrict or outright ban the use of generative AI in their classrooms.
Few things draw criticism faster than a new technology that disrupts the traditional education system for future learners.
Socrates criticized writing, fearing it would weaken memory and lead to superficial learning. In the Middle Ages, scholars bemoaned the “sensual character” of institutions that abandoned Latin as the language of instruction. Parents once rallied against public education itself, convinced it would erode family values. And when calculators were introduced in classrooms, critics fought back, warning that students would lose their ability to perform basic arithmetic.
More recently, Wikipedia was satirically labeled “worse than crack” (and not-so-satirically banned from use in many schools). Meanwhile, the introduction of laptops into classrooms was met with resistance from those fearing they would distract students more than aid them in learning. Today, generative AI is sparking a new wave of concern: critics worry that students will become too reliant on AI, undermining their critical thinking and writing skills, and many schools are vainly attempting to ban or restrict access.
Yet, despite this pattern of resistance – and the eventual adoption of each new technology in education – the unease surrounding generative AI is not without merit. The rapid advancement and potential misuse of these tools raise valid concerns about academic integrity and the development of essential skills.
Banning Or Blanket Labeling GenAI Tools As Cheating Is A Losing Game
Some studies have been able to distinguish AI-written from human-written text with surprising accuracy. In fact, The Learning Agency Lab’s Yu Tian, along with Scott Crossley, recently received results from a Kaggle competition in which the top entrants were able to detect AI-generated essays with 98 percent accuracy. Other researchers have detected AI-generated text at similar rates.
While detection methods have shown promising accuracy in distinguishing AI-generated text from human writing, they don’t address the reality of how students are using these tools. Most students won’t simply submit an essay fully written by AI; instead, they use generative AI to refine their ideas, punch up a paragraph, or enhance the clarity of their work. This makes the focus on AI detection in writing assignments somewhat narrow and out of step with the broader ways AI is already being integrated into student learning.
Beyond writing, generative AI is already being used by more than half of students between ages 14 and 22 to assist in problem-solving, conduct research, and develop study strategies across a variety of subjects. As AI becomes more integrated into daily tasks, restricting its use in classrooms creates a disconnect between educational practices and real-world applications. The push to ban these tools assumes that students are solely using them to cut corners, when in fact, many are employing AI in ways that enhance their understanding and creativity. Instead of playing a constant game of monitoring, schools should focus on teaching students how to use AI as a legitimate learning tool.
Moreover, the cat-and-mouse game of trying to detect AI use is a resource-heavy approach that risks villainizing students. Spending time and effort deciphering detection algorithms and enforcing AI bans diverts attention from more constructive educational goals. Worse, it fosters an environment of suspicion and punishment rather than one of curiosity and innovation. Instead of framing the use of AI as cheating, educators could reimagine assessments and learning objectives to better align with a future where AI tools are a common part of both academic and professional life.
Failure To Explicitly Teach How To Use These Tools May Widen Achievement Gaps
Failure to explicitly teach students how to use AI tools across a variety of domains may widen achievement gaps, as students with access to better resources and guidance outside of school could gain an advantage over their peers. This gap is already evident in professional settings, where AI is most likely to be used by highly educated, white, male employees making more than $100,000 per year.
While wealthier students may have the opportunity to experiment with AI tools in extracurricular settings or with private tutors, less privileged students may not receive the same exposure. This creates a digital divide where students with more resources are better prepared to leverage AI for learning, research, and problem-solving, while others fall behind.
Generative AI has the potential to close achievement gaps, but only if it is integrated thoughtfully into a high-quality curriculum. Without proper instruction on how to use these technologies, students are left to navigate them on their own, leading to uneven outcomes. For instance, students who are taught to use AI for refining essays or brainstorming project ideas may perform better in assessments, while those without that guidance may struggle to compete. This divide could deepen as AI continues to evolve and become more entrenched in educational and professional settings.
To mitigate this risk, schools need to ensure that AI instruction is equitable, providing both teachers and students with the skills and knowledge necessary to benefit from these tools. Preliminary research already shows the potential for AI tools to narrow achievement gaps; equitable instruction would ensure that all students – not just those with access to external resources – are prepared to thrive in an AI-driven future.
Treat GenAI The Way You Treat Google Search
One way to approach generative AI in the classroom is to treat it like a Google search. Like Google, AI tools are accessible to everyone, but students are more likely to engage critically with them in structured classroom settings rather than being left to navigate them independently – or, worse, attempting to use them covertly.
My humble pitch to educators is straightforward: treat generative AI the way you treat Google search. While there are exceptions – much like the rule “i before e except after c” – this simple heuristic provides a practical framework for navigating the evolving landscape of AI-driven education. I’ve taught college-level English courses for nearly nine years, and I suggest these potential guardrails for thinking about integrating AI in any classroom:
- Rarely ask students to submit their Google searches, and rarely ask students to submit their prompts.
- Rarely ban Google search, and rarely ban all GenAI tools.
- Rarely expect students to use only one source from a Google search, and rarely expect them to rely on one AI-generated output without refinement.
- Rarely give assignments that can be completed by copying Google search snippets, and rarely design tasks that can be solved by simple AI outputs.
- Rarely penalize students for Googling something during their research process, and rarely penalize them for consulting AI to enhance their drafts.
- Rarely assume that a student who used Google search failed to engage critically, and rarely assume that a student who used AI avoided deep thought.
Each of these rules has exceptions, both for Google Search and for GenAI. For example, when teaching students to search academic databases, I do ban Google Search for an assignment. And when I ask students to conduct a systematic topic review, I do ask them to turn in their search queries. For the most part, however, if my assignment is bland enough that an LLM-generated answer can pass muster, then I’m willing to accept responsibility for that and develop a more rigorous prompt the next time. “This feels generated” is a nudge to personalize, not penalize.
LLMs, like search engines, are here to stay. Rather than seeing these tools as a threat, educators can guide students to use them effectively. This involves teaching students to ask better questions, identify bias, and critically evaluate AI-generated content, much like they do with search results.
Ultimately, AI should enhance, not replace, human thinking and creativity. The challenge for educators is to integrate it in ways that foster curiosity, critical thinking, and ethical use. By embracing AI, schools can better prepare students for the future, equipping them to engage thoughtfully with the world. Just as Google search has become an indispensable tool, generative AI may soon serve a similar role in expanding knowledge. It’s time we teach all students how to use it well.
Jason Godfrey, PhD, directs data science at The National Collaborative for Accelerated Learning. With a PhD from Michigan and a Harvard fellowship, his research spans educational equity, AI in learning, and writing assessment. His recent work appears in the Journal of Writing Assessment, College Composition and Communication, and MLA Profession.
1 thought on “Treat GenAI Like You Treat Google Search”
Hi Jason, as someone interested in this space, it was very refreshing to read your take! I completely agree that labelling GenAI tools as cheating is a losing game, where for the sake of institutional efficiency, the victim is the student. For me personally, the biggest drawback of this probabilistic approach is how it also penalizes or demotes improvements in student writing – since it creates a fear of interacting with their own paper so as not to trigger the detection system.
As someone born in the third world, the widening achievement gap also hit home for me, unfortunately. Forget AI – even when the Web became a thing, it was purported to be a boundary-less version of the world that would be an ‘equalizer’ of sorts – that dream never came to be. The same happened with crypto, where the third world got access much later, and even setting up mining rigs was near impossible since there is sometimes not even enough electricity for personal consumption. All of this to say that the achievement gap problem perhaps isn’t unique to genAI/LLMs – it’s just an economic and social problem that technologies multiply, since technologies are effectively multipliers: the good become better, and the bad become worse.
I do have a slight disagreement though about point 1, which is “Rarely ask students to submit their Google searches, and rarely ask students to submit their prompts.” In my opinion, prompts are more explorative than searches, and a current weakness we have in today’s world is that we grade students and not their process. Since prompts elaborate more on the process, wouldn’t they be a good supplemental source? Just wondering what your rebuttal to that will be.