There’s a notable twist amid the current debate over ChatGPT: The more that AI races toward general intelligence, the more it demands metacognition – from both people and the machines themselves.
When it comes to technology, researchers are finding that metacognition, or thinking about thinking, is an important capability to build in, especially as AI advances into varied but crucial domains. As AI researcher Bonnie Johnson writes, metacognition is becoming a key step “in the evolutionary advancement of AI systems—promoting self-awareness.”
But it’s not just AI that needs to think about thinking. So do its users. Because while the widespread advent of AI can mean everything from greater administrative efficiency to improved social outcomes, it can also exacerbate longstanding cognitive weaknesses, including a lack of metacognition. In this sense, people are just as susceptible as AI is to a lack of self-awareness.
So what exactly is metacognition? And how do we get more of it in our schools – and in artificial intelligence systems? Understanding and teaching this critical skill is key, but it’s not something that comes easily.
Metacognition Explained
I’ve written about metacognition before. Simply put, this practice of thinking about thinking helps to flag potential problems and conflicts in reasoning. It’s something that people have done since the dawn of advanced thought.
Writings on metacognition date back at least to the ancient Greeks – the concept appears in Plato and Aristotle – and it was later taken up by philosophers such as John Locke. However, the term ‘metacognition’ wasn’t coined until the 1970s, when American developmental psychologist John Flavell introduced it as a result of his research on children’s knowledge.
At its core, metacognition encourages asking questions of ourselves and focuses not on what to think but how to engage in the practice of thinking. Questions like: Why is this true? How do I know this? What am I missing? These queries have several benefits; among them, they keep people questioning information that might otherwise be packaged and presented as fact.
Metacognition is often the safety belt of learning. If ChatGPT, for instance, says that George Washington was the president of the Soviet Union, it’s metacognition that helps us figure out the error. Metacognition is also key to identifying bias in AI and recognizing that the technology might be discriminating against, say, women.
The challenge is that metacognition is hard. To paraphrase Daniel Kahneman, it requires thinking slow. It’s a practice that requires skill in part because it’s an act of self-monitoring, of examining your own thinking. Indeed, I often recommend that people explain ideas to themselves: Does this make sense? How does this work? Asking these questions is key to learning – and uncovering errors in our thinking or the thinking of others – because it forces a type of observation.
AI Meets Metacognition
As researchers aim to improve AI, they’re trying to build more metacognition into deep learning systems. This work has taken different approaches. Some groups are focused on logic and other forms of reasoning, and according to a new National Academies report, some researchers believe that so-called “transformer models” hold a lot of promise when it comes to math reasoning in particular. The rub is that the most successful AI approaches to date, these transformer-based models included, are probabilistic: at bottom, they predict the next word in a text. That makes it hard to “bake” logic into these systems.
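To make the “predict the next word” point concrete, here is a minimal sketch in Python of probabilistic next-word prediction using a toy bigram counter. This is an illustration of the underlying objective, not how production models work – real systems use neural networks trained on vast corpora – but the goal is the same: estimate which word is statistically likely to come next.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then predict the statistically likeliest continuation.
corpus = "george washington was the first president of the united states".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints whichever word most often followed "the"
```

Notice that the prediction is driven entirely by frequency, not by whether the continuation is true or logically valid – which is exactly why logic is so hard to bake into these systems.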
Others are looking at developing common-sense protocols for AI, and researchers at UCLA have argued that AI needs to ask more “how” and “why” questions. On this view, AI should be capable of policing itself; it needs “self-healing and self-management,” as Johnson writes. Such self-monitoring seems particularly powerful as a way to help AI reduce bias, and it is key to developing AI that is safer from internal errors and external attacks. That said, it has an obvious limitation. After all, it’s asking AI to police itself.
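What might self-policing look like in practice? Below is a minimal sketch, assuming a hypothetical ask_model function that stands in for a call to any language-model API – the function name and the prompts are illustrative, not a real library interface. The idea: generate a draft answer, ask the model the same metacognitive questions a careful person would ask of themselves, then revise.

```python
# Sketch of the "AI policing itself" idea: draft, critique, revise.
# `ask_model` is a hypothetical placeholder for a real model call.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    raise NotImplementedError("wire this to an actual language model")

def answer_with_self_check(question: str) -> str:
    draft = ask_model(question)

    # The metacognitive step: turn the human questions
    # ("How do I know this? What am I missing?") on the draft.
    critique = ask_model(
        f"Question: {question}\nDraft answer: {draft}\n"
        "How do you know this is correct? What might be missing or biased?"
    )

    # Revise in light of the critique before giving a final answer.
    return ask_model(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Give a revised final answer."
    )
```

One design choice worth noting: the critique step can be routed to a separate model or even a rule-based checker, which partially softens the “grading its own homework” problem – though it doesn’t eliminate it.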
In the end, the problem is that metacognition requires robust consciousness, whether in humans or machines, and we’re far from it on both counts. People often don’t engage in reflective thinking, and despite the hype, AI remains far from self-awareness.
Thus, at least in the short term, a key solution is making sure people today are getting enough practice in metacognitive skills. To this end, far more should be done to support students at all levels as they think about their thinking.
As society continues on its journey toward an AI-infused future, it is vital that the routine practice of metacognition not only comes along for the ride but also helps steer people away from biased data and faulty logic, and toward truly objective information and learning.
This article first appeared on Forbes.com