AI literacy and the future of thinking: Why smarter tools should help us think, not think less
Date: Nov 7, 2025
Reading time: 5 min
Author: Liminary
AI tools shouldn’t replace human thought; they should deepen it.
1. The paradox of AI literacy
Something counterintuitive is happening with AI adoption.
According to the Harvard Business Review, “Individuals with lower AI literacy are more likely to embrace AI… Conversely, those with higher AI literacy, who understand the mechanics behind AI, tend to lose interest as the mystique fades.” [1]
That paradox captures a deeper tension in how we’re adapting to this technology. The people most eager to adopt AI are often those who understand it the least, while those who do understand it approach it with caution.
For society, that asymmetry is risky. And for professionals in high-cognition fields such as consulting, research, and investing, it raises a hard question: Are our most capable thinkers becoming too comfortable letting AI think for them?
2. When familiarity breeds overconfidence
New research from Microsoft and Carnegie Mellon University found that “higher confidence in GenAI is associated with less critical thinking… GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.” [2]
In other words, as we grow comfortable using generative AI, we tend to scrutinize outputs rather than engage deeply with ideas. We check if something “sounds right,” not if it is right.
That might sound like a small cognitive shift, but for knowledge workers, it’s enormous. When the model drafts the first version of your strategy memo or research summary, it’s tempting to edit instead of originate. The thinking becomes reactive, not generative.
The danger isn’t laziness; it’s cognitive outsourcing. AI quietly trains a layer of professionals to polish instead of probe, to produce instead of reason.
3. The quiet risk for knowledge workers
This risk is particularly acute in fields built on judgment.
Consultants are rewarded for frameworks that seem original but often start from standard models.
Researchers synthesize literature, an activity AI already mimics shockingly well.
Investors must discern patterns others miss, yet AI summaries can nudge everyone toward the same consensus.
If these tools make outputs look equally polished across a team, it becomes harder to tell who’s actually thinking.
The irony is that the very people who prize intellectual rigor may inadvertently flatten their edge. When every deck, paper, or memo looks brilliant on the surface, insight becomes the scarce resource.
That’s why AI literacy for knowledge workers has to mean more than prompt engineering. It’s about knowing when not to use AI: when to slow down, wrestle with ambiguity, and think for yourself.
4. The right role for AI: clearing space for thought
Not all the news is grim. Used well, AI can amplify deep thinking rather than replace it.
Studies at Harvard Business School describe the “jagged technological frontier” of AI adoption: when a task sits within AI’s current capabilities, productivity and quality both rise, but when it sits outside that frontier, quality can fall dramatically. [3]
That distinction matters for professionals whose work blends analysis, synthesis, and judgment. Tasks like data cleaning, formatting, and drafting summaries are well within AI’s frontier, and offloading them frees up time for higher-order reasoning.
The same dynamic played out in a Boston Consulting Group field study with 480 consultants. Generative AI helped less-experienced consultants perform at the level of more seasoned peers on structured tasks, but not on open-ended, judgment-driven work. [4]
That’s the line we need to manage: AI should clear the cognitive underbrush, not cut down the forest of thought.
For researchers, that might mean using AI to surface relevant studies or structure notes, but still taking the time to evaluate methodology and meaning yourself. For consultants, it might mean using AI to generate hypotheses, not conclusions.
5. Redefining AI literacy
If AI is changing how we think, then AI literacy can’t just mean knowing how to use ChatGPT. It must mean understanding when and why to use it.
We can define AI literacy as the ability to engage critically, creatively, and responsibly with artificial intelligence systems. It’s as much about values and reflection as it is about technical skill. Indeed, it’s about metacognition: awareness of how AI shapes your perception and reasoning (OECD, 2024).
For knowledge professionals, that might look like:
Recognizing when automation changes the question you’re asking.
Knowing when to challenge the model’s framing rather than refine it.
Designing workflows that preserve space for human synthesis.
True literacy, in other words, is not about being faster with AI, but wiser because of it.
6. Building AI that strengthens thinking
If there’s one principle to carry forward, it’s this: the goal isn’t to think less; it’s to think better.
We can build AI that enhances human judgment instead of dulling it. That means designing systems that prompt reflection, not just production.
As UNESCO warns, “The rapid advancements in artificial intelligence (AI) have widened the digital divide, creating what is now known as the AI divide… The most marginalized communities bear the brunt of this divide.” [5]
That divide isn’t only economic; it’s cognitive. It separates those who use AI to extend their reasoning from those who let it replace their reasoning.
At Liminary, that distinction defines our mission. We’re building a knowledge management and recall tool that enables the right piece of your knowledge to find you when you need it, so you can stay in flow and think deeply without interruption.
AI should be a partner in reasoning, not a substitute for it. The future of work won’t belong to those who automate thinking; it will belong to those who cultivate it.
——
Sources:
[1] "Why understanding AI doesn't necessarily lead people to embrace it" by Chiara Longoni, Gil Appel and Stephanie M. Tully. Harvard Business Review, July 11, 2025. https://hbr.org/2025/07/why-understanding-ai-doesnt-necessarily-lead-people-to-embrace-it
[2] "The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers" by Hao-ping (Hank) Lee et al. CHI '25, April 25, 2025. https://dl.acm.org/doi/10.1145/3706598.3713778
[3] "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity." by Fabrizio Dell'Acqua et al. Harvard Business School Working Paper 24-013. 2023. https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf
[4] "GenAI Doesn’t Just Increase Productivity. It Expands Capabilities." by Daniel Sack et al. BCG, September 5, 2024. https://www.bcg.com/publications/2024/gen-ai-increases-productivity-and-expands-capabilities?utm_source=chatgpt.com
[5] "AI literacy and the new Digital Divide - A Global Call for Action" by Susan Gonzales. UNESCO, August 6, 2024.https://www.unesco.org/en/articles/ai-literacy-and-new-digital-divide-global-call-action