Why the Best AI Tools Don’t Remove All Friction from Knowledge Work

Date: Nov 7, 2025

Reading time: 5 min

Author: Liminary

The best AI tools for knowledge workers don’t make everything faster. They make the right things faster.

In an era obsessed with optimization, AI promises frictionless productivity: instant summaries, automatic drafts, seamless recall. But experienced researchers, consultants, and analysts know that some forms of friction are essential. The effortful parts of their process, the parts that feel slow, are often where the deepest insights emerge.

In this article, we’ll explore why friction isn’t always the enemy, when AI helps versus when it harms, and what good AI knowledge assistance really looks like for people whose value comes from thinking, not just doing.

The Rise of Frictionless Productivity

Most modern AI and knowledge management tools are built around one central idea: remove friction, increase speed.

That principle makes sense for mechanical or repetitive work. But in knowledge work, where the bottleneck is often cognition, not coordination, friction plays a more complex role.

Research on cognitive offloading offers an important caution. As cognitive scientists put it, “The long-term reliance on automated processes could also lead to cognitive ‘skill decay’ where a developed ability deteriorates over time” [1]. In other words, when we outsource too much of our thinking to external tools, our own mental muscles atrophy.

This doesn’t mean we should reject automation. It means we must become intentional about what we automate. The best AI tools recognize that the goal is not to remove all friction, but to remove the wrong kind of friction.

When AI Helps vs. Hurts Thinking

The now-famous BCG study on consultants using generative AI offers a nuanced picture of AI’s impact on performance.

For 18 realistic consulting tasks “within the frontier of AI capabilities, consultants using AI were significantly more productive…and produced significantly higher quality results (more than 40% higher quality compared to a control group)” [2].

However, when the task sat outside AI’s frontier, where judgment, synthesis, and domain expertise mattered more, “consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI” [2].

In other words, AI amplifies human performance when the work is well-suited to automation, but degrades it when the task demands deeper reasoning.

For knowledge workers, this distinction is crucial. AI knowledge assistance should support cognition, not substitute for it.

Mechanical Friction vs. Cognitive Friction

Not all friction is equal.

Mechanical friction refers to tedious, low-value effort: formatting slides, searching for the right file, rewriting a long document for a different audience. Removing this kind of friction frees up attention for the parts of the job that matter.

Cognitive friction, on the other hand, is the productive struggle that underlies deep understanding. It’s what happens when a researcher takes notes by hand, or when a consultant manually reviews past projects to spot patterns.

Learning scientists have long recognized this principle. Desirable difficulties can enhance learning by triggering memory processes that support long-term retention and transfer [3]. When effort is applied to the right kind of challenge, it strengthens comprehension and memory.

That’s why the best AI knowledge management tools aren’t those that automate everything. They’re the ones that distinguish between mechanical and cognitive friction, and preserve the latter.

Why Editing AI Output Often Feels Harder Than Starting Fresh

Many professionals report that editing AI-generated drafts feels more draining than writing from scratch. Research from translation studies explains why.

As Maarit Koponen observed in a study of machine translation post-editing, working from high-quality machine translation can increase translator productivity, but working from poor machine translation often costs more effort than it saves [4].

The same holds true for AI-generated text. When an AI model produces mediocre output, the cognitive load of untangling and reworking it outweighs any time saved. Worse, the AI’s framing can subtly shape how we think, reducing our sense of agency and originality.

That’s why experienced knowledge workers often prefer to use AI for structuring, recalling, or summarizing, rather than for generating complete first drafts without any guidance about the key points they want to make.

Automation Bias and the Attention Problem

There’s another reason why professionals resist over-reliance on AI: the psychology of automation bias.

In classic human factors research, “Automation bias results in making both omission and commission errors when decision aids are imperfect…occurs in both naive and expert participants, [and] cannot be prevented by training or instructions” [5].

In other words, once automation is introduced, people’s attention patterns shift: they stop monitoring critically. As the same researchers note, “Complacency and automation bias represent different manifestations of overlapping automation-induced phenomena, with attention playing a central role” [5].

For high-stakes, high-ambiguity work, like research, investing, or strategic analysis, this is dangerous. It explains why experts instinctively maintain some friction in their process: friction keeps them mentally engaged.

Designing AI Tools That Enhance, Not Erode, Thinking

If the goal isn’t total automation, what should AI knowledge management tools do?

The most valuable tools remove mechanical friction while protecting the conditions for cognition. They handle tasks like retrieval, recall, and organization, so humans can focus on reasoning and judgment.

In practice, this means the best AI tools for knowledge workers should:

  1. Remove mechanical friction. Eliminate tedious, repetitive effort that doesn’t contribute to insight.

  2. Preserve cognitive friction. Keep the user mentally active where understanding and judgment are formed.

  3. Enhance recall. Surface relevant ideas or past work when they’re needed, reducing the cognitive load of memory.

  4. Support pattern recognition. Help users see connections without drawing the conclusions for them (a rough sketch of this idea follows the list).
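To make points 3 and 4 concrete, here is a deliberately minimal sketch in plain Python. It is purely illustrative, with made-up note data, and is not a description of how Liminary or any other product is implemented: the tool ranks the user’s own notes against what they’re currently working on and resurfaces the most relevant ones verbatim, stopping short of drawing conclusions from them.

```python
# Toy sketch of "surface, don't synthesize": rank saved notes by simple word
# overlap with the current question and return the top snippets untouched,
# leaving interpretation and synthesis to the reader. A real tool would use
# semantic search, but the division of labor is the point being illustrated.
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def surface_notes(query: str, notes: list[str], k: int = 3) -> list[str]:
    """Return the k most relevant notes, verbatim, ranked by word overlap."""
    q = tokenize(query)
    scored = [(sum((tokenize(n) & q).values()), n) for n in notes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [note for score, note in scored[:k] if score > 0]

# Hypothetical notes a consultant might have collected over past projects.
notes = [
    "2023 interview: client churned because onboarding took six weeks.",
    "Competitor teardown: their onboarding is fully self-serve.",
    "Board memo draft: pricing experiments for Q3.",
]
print(surface_notes("why do clients churn during onboarding?", notes, k=2))
```

The detail worth noticing is what the function does not do: it never paraphrases or merges the notes, so the synthesis, the productive cognitive friction, stays with the human.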

The next generation of personal knowledge management tools will succeed not by replacing human thinking, but by structuring and recalling knowledge in ways that extend it.

The Future of Thoughtful AI

As AI becomes embedded in every workflow, knowledge workers face a fundamental design choice: do we want tools that think for us, or tools that think with us?

The first path leads to passivity and over-reliance, not to mention lower-quality work. The second fosters augmentation: AI as a partner that sharpens human judgment rather than dulling it. We'd argue strongly that the industry should move down the second path.

Because in the end, not all friction is a flaw to eliminate. Sometimes friction is the feature that keeps us thinking.

This philosophy underpins a new class of AI knowledge assistance platforms like Liminary, which are designed to recall, connect, and contextualize your own knowledge. Liminary doesn’t try to replace human synthesis. It ensures your ideas and insights resurface at the right time, so you can stay in flow and focus on higher-order thinking.


Want to explore further?

The research curated for this blog post, beyond just the references below, can be explored here. Ask your own questions and explore the topic for yourself!


References

[1] Risko, E. F., & Gilbert, S. J. (2016). Cognitive Offloading: How the Mind Extends Itself. Trends in Cognitive Sciences. https://linkinghub.elsevier.com/retrieve/pii/S1364-6613(16)30098-5

[2] Dell'Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity. Harvard Business School Working Paper 24-013. https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7282.pdf

[3] Bjork, R. A., & Bjork, E. L. (2020). Desirable Difficulties in Theory and Practice. Journal of Applied Research in Memory and Cognition. https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2021/01/RABjorkELBjorkJARMAC2020ForPostingSingleSpaced.pdf

[4] Koponen, M. (2016). Is Machine Translation Post-Editing Worth the Effort? Journal of Specialised Translation, Issue 25. https://jostrans.soap2.ch/issue25/art_koponen.pdf

[5] Parasuraman, R., & Manzey, D. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors. https://journals.sagepub.com/doi/10.1177/0018720810376055