The Labor of Memory: How Consultants Use AI Today

Reading time: 8 minutes

Author: Kevin O'Donnell, Liminary Growth

Research from 15 conversations with fractional executives and independent consultants, conducted in April and May 2026.


The Perspective Premium

Fractional executives and independent consultants bill for their perspective. Their value comes from pattern recognition across industries, the ability to diagnose problems quickly, and the credibility to recommend a course of action that clients will follow. This is high-stakes knowledge work. A wrong recommendation costs a client real money and costs the consultant their reputation.

AI should be the perfect tool for this kind of work. And in many ways, it is. The consultants we spoke to are using AI extensively: drafting proposals, synthesizing research, benchmarking competitors, and stress-testing their own thinking. They are among the most sophisticated AI users in any profession.

But something unexpected emerged across these conversations. The most advanced users are hitting a ceiling. The speed gains from AI generation are being consumed by the manual effort of verification, context-loading, and cross-referencing across disconnected tools. The time saved by asking Claude to draft a section is lost when the consultant has to re-upload the same client documents, re-explain their methodology, and then read every sentence to check for hallucinations before sending it to a client.

We call this "the labor of memory": the invisible overhead of manually maintaining context across tools that forget everything between sessions. It is the biggest unrecognized cost in the modern consultant's workflow.


How We Did This Research

Over five weeks in April and May 2026, we conducted 15 structured conversations with fractional executives and independent consultants. The group included fractional CMOs, CTOs, growth advisors, and management consultants. They serve clients across B2B SaaS, healthcare, fintech, education, legal, and insurance. Their AI maturity ranged from early adopters running a single ChatGPT subscription to power users who have built custom skill libraries with 60+ automated workflows.

Each conversation followed a consistent structure: how they organize their work, which AI tools they use and how, where they experience friction, and what would need to change for them to adopt something new. The findings below are drawn from patterns that appeared across multiple conversations, not from isolated anecdotes.


Three Profiles of Expert AI Use

The 15 conversations clustered into three distinct patterns. These are not rigid categories. Most consultants exhibited elements of all three. But each person had a primary mode that defined how they used AI and where they felt the most friction.

1. The speed-to-deliverable user

This consultant's core challenge is converting raw conversations into billable artifacts. A prospect call ends at 2pm. By 4pm, she needs a scope proposal in the prospect's inbox. Her AI workflow is built around this compression: meeting transcript goes in, proposal draft comes out, she refines it, and sends it. The pain is in the setup. Every new engagement requires her to re-establish context from scratch: upload the transcript, paste in her SOW template, explain her pricing structure, describe her methodology. None of it carries over.

"I spend more time getting the AI ready than I spend writing," she told us.

2. The accuracy-first user

This consultant's primary concern is trust. She produces strategic deliverables for clients who will act on her recommendations. A hallucinated statistic or a misattributed framework is not an inconvenience. It is a professional liability. She estimates that 40% of her time is spent reworking AI output because she cannot trust it at face value. Her scepticism is well-founded. In our own testing, standard LLM setups scored as low as 63% on multi-document accuracy tests. She cross-validates every significant claim across multiple AI tools, checking Claude against ChatGPT against Perplexity, looking for consistency.

Her current AI system works, but it is entirely push-based. Nothing happens unless she drives it. She described wanting a system that would surface relevant information without her having to ask for it. "I want it to tell me things I should know, based on what I've already saved. Right now, I have to go looking for everything myself."

In other words, she wants pull to become push: a system that surfaces relevant context proactively instead of waiting to be queried.

3. The deep synthesizer

This consultant works across four industries simultaneously, running diagnostic audits that involve interviewing dozens of stakeholders, analyzing CRM data, benchmarking growth metrics, and producing deliverables that can run to hundreds of pages across multiple documents. Her most recent project generated a 600-page audit across 15 separate documents for a single client.

She uses three different AI tools: one for research, one for strategy, and one for synthesis. She cross-validates across all three. She is not paying for all of them simultaneously; she switches between free tiers and paid subscriptions depending on which one is performing best that month. She regularly runs out of upload space trying to load all her client materials into whichever tool she is using.

"It's impossible to train all the LLMs for my needs," she said. This single sentence captures the core problem.


Where the Real Friction Lives

The conventional assumption is that AI friction comes from poor output quality: hallucinations, generic writing, wrong answers. That friction is real and it appeared in every conversation. But it is not where consultants lose the most time.

The real friction lives in the transitions.

The cold start between clients

A typical fractional CMO serving three clients dedicates one day per week to each. Monday is Client A. Tuesday is Client B. Every morning begins with a context-loading ritual: opening the right project folder, re-reading last week's notes, re-uploading relevant documents into the AI tool, and re-explaining who the client is and what the engagement covers. One consultant described this as "briefing a new colleague every single morning who has amnesia from the night before."

This is not a minor inconvenience. For someone serving four or five clients, context-switching overhead can consume 30 to 60 minutes per client per day. That is roughly 2.5 to 5 hours per week spent telling AI tools things they should already know.

The reconciliation loop

When a consultant produces a client-facing deliverable, they cannot afford a single error. This creates a verification cycle that runs in parallel with the drafting process: generate a section, check the claims against the original source documents, correct the errors, generate the next section, check again.

One consultant described her workflow as "writing section by section, feeding it back to Claude, reviewing every line, then giving feedback to iterate." Another takes screenshots of individual PowerPoint slides, pastes them into ChatGPT, asks for refinement, then pastes the result back into the deck. He then uploads the entire PDF and asks the AI to compare specific slides for language consistency.

These workarounds are ingenious. But they are also a sign that the tools were not designed for this kind of work.

The search problem nobody talks about

Consultants do not search for keywords. They search for context and social metadata. As one person put it, "I know this person said it. I know it's in my meeting notes, I just don't know what date or which meeting." They look for "the insight from that Tuesday meeting with the CMO" or "the framework I used for the insurance client last quarter" or "the competitor analysis from three engagements ago."

No current AI tool handles this kind of retrieval well because it requires connecting information across sessions, across clients, and across time. Instead, consultants resort to what one person called "folder archaeology": digging through Google Drive, Notion, email attachments, and browser bookmarks trying to reconstruct knowledge they already possess but cannot locate.


The Labor of Memory

Every consultant we spoke to is, to some degree, functioning as the integration layer between their own tools. They copy meeting notes from one app into another. They re-upload documents that the AI forgot overnight. They manually transfer insights from a research tool into a writing tool.

Because they bill for their perspective, they spend significant time rewriting and verifying content to ensure it triggers the right outcomes for a specific client without any risk of misinterpretation. They maintain separate workspaces for each client across multiple platforms, each one slightly out of sync with the others.

This is the labor of memory. It is the cumulative cost of working with tools that have no persistence, no context awareness, and no ability to connect what you learned yesterday to what you are writing today. Part of this labor manifests as what we call the orchestration tax: the overhead of managing multiple disconnected tools.

The cost is invisible because nobody tracks it. It does not appear in any timesheet. It is not a line item in any tool subscription. But it is present in every hour of every day, in the form of tabs opened, documents re-uploaded, prompts re-written, and context re-established.

The most sophisticated users feel this cost most acutely. One consultant has built a library of over 60 custom skills in Claude, organized by task type, with a master reference file governing tone, exclusions, and source quality standards. Each client has a dedicated project with its own instructions and foundational documents. This system works. It is also a full-time maintenance job. He is trying to get Claude to automatically fetch and update files from Google Drive folders. It does not work reliably yet.

His solution is the ceiling of what individual effort can achieve. The question is whether the next generation of tools can deliver the same outcome without requiring the user to build the scaffolding themselves.

Another consultant described the situation more bluntly. She has trained ChatGPT to write in her voice, but she cannot transfer that training to Claude or Gemini. When she switches tools (which she does regularly, because each one has different strengths), she starts from zero: her voice, her context, and her carefully tuned prompts all stay behind. Every tool is an island.


What Surprised Us

AI adoption increases the premium on judgment

AI is raising the value of judgment, not replacing it. Every consultant we spoke to treats AI as a capable but unsupervised junior associate. It can research, draft, and structure. It cannot be trusted to face a client. This division of labor is not a limitation. It is a strategic advantage. By delegating the production work to AI, consultants are freeing themselves to spend more time on the thing clients actually pay for: their judgment and domain experience. The consultants who have adopted AI most aggressively are not doing less thinking. They are doing less typing. The premium on perspective is increasing, not decreasing. 

Context switching is not the problem people assume

The consultants we spoke to are experts at context switching. They chose this career because they thrive on variety and can move between industries and clients with minimal friction. The friction is not cognitive. It is mechanical. They can switch their thinking in seconds. Switching their tools takes an hour.

Nearly everyone has moved from ChatGPT to Claude, but most still pay for a second AI subscription

The migration happened in the past 12 months and it was widespread across the group. But almost nobody relies on a single tool. They maintain a second (sometimes third) subscription as a "validator." The reason is trust: when your reputation is on the line, you want a second opinion. This is expensive redundancy that exists because no single tool has earned enough confidence to stand alone.

Hallucination anxiety is getting worse, not better

This was counterintuitive. As AI models improve and hallucination rates decrease, the remaining hallucinations become harder to spot. When errors were frequent, consultants developed a healthy scepticism and checked everything. Now that output quality is generally higher, the temptation to trust it increases. But the stakes have not changed. One undetected hallucination in a client deliverable still carries the same professional risk it always did. Several consultants described a growing unease: the better the AI gets, the more dangerous it becomes to use without verification.

The most advanced users are the least likely to switch tools

One might expect that sophisticated AI users would be eager to try new solutions. The opposite is true. The consultants who have invested the most time building custom workflows, prompt libraries, and client-specific projects are the most resistant to change. Their switching cost is not the subscription fee. It is the hundreds of hours of configuration they would have to rebuild from scratch. They are locked in by their own effort.


What Comes Next

The current generation of AI tools has given consultants a significant productivity advantage over those who do not use them. That advantage is real and growing. But it has created a new category of overhead that did not exist before: managing the tools themselves.

The next shift will not come from smarter models. The models are already good enough for most tasks. The shift will come from the layer underneath: how knowledge is captured, stored, connected, and surfaced across sessions, across clients, and across tools. The consultant who can instantly recall and connect insights from their entire body of work will have a structural advantage over the one who starts every morning re-briefing an amnesiac assistant.

A small number of tools are beginning to address this gap, building persistent memory layers that sit between the user and whatever AI model they prefer. Whether any of them succeed will depend on whether they can solve the cold-start problem: making the tool valuable before the user has invested weeks loading it with context.

The consultants we spoke to are watching. They are interested yet cautious. They have been burned by tools that promise to change their workflow and then require more effort than they save. The bar is high: any new tool must demonstrate value in the first session, or it will join the graveyard of abandoned subscriptions.

The labor of memory is real. The question is who will automate it first.


This research was conducted by Kevin O'Donnell for Liminary, an AI knowledge management company. If you are a consultant or fractional executive and want to share your own AI workflow experience, reach out to us on LinkedIn.