By 2026, almost every digital team has run an AI pilot. Almost none have a clear answer to the more important question: what actually changed about how the team works?
The trap is that AI experimentation looks productive in isolation. A summarisation flow here, a prompt template there, a dashboard explainer somewhere else. Senior leaders see activity and assume progress. The wider operation, meanwhile, looks more or less the same.
Signs your AI work is stuck in pilot mode
Three patterns turn up almost every time we are called in to review an AI programme that feels like it should be further along.
The first is the showcase pattern. There is a small library of impressive demos. Senior leaders have seen them. Nothing has moved into the workflow that the team uses to do its actual job. Demos accumulate. Operations stay the same.
The second is the orphan tool pattern. A genuinely useful AI capability has been built, but it sits outside the team's existing tools and rituals. Using it requires a deliberate decision and an extra browser tab. Adoption hovers around the early-adopter line and never crosses into habit.
The third is the governance vacuum. The team is using AI in production, but no one can answer who reviewed the prompt, who owns the output, what happens when it is wrong, and which decisions it is allowed to influence. Risk is real. Trust is fragile.
All three resolve the same way: by treating AI as an operating-model change rather than a technology adoption.
Three conditions for AI to actually compound
For AI work to graduate from experiment to capability, three conditions need to be in place. They are not technical. They are organisational.
One. AI lives inside a workflow, not next to one.
A standalone tool that requires a human to remember to open it is a tool nobody opens. The interventions that compound are the ones embedded in the planning sheet, the dashboard or the campaign brief that the team is already using. This is one of the reasons we tend to design bespoke planning, pacing and monitoring tools rather than recommend a category leader: the AI capability lives inside the tool the team already opens on a Tuesday.
Two. There is a governance answer to who owns the output.
The most common reason AI work quietly stalls in marketing is unclear ownership. Without a named owner for the prompt, the output and the downstream decision, errors propagate without being caught. Once governance is real, the team trusts the output enough to use it.
Three. There is a human in the loop on anything that touches a decision.
AI accelerates analysis. It does not replace senior judgement. The teams that get this right keep the loop tight. AI proposes, a senior analyst reviews, the decision goes back into the system. Nobody feels replaced. The loop just runs faster.
The governance shape that actually scales
Most digital teams do not need an AI policy document. They need a small, working governance shape they can apply to every new use-case without slowing it down.
Five questions, answered for each AI workflow.
- Owner. Who is accountable for the prompt, the output and the decision it influences?
- Boundary. What can this workflow do, and what must it never do without human approval?
- Review. How does the human in the loop check the output, and what is the cadence?
- Failure mode. What happens when it is wrong, and how is that visible?
- Sunset. When and how do we retire it if it stops earning its place?
Five questions, written down in one paragraph. That is enough governance for almost every AI workflow a marketing team will deploy in the next two years. Anything more becomes the document nobody opens.
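If it helps to make the record concrete, the five answers fit in a structure small enough to live next to the workflow itself. A minimal sketch in Python, with hypothetical field names and example values for a reporting flow like the one in the next section:

    from dataclasses import dataclass

    @dataclass
    class WorkflowGovernance:
        """One record per AI workflow: the five questions, answered."""
        owner: str         # accountable for the prompt, the output and the decision
        boundary: str      # what it may do, and what needs human approval first
        review: str        # how the human in the loop checks output, and how often
        failure_mode: str  # what happens when it is wrong, and how that is visible
        sunset: str        # when and how it is retired if it stops earning its place

    # Hypothetical example: an executive-narrative reporting flow.
    reporting_narrative = WorkflowGovernance(
        owner="Senior analyst, reporting team",
        boundary="Drafts the narrative; never reaches the CMO without sign-off",
        review="Analyst edits or rejects every draft before it ships, weekly",
        failure_mode="Rejected drafts are logged and reviewed at month end",
        sunset="Retired if the rejection rate stays high for a full quarter",
    )

The tooling is beside the point; a row in a shared document does the same job, as long as the five answers exist and a name sits against each one.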
What this looks like in practice
- A reporting flow that drafts an executive narrative, reviewed by a senior analyst before it lands in the CMO's inbox.
- A pacing alert that surfaces emerging risk and, with one click, generates the suggested optimisation. The team executes or overrides.
- A campaign brief tool that interrogates the brief itself, asks the missing questions and only releases the brief when the answers are in.
In each case, AI is doing the boring work, faster. Senior judgement is still senior, just better informed.
The opportunity map
Most teams do not need an AI strategy. They need an AI opportunity map. Five questions to start with.
- What is the most repetitive, time-consuming task on the team this week?
- What output does leadership read, and how is it produced today?
- Where do quality issues most often slip through?
- What signals are surfaced too late to act on?
- Where do we ask analysts to write the same thing in slightly different words?
Each of these is a candidate for a small, governed AI workflow. None of them require a moonshot. All of them compound. Pairing the opportunity map with deliberate team enablement is what turns a backlog into adopted practice.
What to stop doing
Stop running disconnected pilots that nobody will touch in six months. Stop describing AI as a strategic priority without a workflow it lives inside. Stop assuming an LLM will solve a problem the team did not understand before the LLM existed.
Governed adoption is unglamorous, methodical and durable. It is also the only AI work in marketing that survives the next budget conversation.
Michael Ernst
Founding Partner. Digital Transformation, Growth & AI
Fifteen-plus years of consultative work across global agencies, in-house growth teams and brand-side digital programmes. Connects strategy, technology, performance marketing and AI into operating models that compound.