
Why Copilot training days don't change behaviour, and what does

One-off training sessions consistently produce 3–5% sustained Copilot adoption. The reasons why (and what the alternative looks like) have been well understood in behavioural science for over a century.

The training day as default

When an organisation deploys a new technology tool, the default response is to arrange training. For Microsoft 365 Copilot, this typically means a half-day or full-day session, either run internally by IT or L&D, or delivered by a Microsoft partner. The session covers the interface, demonstrates use cases, and gives participants time to try things in a sandbox environment. Attendees leave knowing more than they arrived with. The session is marked complete. Adoption is expected to follow.

It does not follow, or not at the scale expected. This is not a Copilot-specific problem. The same pattern has played out with every major enterprise technology deployment of the past two decades: SharePoint, Teams, Power BI, Dynamics. Training days raise awareness and create initial capability. They do not change behaviour at scale or sustain it over time.

Understanding why this happens (and why it is predictable rather than surprising) requires a brief look at what the science of learning and habit formation actually says.

Ebbinghaus and the forgetting curve

Hermann Ebbinghaus was a 19th-century German psychologist who spent years measuring his own memory of nonsense syllables over time. His findings, published in 1885, established one of the most durable results in cognitive science: the forgetting curve. Without reinforcement, humans forget approximately 50% of newly learned information within 24 hours. Within a week, retention drops to around 10%. After a month, almost nothing remains accessible without a deliberate cue.
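The shape of the curve is easy to see in a few lines of code. This is an illustrative sketch, not Ebbinghaus's own equation: a power-law decay is one common approximation of the forgetting curve, and the exponent here is an assumption chosen purely so the output lands near the figures quoted above.

```python
# Illustrative retention model. R(t) = (1 + t)^(-b) is one common
# power-law approximation of the forgetting curve; b = 1.0 is picked
# here only so the curve roughly matches the figures in the text
# (about 50% after a day, around 10% after a week).

def retention(days: float, b: float = 1.0) -> float:
    """Estimated fraction of material still recallable after `days`."""
    return (1 + days) ** -b

for label, days in [("1 day", 1), ("1 week", 7), ("1 month", 30)]:
    print(f"{label:>8}: {retention(days):.0%} retained")
```

Run as written, this prints retention of roughly 50% after a day, just over 10% after a week, and about 3% after a month: the cliff-edge decay the text describes.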

The forgetting curve applies directly to training days. A participant who leaves a Copilot training session on Friday has forgotten the majority of the specific techniques demonstrated by the following Wednesday, unless they have had a reason to practise them in the interim. If no such reason existed (no specific task, no prompt, no accountability), the training effectively did not happen from a behaviour change perspective.

Ebbinghaus also identified the solution: spaced repetition. Information rehearsed at increasing intervals is retained dramatically better than information encountered once. A training structure that prompts participants to use specific techniques across nine weekly challenges does not just deliver more training; it delivers training in the format that human memory is actually designed to absorb.

Awareness is not the same as habit

The training day model rests on an implicit assumption: that people are not using Copilot because they do not know how, and that showing them how will fix the problem. This conflates awareness with habit, and the two are entirely different things.

Awareness is knowing that Copilot can summarise a long email thread. Habit is opening Copilot automatically when you see a long email thread, without deciding to do so. The gap between those two states is significant, and training addresses only the first one.

Habit formation research, particularly the work of Phillippa Lally at University College London, identifies the conditions under which a new behaviour becomes automatic. In her 2010 study, Lally found that habit formation took between 18 and 254 days depending on the behaviour and the individual, with a median of around 66 days. Crucially, the process was not simply a function of time: it required consistent repetition of the behaviour in a consistent context, with an immediate cue and a meaningful reward.

A one-off training day creates none of these conditions. It provides no recurring cue, no repetition structure, no accountability mechanism, and no progression of difficulty that keeps the behaviour challenging enough to remain engaging. Left to chance after the training session, most people do not form the habit, not because they are unmotivated, but because the conditions for habit formation were never created.

What 3–5% sustained adoption actually means

Organisations that rely on training days alone typically see 3–5% of licensed users sustain meaningful Copilot use three months after deployment. These are the people who were already going to use Copilot: the early adopters, the technology enthusiasts, the people who stay curious about new tools without external prompting. They represent roughly one in twenty of any workforce. For guidance on structuring a programme that produces lasting results, see our step-by-step guide to running a Microsoft 365 Copilot pilot programme.

This means that for a 100-person organisation with 100 Copilot licences, a training day approach produces roughly 3–5 sustained users. The remaining 95–97 are paying for licences they are not using in any meaningful way. This is not a data point from a single organisation; it is the consistent pattern across the Copilot deployments that preceded structured adoption programmes in the UK market.

The training day feels like it has done its job: attendance was high and the post-session feedback was positive. The dashboard tells a different story two months later. By then, the training is a sunk cost, and raising the adoption failure feels like reopening a closed chapter.

What sustained adoption actually requires

The conditions for sustained habit formation are well understood. For Copilot specifically, three elements are non-negotiable:

Structure: A weekly challenge creates the recurring cue that habit formation requires. Without a named task to attempt on a specific week, the default behaviour (not using Copilot) has no competition. The challenge does not need to be complex; it needs to be concrete, relevant to real work, and time-bounded.

Accountability: Peer visibility is the most effective low-cost accountability mechanism available in an organisational setting. A leaderboard that updates weekly and is visible to the participant's pod creates mild social pressure that sustains engagement through the weeks when novelty has worn off but habit has not yet formed. This is the period (roughly weeks three to six) where unstructured programmes collapse.

Progression: Difficulty must track skill level. Week-one challenges should be achievable in under 30 minutes and produce an immediately useful result. Week-nine challenges should require genuine application of accumulated skill. Without progression, participants plateau at the level of their first successful use and never develop the depth of capability that drives spontaneous adoption.

A programme designed around these three elements does not just produce higher adoption rates; it produces a different kind of adoption. Users who complete nine structured weeks have an accurate mental model of where Copilot helps and where it does not, built from real experience with real work. That mental model is what drives the spontaneous, unreported use that does not show up in any dashboard but represents the genuine productivity gain. Our article on what good Microsoft 365 Copilot usage actually looks like describes that behaviour at week nine.

Not sure whether your current approach has the structure to sustain adoption? The free Copilot diagnostic takes five minutes and scores your deployment across structure, accountability, and progression: the three factors that determine whether training translates into habit.

Take the free diagnostic