The adoption gap is real
The pattern is consistent across UK organisations of every size. A senior decision-maker sees the potential in Microsoft 365 Copilot. Licences are purchased, often at significant cost per seat. IT deploys them. An announcement goes out. A few people attend a training session or watch a recorded demo. Adoption is tracked through the Microsoft 365 admin dashboard.
Six weeks later, the dashboard shows that somewhere between 15 and 25 percent of licence holders have used Copilot at all in the past month. The rest have not opened it since the first week. Six months later, the CTO asks whether the organisation is getting value from the investment. The honest answer is: not yet.
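Checking that figure yourself is straightforward if you can export the usage data. The sketch below is a hypothetical Python illustration, not the Microsoft 365 admin report itself: the file name and the user and last_activity_date columns are assumptions, and a real export would need its own column names mapped in.

```python
# Minimal sketch: compute the share of licence holders active in the last
# 30 days from a usage export. Column names ("user", "last_activity_date")
# are illustrative assumptions, not the real admin report schema.
import csv
from datetime import date, timedelta

def adoption_rate(csv_path: str, window_days: int = 30) -> float:
    """Return the fraction of licence holders active within window_days."""
    cutoff = date.today() - timedelta(days=window_days)
    total = active = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            last = row.get("last_activity_date", "")  # assumes ISO dates
            if last and date.fromisoformat(last) >= cutoff:
                active += 1
    return active / total if total else 0.0

# e.g. adoption_rate("copilot_usage_export.csv") -> 0.18, i.e. 18% active
```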
The gap between licence ownership and habitual use is the central challenge of Copilot deployment in 2025 and 2026. It is not unique to Copilot: the same pattern appeared with Microsoft Teams, with SharePoint, and with every collaboration and productivity tool before them. But the stakes are higher now because the potential productivity gain is demonstrably real, the cost of unused licences is visible on every department budget, and leadership expectations were set high at the point of purchase.
Why the standard rollout approach does not work
The standard approach to software adoption follows a predictable sequence:
- Deploy the tool
- Run a training session (or record one and share a link)
- Send documentation to employees
- Encourage people to experiment
- Track usage data and report back to leadership
This approach fails because it treats adoption as an information problem. The underlying assumption is that people are not using the tool because they do not know how. Provide the information and the problem resolves itself.
But habitual use of a new tool is not an information problem. It is a behaviour change problem. And behaviour change requires different conditions than information transfer.
Someone who leaves a training session knowing how to use Copilot will, in most cases, still not use it the following week, because:
- They have no specific task to use it on right now
- They have no deadline or external reason to try
- They have no way to tell whether their first attempt produced a good result or a mediocre one
- They have no accountability to anyone if they do not try
The training session addresses none of these things. It creates awareness and initial capability, then stops: a pattern examined in detail in our article on why Copilot training days don't change behaviour. Everything required to turn that capability into a habit is left to chance.
The three root causes
Through the process of building and running a structured Copilot adoption programme inside a real UK organisation, three root causes of failed adoption became clear. They compound one another, and each must be addressed.
No structure
Without a specific challenge to attempt on a specific week, people return to the way they already work. This is not laziness; it is how human cognition works. Existing habits are low-effort and reliable. A new tool requires deliberate effort and produces uncertain results. Without a concrete prompt (a named challenge, a real-work scenario, a clear brief), the default is not to use it. The habit never forms because it is never initiated.
No accountability
The people who use Copilot without any programme structure are the people who were already going to. They are intrinsically motivated, curious about technology, and represent roughly five to ten percent of any workforce. The problem is that they were never the problem. The challenge is the other 90 to 95 percent: people who are willing to try something new if given a reason, but who will not sustain effort without some form of visibility.
Leaderboards and pod systems create the conditions where participation has a social reward and non-participation has a mild social cost. Neither needs to be dramatic. The effect is sufficient to sustain engagement through the weeks where novelty has worn off but habit has not yet formed.
No progression
Copilot is not a tool you use at one level. The capability gap between drafting a basic email with a simple prompt and building a custom Copilot agent to automate a workflow is significant. Without a deliberate progression path (basic prompting in week one, Excel and data tasks in week four, meeting intelligence in week six, agent creation in week nine) most users plateau at week-one behaviour and remain there permanently.
Worse, week-one prompts often produce underwhelming results when they are poorly constructed. A user who tries Copilot on an ill-defined task, gets a generic output, and has no feedback mechanism will conclude that Copilot is not useful for their work. This conclusion is wrong, but it is rational given what they experienced. Without a structured path that builds prompt quality progressively, that conclusion sticks.
What the research on habit formation says
The academic literature on behaviour change is consistent on the conditions required for a new behaviour to become automatic. Research synthesised across behavioural psychology and habit formation studies identifies several core requirements:
- Habits form through repetition in a consistent context, not through information or motivation alone
- A new behaviour needs a reliable cue, either an existing behaviour to attach to, or an environmental prompt that recurs at the right moment
- Immediate feedback on whether the behaviour produced a meaningful result accelerates habit formation significantly
- Social accountability (visibility to peers) dramatically increases follow-through, particularly in the middle period of a programme when novelty has faded
- Task difficulty needs to match current skill level. Too easy and engagement drops; too hard and people give up
A standard Copilot rollout satisfies none of these conditions. A structured programme, designed with these requirements explicitly in mind, can satisfy all of them.
What a structured programme looks like in practice
The Copilot Bootcamp programme runs for nine weeks. Each week has one challenge. The challenge is posted to a dedicated Teams channel on Monday. By Friday, participants submit their evidence and receive points toward the leaderboard.
The structure is deliberately minimal. One facilitator manages the programme; it is not a full-time role, but one to two hours per week. Participants are grouped into pods of four to six people so that accountability is peer-level rather than managerial. Challenges use real work files that participants already have, not sample data or test environments. The leaderboard updates weekly and is visible to all participants.
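The scoring mechanics are simple enough to model. The sketch below is a hypothetical Python version of a weekly tracker, written only to make the pod-level accountability concrete; it is not the leaderboard tracker included in the Kit, and every name and point value is illustrative.

```python
# Minimal sketch of the leaderboard mechanics described above.
# All names, pods, and point values are illustrative assumptions.
from collections import defaultdict

def weekly_leaderboard(submissions, pods):
    """
    submissions: list of (participant, week, points) tuples, one per
                 challenge submission.
    pods: dict mapping participant -> pod name (4-6 people per pod).
    Returns (individual ranking, pod ranking), both sorted high to low.
    """
    person_totals = defaultdict(int)
    pod_totals = defaultdict(int)
    for participant, _week, points in submissions:
        person_totals[participant] += points
        pod_totals[pods[participant]] += points
    rank = lambda totals: sorted(totals.items(), key=lambda kv: -kv[1])
    return rank(person_totals), rank(pod_totals)

# e.g. weekly_leaderboard([("asha", 1, 10), ("ben", 1, 8)],
#                         {"asha": "Pod A", "ben": "Pod B"})
```

Ranking pods as well as individuals is the point: a pod total makes one person's non-participation visible to four or five colleagues rather than to a manager, which is what keeps the accountability peer-level.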
The challenge progression matters as much as the structure. Week one is basic prompting: short, achievable, immediately useful. By week four the challenges are working with Excel data and building tables. Week six covers Teams meeting summaries and follow-up drafting. Week nine asks participants to design and test a simple Copilot agent for a repetitive task in their own role.
What changes week by week in a well-run programme is observable:
Week one: Most participants attempt the challenge. Results vary. But every participant now has a reference point: they know what Copilot does with a specific prompt in a real context, and they have a score.
Week three: The pod dynamic is established. Participants behind on the leaderboard are trying harder. Participants ahead are sharing prompts informally. The facilitator's midweek nudge email is generating replies.
Week six: Copilot is a topic of conversation in team meetings that have nothing to do with the programme. People are asking each other for prompts. The habit of reaching for Copilot on specific task types is beginning to form.
Week nine: Participants who completed the programme are using Copilot for real work. Not because they were told to, and not in a programme context. Because they tried it nine times across nine different scenarios and built an accurate mental model of where it helps and where it does not.
The self-reported data from the programme that preceded this product showed 94% of participants using Copilot for real work by programme end, with a 64% average increase in self-reported confidence and 37% average time saving. These are self-reported figures and should be treated as indicative rather than controlled. But the direction is unambiguous, and it matches what structured habit formation research would predict.
The Copilot Bootcamp Kit gives one person in your organisation everything needed to run this programme: the challenge pack, the facilitator guide, and the leaderboard tracker. No consultants. No project plan. Set up in a weekend.
Get the kit