Artificial Idea | AI careers · practical prompts · no hype Monday, December 15, 2025 · Issue #38b · Jobs
The management gap
The "AI colleague" is now your competition: how to stay indispensable
The hardest management challenge of 2025 was not adopting AI tools. It was managing the humans working alongside them. Most managers were not prepared for what that actually required.
In the fourteen months since AI tools became standard in most professional service organisations, a management problem has emerged that the existing body of management research did not anticipate and that most management development programmes are still not addressing. It is not the problem of managing AI tools. It is the problem of managing the human response to working alongside them.
The response is more varied and more complex than most organisations prepared for. Some employees have embraced the tools and are using them to expand their output and their visibility, producing the bifurcation dynamics this newsletter has described throughout the year. Others have developed a complicated relationship with the tools: using them while concealing that use, concerned about being perceived as dependent on them or as producing work that is not genuinely theirs. Still others are actively resistant, for reasons ranging from principled concern about AI ethics to straightforward anxiety about what the tools' capabilities imply for their job security.
Managing effectively across all three of those responses, in the same team, simultaneously, while the organisation is also expecting productivity gains from AI adoption, is a genuinely new management challenge. The managers navigating it best in 2025 were not those with the most technical AI knowledge. They were those with the most developed capacity for the specifically human capabilities that the challenge requires.
What the research identifies as the core management challenge
A 2025 study by the Centre for Creative Leadership, covering 2,200 managers across fourteen countries, found that the primary management challenge created by AI adoption in professional teams was not technical. It was psychological safety.
Teams in which employees felt safe to acknowledge when AI tools were producing poor outputs, to admit when they did not know how to use a tool effectively, and to raise concerns about the quality or appropriateness of AI-assisted work showed significantly faster capability development and significantly fewer quality problems than teams where those conversations did not happen.
The barrier to those conversations was not the employees' willingness to have them in the abstract. It was the specific signal their managers were sending about what kind of AI-related information was welcome and what kind created risk for the person sharing it. Managers who communicated, explicitly or through their responses to early disclosures, that AI tool struggles were performance evidence rather than development information consistently produced teams that concealed their struggles rather than resolving them.
The concealment is where the quality problems live. An employee who privately knows that an AI output is unreliable but does not feel safe raising it is not a problem the manager can see until after the unreliable output has produced a consequence. The management condition that prevents that concealment is psychological safety, and building it in the specific context of AI adoption requires explicit and repeated signalling that struggle with new tools is development information rather than performance evidence.
The three employee profiles and what each needs
The Centre for Creative Leadership research identified three consistent employee profiles in AI-adopting teams and the specific management responses each profile requires.
The first is the enthusiastic adopter. This employee has embraced the tools, is developing capability quickly, and is producing the productivity gains the organisation is targeting. The management risk with this profile is not underperformance. It is overconfidence: the tendency to apply AI tools to tasks where they are not reliable, to accept outputs without adequate critical evaluation, and to develop a dependency on the tools that erodes the underlying professional capability described in Issue #15. The management response that produces the best outcomes with enthusiastic adopters is structured critical engagement rather than encouragement: asking the right questions about how outputs were produced and evaluated rather than simply rewarding the productivity gains.
The second is the anxious adapter. This employee is using the tools but is not confident in their usage, is concerned about how their AI-assisted work is perceived, and is spending significant cognitive energy managing the social presentation of their AI use rather than developing their capability. This profile is the most common in the research and the one most affected by psychological safety. The management response that produces the best outcomes is explicit normalisation: direct communication that AI tool use is expected, that struggle is normal, and that the quality of judgment applied to AI outputs matters more than whether the outputs came from AI. This communication needs to be specific and repeated rather than delivered once in a team meeting.
The third is the resistant holdout. This employee is not using the tools, for reasons that may include principled objection, anxiety, or simple inertia. The management approach that most commonly fails with this profile is the one most commonly attempted: pressure to adopt, expressed through performance expectations or peer comparison. The Centre for Creative Leadership research found that pressure-based adoption produced the lowest quality AI capability development of any approach studied, because capability developed under pressure is learned as compliance rather than as genuine professional development and is applied accordingly. The approach that produced better outcomes was individual conversation about the specific concerns underlying the resistance, combined with low-stakes opportunities to experiment with tools in contexts where the quality of the output was not immediately consequential.
The specific conversation that changes the dynamic
Across all three profiles, the research identified one management intervention that produced consistently positive results: the explicit capability conversation. Not the performance conversation that addresses AI tool usage as a metric to be improved. The development conversation that asks what the employee is finding difficult, what they are finding useful, and what the manager can do to support the development rather than evaluate its pace.
The distinction between those two conversation types is the one that determines whether the employee experiences AI capability development as a professional growth opportunity or as a performance requirement. The former produces genuine capability development. The latter produces the surface compliance and concealed struggle that is the most common AI adoption failure mode in professional teams.
The action
If you manage people: identify which profile each of your direct reports most closely resembles. Then identify whether your current management behaviour is calibrated to what each profile actually needs or to a generic AI adoption expectation that does not distinguish between them.
The management insight the research offers is not complicated. It is that the same external behaviour (using AI tools) comes from different internal states in different employees, and those different internal states require different management responses. The manager who responds to all three profiles with the same expectation is managing the surface rather than the substance.
Thursday we close out the week with the prompt framework for managers who want to build the psychological safety conditions that the research identifies as the most important variable in team-level AI capability development. It is a different kind of prompt framework from the ones this newsletter usually publishes, because the output it is designed to produce is a conversation rather than a document.
The conversation is harder than the document. It is also more consequential.

