Artificial Idea | AI careers · practical prompts · no hype
Monday, September 22, 2025 · Issue #15 · Jobs

The new working relationship

The "AI colleague" shift: how to work alongside AI without losing your edge

The professionals pulling ahead are not the ones who use AI the most. They are the ones who have figured out where they end and the tool begins.

There is a framing that has quietly become the dominant one in how organisations talk about AI adoption. AI as a colleague. A tool you work with rather than one you operate. A presence in the workflow that handles certain things so you can focus on others.

It is a useful framing. It is also, if taken too literally, a misleading one. Because a colleague has judgment, stakes, and accountability. They can be wrong in ways that matter and right in ways that surprise you. They develop, adapt, and occasionally push back in ways that change the direction of the work.

An AI tool does none of those things. It produces output calibrated to the inputs it receives, with no stake in whether that output is useful, accurate, or appropriate to the specific context in which it will be used. The professional who treats AI as a genuine collaborative partner, deferring to its outputs and softening their own critical engagement with the results, is making a category error that has measurable consequences for the quality of their work.

The professionals doing this well have a more precise mental model. AI is a highly capable, infinitely patient, occasionally unreliable instrument that amplifies the quality of the thinking directed at it. The amplification is real and substantial. The direction of the thinking remains entirely the human's responsibility. That division of labour, held clearly and applied consistently, is what distinguishes AI-augmented professional output from AI-generated professional output. The difference between those two things is significant, and it is becoming more visible to the people in a position to evaluate it.

What working alongside AI actually looks like

A 2025 study by researchers at the Wharton School tracked 776 knowledge workers across four industries over six months, measuring the quality of output produced with and without AI assistance, and the working patterns of the highest and lowest performers in each group.

The finding that received the least coverage when the study was published is the most instructive. The highest-performing AI users were not the ones who used AI most frequently. They were the ones who used it most selectively. They had developed a clear and consistent internal framework for which tasks to direct to AI, which tasks to retain entirely, and which tasks to use AI for at the drafting stage while retaining full critical ownership of the final output.

The lowest performers in the AI-assisted group showed a different pattern. They used AI at a higher rate for a broader range of tasks, applied less critical evaluation to the outputs, and over the six-month period showed a measurable decline in the quality of work they produced independently, without AI assistance. The researchers described this as capability displacement: the gradual erosion of a skill through disuse, accelerating in proportion to how completely the skill is outsourced to a tool.

This finding has a direct and practical implication. How you use AI is not just a question of efficiency. It is a question of professional development. The tasks you outsource entirely are the tasks you stop developing. If those tasks are ones where your long-term value depends on maintained and growing capability, the efficiency gain in the short term is being purchased with a capability cost in the medium term.

The framework the best AI users apply

Across the Wharton study and a parallel 2025 Harvard Business School analysis of AI adoption patterns among management consultants, a consistent framework emerges in the behaviour of the highest performers. It has three components.

The first is task classification. Before reaching for an AI tool, the highest performers had developed a habit of asking a specific question: is the primary value of this task in the output it produces, or in the thinking required to produce it? Tasks where the value is primarily in the output are strong candidates for AI assistance. Tasks where the value is primarily in the thinking are not, regardless of how much time the thinking takes.

A financial analyst who uses AI to format a standard report has made a sound decision. The formatting is not where the value of their work lives. A financial analyst who uses AI to generate the interpretation of the data in that report has made a different kind of decision, one with implications for whether they are developing or eroding the analytical capability that their career depends on.

The second component is critical distance. The highest performers in both studies maintained a consistent posture of evaluating AI output rather than receiving it. They read outputs looking for what was wrong, incomplete, or contextually inappropriate before acting on what was right. They treated the output as a first draft from a capable but fallible contributor, not as a deliverable from a reliable authority.

This posture is not natural for most people. Human beings are cognitively predisposed to accept well-structured, fluently written information at face value, particularly when it is presented in a format that signals credibility. The professionals who overcame this predisposition did so through deliberate habit formation, building into their workflow specific checkpoints at which AI output was evaluated against their own knowledge and judgment before being used.

The third component is retained practice. The highest performers actively maintained skills in areas where they used AI assistance, regularly completing tasks manually that they could have delegated to a tool. Not out of inefficiency, but out of deliberate investment in the capability that made their AI-assisted work valuable. A writer who occasionally drafts without AI assistance maintains the muscle that makes their AI-assisted drafts worth editing. A writer who never drafts without AI assistance gradually loses the ability to evaluate whether the AI draft is good.

The accountability question

There is a dimension of the "AI colleague" shift that most professional commentary avoids because it is uncomfortable, but it is too consequential to leave unaddressed.

When AI-assisted work is wrong, the professional who produced it is accountable. Not the tool. Not the organisation that provided access to the tool. The professional whose name is on the output, whose judgment was responsible for evaluating it before it went out, and whose credibility is attached to what it says.

This accountability structure has not changed because AI is involved in the production. It has, in some respects, become more demanding, because the fluency and confidence of AI output makes errors harder to catch and more embarrassing when caught. A handwritten calculation error signals a human working under pressure. A polished, well-structured report containing a factual error that would have been caught by thirty seconds of verification signals something different about the professional who produced it.

The 34% rate at which professionals accepted factually incorrect AI outputs in the Stanford study cited in Issue #9 is not a statistic about AI reliability. It is a statistic about professional verification habits. The AI produced the error. The professional sent it. The accountability sits where it has always sat.

The action

This week, apply the task classification question to your AI usage. For every task you currently use AI to assist with, ask honestly: is the value of this task in the output or in the thinking? Make a list of the tasks where the answer is "the thinking" and where you are currently outsourcing that thinking to a tool.

That list is not an indictment. It is an inventory. What you do with it is a professional decision only you can make, with full awareness of what the research says about what happens to capability that goes unused.

Thursday we are closing out the first phase of this newsletter with the prompt framework that ties together everything covered in Issues #1 through #15: a structured self-assessment that tells you precisely where you stand in the AI transition, what your specific vulnerabilities are, and what a prioritised development plan looks like for your particular combination of role, industry, and career stage.

Fifteen issues in, you have the context to make that assessment meaningful. Thursday gives you the tool to do it rigorously.

— The Artificial Idea team
