Artificial Idea | AI careers · practical prompts · no hype Monday, December 8, 2025 · Issue #37 · Jobs

The year that changed the question

The 2025 AI job market in review: what got created, what got cut, what surprised everyone

Thirty-six issues. Five months. One argument made in a hundred different ways. Here is what 2025 actually settled — and what it left open.

At the end of any significant year it is worth asking not what happened but what changed. Events are temporary. Changes in the underlying structure of a situation are not. 2025 produced a significant volume of both, and distinguishing between them is the analytical work that makes the year useful rather than just eventful.

Three things changed in 2025 in ways that will not reverse. Understanding each of them clearly is the foundation for making good decisions in the year ahead.

What changed: the baseline expectation

The most consequential structural change of 2025 was not the number of jobs created or eliminated. It was the shift in what professional competence is assumed to include.

In January 2025, AI fluency was a differentiator. Professionals who used AI tools effectively were ahead of their peers because most of their peers were not using them effectively. The gap was real, the premium was growing, and the early adopters were benefiting from a first-mover advantage that had been accumulating since 2023.

By December 2025, in the sectors with the fastest AI adoption rates, that dynamic had partially inverted. AI fluency had moved from differentiator to baseline expectation in large portions of financial services, professional services, technology, and marketing. Professionals who were not using AI tools were no longer behind the curve. They were below the baseline.

The distinction matters. A differentiator produces a premium. A baseline expectation produces a penalty for its absence. The shift from one to the other changes the career calculus in a specific way: the question is no longer whether AI fluency will help you advance. In the sectors where the shift has occurred, it is whether the absence of it will hold you back.

A December 2025 survey of 1,200 hiring managers across financial services, consulting, and technology, conducted by the Society for Human Resource Management, found that 71% now include AI tool proficiency as a standard evaluation criterion for mid-level professional roles, up from 34% in January 2025. That eleven-month shift is one of the fastest movements of a capability from differentiator to baseline in the history of professional skills research. The researchers who produced the survey described it as without precedent in the data they have collected since 1948.

The implication for professionals in those sectors is direct. The investment that produced a premium in January produces compliance in December. The premium now sits further up the capability curve, at the intersection of deep AI fluency and deep domain expertise described in Issue #33. The professionals who made the baseline investment in 2024 and early 2025 are now at the baseline. The ones who invested more deeply are still differentiated. The ones who have not yet started are below it.

What changed: the governance gap became visible

The second structural change of 2025 was the emergence of AI governance as a professional domain with specific, growing, and poorly supplied demand.

The prediction that AI governance would matter was not novel in 2025. The European Union's AI Act had been in development for years. The regulatory intent was clear. What 2025 produced that was not clearly predicted was the speed at which governance requirements moved from regulatory aspiration to operational necessity for organisations deploying AI at scale, and the degree to which existing professional populations were unprepared to meet those requirements.

The EU AI Act's high-risk AI system requirements came into effect in stages through 2025, requiring organisations deploying AI in consequential professional contexts (including hiring, credit assessment, healthcare, and education) to implement specific human oversight, documentation, and audit-trail requirements. The requirements were not technically complex. They were organisationally complex, requiring professionals who understood both the technical capabilities and limitations of the AI systems being deployed and the regulatory and ethical frameworks governing their use.

That combination of technical understanding and regulatory and ethical expertise was, and remains, genuinely scarce. The McKinsey talent gap analysis published in November 2025 estimated a shortfall of 340,000 professionals with the relevant combination of skills across the EU alone, with comparable gaps in the United Kingdom, India, and the United States, where equivalent frameworks are at different stages of development.

The career opportunity this gap represents is the one this newsletter flagged in Issue #35 as deserving more attention than it has received. It is not an opportunity that requires a technical background. It requires an understanding of how AI systems work at a conceptual level, combined with domain expertise in a sector where AI is being deployed consequentially, combined with the analytical and communication skills to translate among technical, regulatory, and organisational stakeholders. That combination is more accessible than most professionals currently appreciate and more valuable than most career advisors have yet noticed.

What changed: the India story got more complicated

The third structural change of 2025 was specific to the Indian professional market and represents the most significant update to the predictions this newsletter was making in August.

The August prediction, grounded in the available data at the time, was that India's technology services sector would experience meaningful displacement at the lower end and growth at the higher end, producing a bifurcation that rewarded professionals positioned at the intersection of domain expertise and AI fluency while contracting opportunity for those in the task categories AI handles most effectively.

That prediction was correct in its direction. It was incomplete in its scope.

What the year revealed that was not fully anticipated is the degree to which Indian technology services companies are themselves becoming significant deployers and developers of AI systems rather than primarily targets of AI displacement. The largest Indian IT services companies (TCS, Infosys, Wipro, and HCL Technologies) spent an estimated combined $4.2 billion on AI capability development in 2025, according to their December earnings disclosures. That investment is not defensive. It is strategic. The companies are repositioning from providers of human-delivered services to providers of AI-augmented services, and the professionals they are most aggressively recruiting and developing are those with the domain expertise and AI fluency to design, deliver, and govern those augmented services.

The bifurcation within Indian technology services is therefore not simply between those whose roles are being automated and those whose roles are not. It is between those who are positioned to participate in the sector's strategic repositioning and those who are not. The former group is finding that the transition, while disruptive in its specifics, is producing opportunity at a scale and pace that the purely defensive framing of AI and Indian employment did not capture.

This does not change the reality for the professionals in the roles being most directly displaced. It adds a dimension to the story that the dominant narrative has underweighted, and it has specific implications for the career decisions of Indian professionals at every stage who are deciding where to invest their development attention in 2026.

What 2025 did not settle

Intellectual honesty requires acknowledging what the year left open alongside what it resolved.

The most consequential unsettled question is the pace of the next technology step. The capability improvements in AI systems in 2025 were significant but followed a trajectory that professionals and organisations had some time to adapt to. The question of whether 2026 will produce capability improvements at a similar pace or at a pace that outstrips the adaptive capacity of the institutions and professionals trying to respond is genuinely open. The optimistic scenario is continued gradual improvement that allows adaptation to keep pace. The less optimistic scenario is a capability step that is large enough and fast enough that the adaptive responses built in 2025 are inadequate for what 2026 requires.

This newsletter does not know which scenario will materialise. Neither does anyone else, including the researchers who build the models and the executives who fund them. The appropriate response to this uncertainty is not paralysis and not false confidence. It is the proactive information-seeking habit described in Issue #27, applied consistently and translated into action, so that whatever the pace of change in 2026, the professionals reading this are positioned to respond from a place of current understanding rather than reactive catch-up.

The second unsettled question is distributional. The 2025 data on who has benefited from the transition and who has been harmed by it shows a pattern consistent with what economists call skill-biased technological change: the transition has, so far, disproportionately benefited professionals with higher existing skill levels and disproportionately harmed those with lower ones. Whether the governance frameworks, educational institutions, and organisational practices developing in response to this pattern will be sufficient to produce a more equitable distribution of the transition's benefits in 2026 and beyond is a question that 2025 opened rather than answered.

This newsletter is written for professionals navigating the transition on an individual level. The distributional question is a policy and institutional question that individual career decisions cannot resolve. It is worth holding alongside the individual career optimisation this newsletter focuses on, because the world in which the transition produces broadly shared benefits is one worth trying to create, and the professionals best positioned to contribute to that creation are those who understand the transition clearly enough to see beyond their own navigation of it.

The argument this newsletter has made

Thirty-six issues is enough to state the argument plainly.

The AI transition is real, significant, and still early. The professionals who engage with it honestly, invest in the capabilities the evidence identifies as genuinely valuable, and maintain the proactive information-seeking habit that keeps their understanding current are the ones whose careers will compound through the transition rather than be disrupted by it. The professionals who wait for clarity that does not come, defer investment until the situation becomes undeniable, or optimise for the wrong signals because they are more visible than the right ones will find the gap between their current position and the position the market rewards widening in ways that become progressively harder to close.

This is not a new argument. It is the same argument made in Issue #1, with thirty-five issues of evidence behind it.

The evidence is not exhaustive and the argument is not certain. The future is genuinely uncertain in the ways this issue has described. What the evidence supports is not certainty about what will happen. It is clarity about what is happening now and what the most reliable responses to it look like. That clarity is what this newsletter has been trying to provide since August.

We are taking a brief break over the holiday period. Issue #38 returns on Monday, December 22 with a piece on the specific decisions that matter most in the first ninety days of 2026 for professionals at each career stage. Issue #39, the last Thursday of the year, gives you the prompt toolkit for building a 2026 development plan grounded in what the data says rather than what the predictions suggest.

The new year will bring new data. We will be here to cover it.

— The Artificial Idea team
