Artificial Idea | AI careers · practical prompts · no hype
Monday, October 13, 2025 · Issue #21 · Jobs

The upskilling trap: why most AI courses won't make you safer at work

Completing a course is not the same as developing a capability. The industry selling you the former would prefer you did not notice the difference.

Since the beginning of 2024, the global market for AI-related professional education has grown at a rate that has surprised even its most optimistic participants. Coursera reported a 210% increase in enrolments for AI and machine learning courses in 2024 relative to 2023. LinkedIn Learning recorded its highest ever single-quarter enrolment figures for technology-related courses in Q1 2025, driven primarily by AI literacy content. Udemy's AI course catalogue grew from 3,400 courses to over 11,000 between January 2024 and September 2025.

The demand is real. The anxiety driving it is understandable. The question the industry selling into that anxiety has a commercial interest in not asking too loudly is whether the product being purchased actually solves the problem the buyer thinks it solves.

The evidence suggests that in a significant proportion of cases, it is not.

What the research says about course-based upskilling

A 2025 longitudinal study by the National Bureau of Economic Research tracked 2,400 professionals who completed AI-related online courses between January 2023 and December 2024, measuring whether course completion produced measurable changes in workplace capability, compensation, or career advancement over the following twelve months.

The headline finding was stark. Course completion alone, defined as finishing a course without applying the content to real work within thirty days, produced no statistically significant improvement in workplace AI capability at the twelve-month mark. Professionals who completed courses but did not apply the content were indistinguishable, on measured capability assessments, from professionals who had not taken the courses at all.

The finding was not that courses are useless. It was that completion without application is useless. The professionals in the study who showed meaningful capability gains at the twelve-month mark were those who applied course content to real work tasks within thirty days of completing the relevant module, used the tools in their actual work environment rather than the sandboxed practice environments provided by the course platform, and encountered and worked through failures rather than completing only the exercises designed to succeed.

That profile describes a minority of course completers. The majority complete the course, receive the certificate, add it to their LinkedIn profile, and return to their existing work patterns without having materially changed how they approach the tasks the course was designed to improve.

Why the certificate is not the capability

The certificate documents that you consumed the content. It says nothing about whether you can apply it. And in the specific context of AI tools, the gap between content consumption and applied capability is larger than in most professional skills domains, for a specific and underappreciated reason.

AI tools are probabilistic and context-dependent in a way that most professional skills are not. A financial modelling course teaches you a technique that works the same way in your spreadsheet as it did in the course exercise. An AI prompting course teaches you principles that produce different results depending on the specific tool, the specific task, the specific context, and the specific way the model has been configured or updated since the course was recorded.

The only way to develop genuine AI capability is to use AI tools on real problems, in your real work context, with real consequences for the quality of the output. The course can orient you and give you a starting framework. The capability comes from the application, not the orientation.

This is not a novel insight. It is the way professional skills have always developed in every domain where expertise matters. Nobody becomes a good negotiator by completing a negotiation course. They become a good negotiator by negotiating, using the course frameworks as a starting point and building on them through experience. The same principle applies to AI fluency, and ignoring it in favour of certificate accumulation is producing a large population of professionals who feel like they have addressed the AI competency question without having actually done so.

What the upskilling industry is optimised for

Understanding why the industry is structured as it is requires understanding what it is optimised to produce. Online course platforms are optimised for completion rates, enrolment volumes, and certificate issuance, because those are the metrics that drive revenue, platform reputation, and the employer partnerships that give certificates their perceived value. They are not optimised for long-term capability development, because long-term capability development is slow, messy, and difficult to attribute to a specific course purchase.

The result is a product design that maximises the likelihood of completion rather than the likelihood of capability transfer. Courses are structured to feel achievable. Exercises are designed to succeed. The content is pitched at a level of abstraction that is accessible to a broad audience but rarely specific enough to be directly applicable to a professional's actual work context. The certificate is awarded for consumption, not for demonstrated capability.

None of this is cynical in the sense of being deliberately misleading. It is the natural consequence of a market where buyers evaluate the product based on how it feels to complete it rather than what it produces twelve months later. The industry is giving buyers what they are asking for. The problem is that what buyers are asking for and what they actually need are not the same thing.

What actually works

The NBER study's findings on what did produce meaningful capability gains are worth examining in detail, because they point toward a specific and actionable alternative to course-based upskilling.

The first variable was task specificity. Professionals who identified a specific, recurring task in their actual work and focused their AI learning entirely on that task outperformed those who completed broad AI literacy courses by a factor of 3.4 on measured capability assessments. Learning one thing deeply, in the context of real work, produced more capability than learning many things shallowly in a course environment.

The second variable was failure tolerance. Professionals who continued experimenting with AI tools after producing poor outputs, analysing what went wrong and adjusting their approach, showed capability gains that compounded over time. Those who stopped experimenting after early failures, which is the majority response, plateaued at a low capability level regardless of how many courses they had completed.

The third variable was social learning. Professionals who discussed their AI usage with colleagues, shared prompts that worked, and learned from others' applications showed faster capability development than those who learned in isolation. This finding aligns with decades of research on workplace learning more broadly: skills developed in social contexts transfer more reliably to new situations than skills developed in isolation.

The fourth variable, and the one with the largest individual effect size, was deliberate reflection. Professionals who spent ten to fifteen minutes per week reviewing what they had tried with AI tools, what had worked, what had not, and what they intended to try next showed capability gains nearly twice as large as those who used the tools at the same frequency without systematic reflection. Reflection is not a substitute for practice. It is the mechanism by which practice becomes learning rather than just repetition.

The India context

The upskilling dynamic in India has specific characteristics that are worth addressing directly. The cultural weight placed on credentials and certifications in Indian professional contexts is higher than in many other markets, driven by a historically credential-dense hiring environment where certificates from recognised platforms carry significant weight in recruitment decisions, particularly at the entry and mid-level.

This creates a tension. The certificate has genuine signalling value in the Indian labour market, more so than in markets where applied portfolio work has already largely supplanted credential-based screening. Dismissing the certificate entirely would be poor advice in a context where it still opens doors. The mistake is treating the certificate as the destination rather than the starting point.

The professionals navigating this most effectively in the Indian context are those who pursue the credential for its signalling value while simultaneously building the applied capability that the credential does not produce on its own. They complete the course, obtain the certificate, and then immediately identify a specific application in their actual work where they will use what they have notionally learned. The certificate gets them in the room. The capability determines what happens once they are there.

The action

Audit your AI upskilling activity over the past twelve months. List every course completed, every certificate obtained, every piece of content consumed. Then answer one question for each item: did this change how I do a specific task in my actual work? If the answer is no, the activity produced a credential, not a capability.

Then identify the single AI application most relevant to your actual work that you are not yet using fluently. Not the most interesting one, not the most advanced one. The most relevant one. Spend the next thirty days applying it to real work, tolerating the early failures, and reflecting weekly on what is working and what is not.

That thirty days will produce more genuine capability development than most twelve-week courses, because it is structured around three of the four variables the research identifies as actually driving learning: task specificity, failure tolerance, and deliberate reflection. Add the fourth, social learning, by sharing what you find with a colleague as you go.

The certificate is evidence that you started. The capability is what you build after.

Thursday we are giving you the prompt framework for building a personal AI learning plan that is structured around the four variables identified above, specific to your role and industry, and designed to produce measurable capability development rather than a list of completed courses. It takes forty minutes to build and functions as a working document you update rather than a plan you make once and forget.

The difference between a plan you update and a plan you forget is whether it is honest enough to be useful. Thursday's framework is built around that distinction.

— The Artificial Idea team
