Artificial Idea | AI careers · practical prompts · no hype Monday, October 20, 2025 · Issue #23 · Jobs

The hidden advantage

The capabilities built in resource-constrained careers are exactly what AI-augmented work rewards, and most of the professionals who have them do not know it

The professionals who built careers in resource-constrained environments did not develop a handicap. They developed a capability set that the current labour market is undervaluing and that AI is about to make significantly more visible.

There is a conversation that happens in professional circles in India, and in every other high-growth market where ambitious professionals have built careers without the structural advantages available to their counterparts in more mature economies. It goes something like this: The tools available here are not as good. The networks are thinner. The institutional support is weaker. The learning curve is steeper because the resources to climb it are fewer. The implicit conclusion is that professionals from these contexts are working at a disadvantage relative to those who had better access to better resources from the beginning.

That conclusion deserves to be examined rather than accepted, because the evidence from the current labour market transition suggests it is not only incomplete but in important respects wrong.

The professionals who have spent their careers navigating resource constraints, building solutions with inadequate tools, making decisions with incomplete information, and finding paths through problems that did not come with established playbooks have been developing a specific capability set that the AI transition is making more valuable, not less. The irony is that many of them do not recognise what they have built, because they have been measuring themselves against a standard that was designed for a different kind of professional environment.

What resource-constrained environments actually develop

A 2025 study by researchers at the Indian School of Business and the London Business School examined the performance of professionals from high-growth emerging market backgrounds in AI-augmented work environments, comparing their output quality, adaptability, and capability development rates against peers from more resource-rich professional backgrounds across eighteen months of observation.

The findings cut against the conventional narrative in ways that are worth examining carefully.

Professionals from resource-constrained backgrounds showed significantly higher performance on tasks requiring adaptive problem-solving under ambiguity, specifically tasks where the available tools were insufficient for the stated objective and the professional had to find a creative path to an acceptable output. The researchers attributed this to what they described as constraint-trained improvisation: the habit of mind that develops when you cannot assume that the right tool for the job is available and must instead ask what the available tools can be made to do.

This habit of mind is, it turns out, precisely what working effectively with AI requires. AI tools are powerful and imprecise. They produce outputs that are often useful but rarely exactly right. Using them well requires the ability to evaluate what is available, identify the gap between the output and the requirement, and find a creative path to closing that gap through iteration, reprompting, and recombination of outputs in ways the tool did not anticipate. These are the skills that constraint-trained professionals have been developing for years.

The same study found that professionals from resource-constrained backgrounds showed faster capability development with new AI tools than their resource-rich counterparts, with the researchers attributing this to a lower threshold for tolerating ambiguity and failure during the learning phase. Professionals accustomed to working with inadequate tools are less destabilised by tools that do not work perfectly, and therefore experiment more freely and learn faster.

The jugaad advantage

The Hindi concept of jugaad, which translates approximately as frugal innovation or the creative workaround, has been studied extensively in the management literature as a distinctive approach to problem-solving that emerges from environments where conventional resources are unavailable. It is not, in the academic literature, treated as a second-best substitute for proper resources. It is treated as a genuinely distinct cognitive approach that produces different and in some contexts superior outcomes to conventional resource-intensive problem-solving.

A 2023 analysis by McKinsey of innovation patterns across 140 companies in emerging and developed markets found that organisations with high concentrations of jugaad-oriented problem-solvers were significantly more effective at deploying AI tools in novel applications, specifically applications that went beyond the tool's intended use case to solve problems the tool was not explicitly designed for. The researchers described this as use-case creativity: the ability to look at a tool's capabilities and see applications that the tool's designers did not anticipate.

Use-case creativity is one of the highest-value capabilities in the current AI landscape, because the organisations extracting the most value from AI are not primarily those using AI tools for the applications those tools were designed for. They are those finding novel applications that their competitors have not yet identified. The professionals driving those discoveries are disproportionately those with a trained habit of looking at available tools and asking what else they can be made to do.

The specific capabilities that transfer

The advantage described above is not generic. It manifests in specific capabilities that are worth identifying precisely, because precision is what turns an abstract advantage into a concrete career strategy.

The first is tolerance for imperfect tools. Professionals from resource-constrained environments have spent their careers making decisions with insufficient data, building solutions with inadequate tools, and delivering outputs under conditions that professionals in better-resourced environments would consider unacceptable. This tolerance is directly transferable to AI-augmented work, where the tools are powerful but imprecise and where the professional who can work productively with an imperfect output is significantly more effective than one who is blocked by the output's limitations.

The second is first-principles reasoning. When established playbooks are not available, you develop the habit of reasoning from first principles rather than from precedent. This is precisely the cognitive mode that produces effective prompt engineering, because effective prompts are not constructed by following templates. They are constructed by reasoning from first principles about what the model needs to produce a useful output in this specific context.

The third is outcome orientation over process orientation. Professionals who have built careers without access to established processes have learned to stay focused on the outcome and find whatever path leads there, rather than following the process and hoping the outcome follows. This orientation transfers directly to AI-assisted work, where the value lies in the quality of the final output. The professional who can use AI to reach that output efficiently, regardless of whether the path was the anticipated one, consistently outperforms the one who is process-dependent.

The fourth is comfort with iteration. Building something in multiple imperfect passes, improving it incrementally rather than producing it correctly the first time, is a habit developed in environments where the first attempt is rarely resourced well enough to be the final one. It is also the exact cognitive pattern that effective AI use requires, where the first output from a prompt is a starting point and the value is in knowing how to iterate toward something better.

What this does not mean

This argument has limits that are worth stating clearly, because overstating it would be as misleading as the conventional narrative it is pushing back against.

Resource-constrained professional environments also produce real disadvantages that the AI transition does not automatically eliminate. Network effects in hiring and advancement remain real and remain structured in ways that advantage professionals with access to stronger institutional networks. The signalling value of credentials from prestigious institutions remains meaningful in many hiring contexts regardless of the underlying capability of the candidate. Access to the highest-end AI tools, and to the organisational contexts where the most interesting AI applications are being developed, remains unevenly distributed in ways that correlate with the resource advantages that are already unevenly distributed.

The argument is not that the playing field is level or that constraint-trained advantages automatically overcome structural disadvantages. The argument is narrower and more specific: the cognitive capabilities developed in resource-constrained professional environments are more directly applicable to AI-augmented work than is commonly recognised, and the professionals who have developed them are underestimating the value of what they have built.

Underestimating it produces a specific and correctable error. It leads professionals to compensate for a perceived deficit rather than to build on a genuine asset. The correction is not optimism. It is accurate assessment.

The action

Write down three specific situations in your professional history where you solved a problem without adequate resources, tools, or information. For each one, identify precisely what cognitive approach you used to navigate the constraint. Do not describe the outcome. Describe the thinking.

Then map each of those cognitive approaches to the AI capability framework from Issues #15 and #21: task classification, critical evaluation of imperfect outputs, iterative improvement, and outcome-oriented problem-solving.

The professionals who have been telling themselves they are working at a disadvantage will, in most cases, find that the mapping is closer than they expected. That mapping is not comfort. It is strategy. The question is whether you build on it deliberately or continue measuring yourself against a standard that was never designed with your specific capabilities in mind.

Thursday we give you the prompt stack that turns this kind of asset mapping into a concrete professional positioning statement, one specific enough to use in interviews, performance conversations, and the increasingly common situation where you are asked to articulate not just what you have done but how you think.

How you think is, increasingly, the answer that matters.

— The Artificial Idea team