Artificial Idea | AI careers · practical prompts · no hype Thursday, December 4, 2025 · Issue #36 · Prompt Tutorial
The 2026 toolkit
How to build a personal system prompt that makes AI sound like you
Most professionals use AI as a generic tool that produces generic outputs. The professionals pulling ahead have figured out how to make it an extension of how they specifically think and work. Here is the framework.
Issue #35 closed out the 2025 data picture with three forward-looking observations: displacement will accelerate modestly, the salary premium will continue bifurcating toward depth rather than breadth, and AI governance will become a significant source of professional demand. This issue translates those observations into a practical toolkit for the year ahead.
The toolkit is built around a concept that has been implicit in every Thursday issue this newsletter has published but has not yet been addressed directly: the system prompt. Not as a technical concept, but as a professional one. The core idea is that the most effective way to use AI tools is not to start every interaction from scratch but to build a persistent set of instructions that shapes every interaction toward your specific professional context, your specific working style, and the specific outputs that are useful to you rather than to the average user.
This is the capability that separates the professionals using AI at a surface level from those using it as a genuine extension of their professional capability. It is also, based on the five most-demanded capabilities in the Burning Glass data from Issue #33, the most directly connected to the salary premium at the upper end of the distribution. Building a personal system prompt is not a technical task. It is a self-knowledge task. It requires understanding how you think, how you work, and what distinguishes your professional judgment from the generic outputs a model produces without that context.
What a system prompt is and why it matters
A system prompt is a set of instructions that shapes how an AI model responds before any specific conversation begins. In technical deployments, system prompts are used by developers to configure how a model behaves for a specific application. In professional use, they function as a persistent context layer that tells the model who you are, how you think, what you need, and what you do not need, so that every interaction starts from a base that is calibrated to your specific context rather than to the average user.
The difference in practical terms is significant. A professional who opens ChatGPT or Claude without a system prompt is working with a model calibrated to produce outputs useful to anyone. A professional who opens the same tool with a well-constructed system prompt is working with a model calibrated to produce outputs useful to them specifically, grounded in their professional context, their working style, and the standards their work is held to.
Most conversational AI tools allow users to set custom instructions that function as a persistent system prompt. In ChatGPT this is the Custom Instructions feature. In Claude it is the custom instructions field within Projects. The feature exists, it is accessible to any user, and most professionals have never used it in a systematic way.
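The same idea appears wherever AI tools are reached through an API rather than a chat interface: the system prompt travels as a separate instruction alongside every request. A minimal sketch of the pattern, using the chat-style message format common across providers; the prompt text and model name here are illustrative placeholders, and no real API call is made:

```python
# Build a request payload in the chat-message format used by most
# conversational AI APIs: one persistent "system" message, then the
# user's actual question. Only the payload is constructed here.

SYSTEM_PROMPT = (
    "You are assisting a regulatory affairs lead at a mid-size medtech firm. "
    "Default to concise, headline-first answers and flag any claim you are "
    "uncertain about rather than stating it confidently."
)

def build_request(user_message: str) -> dict:
    """Wrap a one-off question in the persistent professional context."""
    return {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

request = build_request("Summarise the attached audit findings.")
```

The point is structural: the system message persists across every request, so the calibration happens once rather than being retyped into every conversation.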
The prompts below build a personal system prompt in components. Each component addresses a different dimension of the professional context the model needs to produce genuinely useful outputs. Together they constitute a framework that can be constructed in approximately forty minutes and that improves the quality of every AI interaction that follows.
Component 1: Professional identity and context
The problem it solves: ensuring the model understands who you are and what you do with enough specificity that it does not default to generic professional assumptions.
Help me write the professional identity section of a personal system prompt that I will use to configure my AI tools for professional work.

Please ask me the following questions one at a time and use my answers to draft this section:

1. What is your professional role and what does it actually involve day to day, beyond the job title?
2. What industry or sector do you work in, and what are the two or three most important things someone needs to understand about it to give you useful advice?
3. Who are the main audiences for your professional work: who reads what you write, who attends your meetings, who evaluates your output?
4. What professional standards or constraints govern your work: regulatory requirements, organisational style guides, professional ethics, or client confidentiality considerations?
5. What does good work look like in your specific role, the standard you are actually held to rather than the generic standard for your job title?

After I answer all five questions, draft a professional identity section of 150 to 200 words that gives an AI model the context it needs to produce outputs useful to me specifically.
The instruction to ask questions one at a time is the constraint that makes the output genuinely reflective of the individual's specific context rather than a generic professional profile. When asked to describe themselves in writing, most professionals default to the language of their job description. When answering specific questions in sequence, they produce the specific, idiosyncratic detail that distinguishes useful system prompt context from generic role description.
Component 2: Working style and output preferences
The problem it solves: configuring the model's default output style to match how you actually work rather than how AI tools default to producing outputs.
Help me write the working style section of my personal system prompt.

Ask me these questions one at a time:

1. How do you prefer to receive information: comprehensive and detailed, or concise and headline-first?
2. What format works best for you: flowing prose, structured bullet points, tables, numbered steps, or a mix that depends on the task?
3. What professional tone do you need your outputs to reflect: formal and precise, direct and conversational, analytical and evidence-based, or something more specific to your context?
4. What are the three things AI outputs most commonly do that you find least useful or that require the most editing?
5. What is the single most important thing an AI tool could do differently that would make its outputs more immediately usable in your work?

After I answer, draft a working style section of 100 to 150 words that configures the model's default behaviour to match my preferences.
Question four, the AI-output habits that require the most editing, produces the most practically useful input for this component. Most professionals have developed a consistent set of editing interventions they apply to AI outputs. The patterns in those interventions reveal the systematic mismatches between how the model defaults to producing content and how the professional needs to use it. Encoding those corrections into the system prompt removes the need for the same edits in every subsequent output.
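Those recurring corrections are easier to encode once they have been inventoried. A small sketch, assuming a hand-kept log of the edits applied to recent AI outputs; the edit categories here are illustrative, not a standard taxonomy:

```python
from collections import Counter

# A hand-kept log of edits applied to recent AI outputs, one entry
# per intervention. Keep it rough; frequency is what matters.
edit_log = [
    "cut preamble", "shorten", "cut preamble", "remove hedging filler",
    "cut preamble", "shorten", "convert to table",
]

# The most frequent edits are the corrections worth encoding as
# standing instructions in the working style section.
top_edits = [edit for edit, _ in Counter(edit_log).most_common(3)]
```

Each entry in `top_edits` translates directly into one line of the working style section, for example "cut preamble" becomes "Start with the answer; no introductory framing."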
Component 3: Domain knowledge and expertise signals
The problem it solves: communicating to the model the level and type of domain expertise you bring to your work, so that it does not explain things you know or assume knowledge you do not have.
Help me write the domain expertise section of my personal system prompt.

Ask me:

1. What topics or domains do you know well enough that you do not need the model to explain background or context?
2. What topics or domains are you working in but not deeply expert in, where you need the model to provide more context and to flag uncertainty?
3. What terminology, frameworks, or mental models do you use regularly in your work that the model should use when communicating with you?
4. What are the most common misconceptions about your field that you would not want the model to reproduce?
5. What sources, publications, or thinkers do you respect most in your professional domain, whose standards you would want the model's outputs to be calibrated toward?

Draft a domain expertise section of 100 to 150 words from my answers.
Question four, common misconceptions about your field, is the one most likely to produce immediate improvement in output quality. AI models reproduce the most common framing of any topic, including the framings that domain experts consider oversimplified or incorrect. Explicitly naming those framings in the system prompt creates an instruction to avoid them that persists across every interaction.
Component 4: Decision-making and analytical preferences
The problem it solves: configuring the model to support your specific decision-making style rather than defaulting to the most common analytical frameworks regardless of their fit for your context.
Help me write the analytical preferences section of my personal system prompt.

Ask me:

1. How do you typically approach complex decisions: do you start with data and build toward a conclusion, or start with a hypothesis and test it?
2. What kinds of analysis do you find most useful: scenario planning, root cause analysis, comparative evaluation, risk assessment, or something else?
3. How do you want the model to handle uncertainty: should it state its confidence level, flag where it is speculating, and acknowledge the limits of what the available information supports?
4. Do you want the model to proactively challenge your assumptions and push back on your framing, or to work within the framing you have provided unless it is clearly incorrect?
5. What is the most common analytical error you make that you would want the model to help you avoid?

Draft an analytical preferences section of 100 words from my answers.
Question five, the most common analytical error the professional makes, requires a level of self-awareness that most people find uncomfortable and that produces the most valuable input for this component. The model cannot correct for errors it does not know to look for. Naming them explicitly creates a persistent instruction that functions as an external check on the professional's most consistent blind spots.
Component 5: Boundaries and constraints
The problem it solves: establishing what the model should not do in interactions with you, including the defaults it should avoid, the topics it should treat with specific care, and the outputs it should flag rather than produce without comment.
Help me write the constraints section of my personal system prompt.

Ask me:

1. What kinds of outputs should the model never produce without flagging them explicitly: factual claims it is uncertain about, recommendations outside your professional domain, content that touches sensitive client or organisational information?
2. What professional or ethical constraints govern your work that the model should be aware of and respect?
3. What topics should the model approach with specific caution in your professional context, where the standard treatment in AI outputs is inadequate for your specific requirements?
4. What is the most important thing the model should always do when it is uncertain, rather than defaulting to a confident-sounding response?
5. What should the model do when it does not have enough information to give a genuinely useful response, rather than producing a generic answer that looks helpful but is not?

Draft a constraints section of 75 to 100 words from my answers.
Question four, what the model should do when uncertain rather than defaulting to confidence, is the constraint most critical to the professional use cases where AI outputs are most consequential. The default behaviour of language models is to produce fluent, confident-sounding responses regardless of the reliability of the underlying information. An explicit instruction to flag uncertainty rather than paper over it changes that default in every interaction where it is in force.
Assembling the system prompt
Once the five components have been drafted through the five prompts above, use this final prompt to assemble them into a coherent whole.
I have drafted five components of a personal system prompt through a structured process. Please assemble them into a coherent, well-structured system prompt that:

1. Flows naturally as a set of instructions rather than five disconnected sections
2. Eliminates any redundancy between sections
3. Prioritises the instructions that are most important to my specific professional context
4. Is concise enough to function effectively as a system prompt without being so long that it creates confusion about which instructions to prioritise

Here are the five components:

[paste all five drafted components]

After assembling the prompt, tell me the three instructions that are doing the most work and that I should review first if the outputs are not meeting my expectations.
The instruction to identify the three instructions doing the most work is the one that makes the assembled prompt a working document rather than a completed one. System prompts require iteration. They are built from self-knowledge that is imperfect on the first attempt and improves through use. Knowing which instructions are most influential makes iteration faster and more targeted than reviewing the whole prompt every time an output falls short.
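For professionals who keep their system prompt in a version-controlled text file rather than pasting components into a chat box, the assembly step can also be done mechanically. A minimal sketch, assuming the five components are plain-text strings; the section labels and the 400-word budget are illustrative choices, not fixed rules:

```python
# Join labelled components into one system prompt and flag the result
# when it exceeds a word budget. The labels and the 400-word budget
# are illustrative assumptions, not fixed rules.

WORD_BUDGET = 400

def assemble(components: dict, budget: int = WORD_BUDGET) -> str:
    sections = []
    for label, text in components.items():
        sections.append(f"## {label}\n{text.strip()}")
    prompt = "\n\n".join(sections)
    word_count = len(prompt.split())
    if word_count > budget:
        # Flag rather than truncate: trimming is a judgment call, best
        # made by rereading the least important instructions yourself.
        prompt += f"\n\n[NOTE: {word_count} words, over the {budget}-word budget]"
    return prompt

draft = assemble({
    "Identity": "Regulatory affairs lead at a medtech firm.",
    "Working style": "Concise, headline-first, tables where possible.",
    "Constraints": "Flag uncertainty explicitly; never invent citations.",
})
```

Mechanical assembly does not replace the editorial pass the prompt above asks the model to make; it simply keeps the components separable so each one can be iterated on its own.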
The 2026 practice
The system prompt is not a set-and-forget configuration. It is a working document that should be reviewed and updated at least quarterly as your professional context evolves, your AI fluency deepens, and your understanding of where the tools are most and least useful in your specific work becomes more precise.
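The quarterly review is easier to keep to when the prompt's record includes the date it was last revisited. A small sketch, assuming the last-reviewed date is tracked alongside the prompt; the 90-day interval is an illustrative stand-in for "quarterly":

```python
from datetime import date, timedelta

# Roughly quarterly; adjust to how fast your context actually changes.
REVIEW_INTERVAL = timedelta(days=90)

def review_due(last_reviewed: date, today: date) -> bool:
    """True when the system prompt is overdue for its periodic review."""
    return today - last_reviewed > REVIEW_INTERVAL

# Example: a prompt last reviewed at the start of September.
if review_due(date(2025, 9, 1), today=date(2025, 12, 4)):
    print("Review due: reread the highest-impact instructions first.")
```

A check like this belongs wherever the prompt itself lives, so the reminder surfaces in the same place the revision happens.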
The professionals who get the most from AI tools over a multi-year period are those who treat their interaction with the tools as a practice with its own development curve, not a static capability that is either present or absent. The system prompt is the document that captures where that practice currently is and directs the tools accordingly. As the practice develops, the document improves. As the document improves, every interaction improves.
The forty minutes required to build the first version is the investment that makes every subsequent interaction more productive than it would otherwise be. It is also the investment that most directly produces the first of the five most-demanded capabilities in the Burning Glass data: the ability to direct AI tools toward outputs appropriate to your specific professional context rather than to the average user's context.
The average user's context is nobody's context. Yours is yours. The system prompt is how you communicate the difference.
Monday is the last issue of 2025. After thirty-six issues, five months, and more data than any single newsletter should contain, we are pausing at the year's end to look at the single most important thing the 2025 data says about where the AI and careers story is heading in 2026, and what it means for the specific professionals reading this.
It is not the conclusion the dominant narrative would predict. It rarely is.
— The Artificial Idea team

