Artificial Idea | AI careers · practical prompts · no hype Thursday, August 14, 2025 · Issue #4 · Prompt Tutorial
The core technique
Most people write prompts the way they write Google searches. That is the entire problem. Here is how to fix it.
On Monday we mapped the labour market: which roles are shrinking, which are growing, and where the real opportunity sits for professionals who engage seriously with AI tools. The through-line of that piece was this: AI fluency combined with deep domain knowledge is what the market is currently paying a premium for.
This issue is about building the fluency half of that equation.
Specifically, it is about the single most important skill in working with AI: writing a prompt that actually tells the model what you need. Not what you typed. What you need. The gap between those two things is where most people's frustration with AI tools lives, and closing it does not require technical knowledge. It requires a framework, some practice, and the willingness to treat prompting as a skill worth developing rather than a thing you either get or you don't.
The framework is four parts. Learn it once and it applies to everything.
Why most prompts fail
Before the framework, it is worth understanding precisely why vague prompts produce vague outputs, because once you understand the mechanism, the solution becomes obvious.
AI language models do not read your mind. They read your words and generate a statistically likely continuation of them. When you write "summarise this report," the model has to make dozens of implicit decisions: How long should the summary be? Who is the audience? What format: bullet points or prose? Should it prioritise conclusions or methodology? Should it flag caveats or strip them out for clarity?
With no guidance from you, the model picks the most average answer to each of those questions. Average audience. Average length. Average format. The result is output that is technically responsive but practically useless, because it is calibrated to a generic reader, not to you and your specific situation.
The solution is not to hope the model guesses correctly. The solution is to stop making it guess.
The 4-part formula
Every effective prompt contains four elements. You do not always need all four in equal depth (simple tasks need less scaffolding), but knowing all four and applying them deliberately is what separates outputs you can use from outputs you have to completely rewrite.
Part 1: Role
Tell the model what it is, for the purposes of this conversation. Not what AI is in general, but what it is right now, in this specific exchange.
"You are a senior financial analyst reviewing an investment memo." "You are an experienced HR manager helping me navigate a difficult conversation." "You are a plain-language editor whose job is to make complex writing accessible."
This single instruction changes the vocabulary the model uses, the assumptions it makes about your sophistication, the level of detail it includes, and the tone it adopts. It is the highest-leverage single line in any prompt.
A useful test: if you removed the role instruction from your prompt, would the output change significantly? It almost always would. That is how much work this one line is doing.
Part 2: Context
Give the model the background it needs to give you a relevant answer. Who is involved? What has already happened? What is the broader situation? What constraints exist?
The instinct is to keep this short. Resist it. More context produces more relevant output, consistently and significantly. A prompt with three sentences of context outperforms the same prompt with one sentence almost every time.
The relevant question is not "how much should I write?" It is "what would I tell a smart new colleague who was helping me with this for the first time?" That is exactly the level of context a well-functioning prompt needs.
"I am a marketing manager at a mid-sized B2B software company. We are launching a new product in September targeting CFOs at enterprise companies. Our current messaging tests well with CTOs but falls flat with financial stakeholders. I need to rework the value proposition."
That context produces a fundamentally different (and more useful) output than "help me with my product messaging."
Part 3: Task
State precisely what you want the model to produce. Not the topic: the deliverable. The format. The length. The specific action.
"Write a 300-word value proposition paragraph." "Give me five alternative subject lines." "Identify the three weakest arguments in this document and explain why each one is weak." "Rewrite this in plain English for a non-technical audience."
Ambiguity in the task instruction is the second most common reason prompts fail, after insufficient context. "Help me with this" is not a task. "Rewrite the conclusion section so that it leads with the key recommendation rather than building to it" is a task.
Part 4: Constraints
Tell the model what to avoid, what format to use, what tone to adopt, what length to stay within, and what assumptions not to make.
"Do not use jargon. Do not exceed 200 words. Do not recommend solutions that require budget approval (assume we have no additional budget)." "Write in a professional but direct tone. Avoid corporate language. No bullet points; I need flowing prose." "Flag anything you are uncertain about rather than presenting it as fact."
That last constraint (telling the model to flag uncertainty rather than paper over it) is one of the most practically important instructions you can include in any prompt that involves factual claims. AI models are fluent in confident incorrectness. Explicit uncertainty flagging partially counteracts that tendency.
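If you find yourself reusing the framework often, the assembly step can even be mechanised. Here is a minimal sketch in Python; the `build_prompt` helper and its labels are invented for illustration, not a published tool:

```python
def build_prompt(role: str, context: str, task: str, constraints: str) -> str:
    """Assemble a prompt from the four framework parts.

    Empty parts are skipped, mirroring the advice that simple
    tasks need less scaffolding.
    """
    parts = {
        "Role": role,
        "Context": context,
        "Task": task,
        "Constraints": constraints,
    }
    # Join the non-empty parts with blank lines between them.
    return "\n\n".join(
        f"{label}: {text}" for label, text in parts.items() if text
    )


prompt = build_prompt(
    role="You are a plain-language editor.",
    context="The audience is non-technical executives reading a quarterly report.",
    task="Rewrite the attached paragraph in plain English.",
    constraints="Do not exceed 100 words. Flag anything you are unsure of.",
)
print(prompt)
```

The point is not the code itself but the discipline it enforces: four named slots, filled deliberately, every time.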
The formula in action
Here is a before and after using the same underlying request.
Before: "Write an email to my boss about the project delay."
After:
Role: You are a professional communication coach helping me write a difficult workplace email.
Context: I manage a product launch project that is running three weeks behind schedule due to delays from an external vendor. My manager is results-focused and dislikes excuses. She will want to know the impact, what is being done about it, and the new timeline. We have a good working relationship.
Task: Write a complete email informing her of the delay, explaining the cause briefly, and presenting a revised timeline with two specific actions already underway to prevent further slippage.
Constraints: Keep it under 250 words. Lead with the new timeline, not the explanation. Professional tone: direct, not apologetic. No subject line needed.
The second prompt takes ninety seconds longer to write. The output requires a fraction of the editing the first one would need. That trade-off, applied consistently across your working week, is where the time savings accumulate.
One thing to practise this week
Take a prompt you have already used (one that produced output you had to heavily edit) and rebuild it using the four-part framework. Role, context, task, constraints. Run it again and compare the outputs.
You will not need to do this exercise twice to understand why the framework works. The difference in output quality is immediate and, typically, significant.
Next Monday we are addressing one of the most searched questions in the AI and careers space right now: is your specific job actually at risk? We are going profession by profession, not in vague terms but with the actual data on automation exposure by role. It is one of the most practically useful pieces we will publish this year.
Bring your job title.
– The Artificial Idea team

