Artificial Idea | AI careers · practical prompts · no hype Thursday, January 29, 2026 · Issue #51 · Prompt Tutorial

The feedback stack

How to use AI to give (and receive) better criticism

Most professional feedback is either too vague to act on or too direct to survive. These five prompts produce the kind that is neither.

Issue #50 made the case that the gap between your formal job description and your actual professional reality is a visibility problem with career consequences, and that closing it deliberately, in the right direction, is how AI capability development produces career return rather than just personal efficiency. This issue addresses the feedback infrastructure that tells you whether that closing is working.

Feedback is the most underutilised professional development resource available to most knowledge workers. Not because it is unavailable. Because the feedback that is available is almost never the feedback that is useful. The formal annual review produces carefully worded assessments calibrated to avoid conflict rather than accelerate development. The informal feedback from colleagues is filtered through relationship preservation instincts that soften every observation that might create awkwardness. The feedback from clients and senior stakeholders is either not sought at all or sought in forms so generic that the responses tell you nothing specific enough to act on.

The result is a professional operating largely on self-assessment, which research on self-assessment accuracy consistently shows to be the least reliable source of information about how your work lands with the people it is designed to land with.

AI does not fix the fundamental problem of feedback quality. It does something more modest and more immediately useful: it helps you design better feedback requests, process the feedback you receive more rigorously, deliver feedback to others more effectively, and simulate the feedback you are not receiving from the people whose assessment matters most.

Prompt 1: The feedback request designer

The problem it solves: designing feedback requests that produce specific, actionable responses rather than the generic positive assessments that most feedback requests elicit because they make it too easy for the respondent to be kind rather than useful.

Most feedback requests fail before they are sent because they ask the wrong questions in the wrong way. Questions like "what did you think of my presentation?" or "do you have any feedback on my proposal?" create an opening for a polite response rather than a useful one. The respondent fills the opening with the least uncomfortable content available, which is almost never the most useful content available.

You are helping me design a feedback request 
that produces specific, honest, actionable 
responses rather than polite generalities.

What I am seeking feedback on: 
[describe the work, presentation, proposal, 
project, or performance you want feedback on, 
with enough context for someone unfamiliar 
with it to understand what it was trying to achieve]

Who I am asking: 
[describe their role, their relationship to 
the work, what they observed or received, 
and what their honest assessment would be 
worth to me]

What I most need to know: 
[be honest about what you are most uncertain 
about or most want to improve, even if 
you would not state this directly in the request]

Please design a feedback request that:

1. Opens with a specific question rather 
   than a general invitation, targeted at 
   the aspect of the work most worth 
   understanding better
2. Includes one question that makes it 
   easy to give critical feedback by 
   framing it as a normal part of the 
   evaluation rather than a departure from politeness
3. Includes one question about what 
   specifically worked, framed to produce 
   a specific answer rather than a general 
   positive assessment
4. Closes with a question about what 
   I should do differently next time, 
   specific enough that the respondent 
   cannot answer it with a generic suggestion
5. Is short enough that the respondent 
   can answer it in five minutes without 
   feeling they owe me a comprehensive review

The feedback request should make it 
easier to be honest than to be kind. 
Most feedback requests do the opposite 
and produce the opposite result.

The instruction that the request should make honesty easier than kindness is the design principle most feedback requests violate. Kindness is the default because it is socially safer and requires less effort than specific critical observation. The only way to change the default is to design the request so that answering specifically is easier than answering generally, which means asking questions so specific that a vague positive response does not fit the question being asked.
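If you plan to reuse these templates rather than retype them each time, the bracketed placeholders map naturally onto a small script. Here is a minimal sketch in Python using only the standard library; the template fragment and field names are illustrative stand-ins for the full prompt above, not a prescribed implementation:

```python
from string import Template

# Illustrative fragment of the Prompt 1 template; the $-prefixed
# names stand in for the bracketed placeholders in the full prompt.
FEEDBACK_REQUEST = Template(
    "You are helping me design a feedback request that produces\n"
    "specific, honest, actionable responses rather than polite generalities.\n\n"
    "What I am seeking feedback on: $work\n"
    "Who I am asking: $audience\n"
    "What I most need to know: $uncertainty\n"
)

def build_prompt(work: str, audience: str, uncertainty: str) -> str:
    """Fill the template; Template.substitute raises KeyError
    if any placeholder is left without a value."""
    return FEEDBACK_REQUEST.substitute(
        work=work, audience=audience, uncertainty=uncertainty
    )

prompt = build_prompt(
    work="Q3 client proposal for the retail account",
    audience="the account director who reviewed the final draft",
    uncertainty="whether the pricing section undermined the recommendation",
)
print(prompt)
```

The resulting string can be pasted into any AI assistant; keeping the template in one place means the design principles below survive each reuse instead of being re-improvised.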

Prompt 2: The received feedback processor

The problem it solves: extracting the maximum useful information from feedback you have already received, including feedback that was vaguely worded, diplomatically softened, or apparently contradictory.

Most professionals read feedback, form an immediate emotional response to it, and then either dismiss the parts that feel unfair or accept the parts that feel accurate without examining either reaction rigorously. Both responses leave significant useful information in the feedback unextracted.

You are helping me process professional feedback 
I have received in a way that extracts 
maximum useful information rather than 
confirming my existing beliefs about 
my own work.

The feedback I received: [paste or describe 
the feedback as completely as possible, 
including the exact wording where available]

The context: [what the feedback was about, 
who gave it, what their relationship to 
the work was, and what you know about 
their standards and communication style]

My initial reaction to the feedback: 
[describe honestly, including which parts 
you agree with and which you find 
unfair or inaccurate]

Please:

1. Identify the most specific and actionable 
   observation in this feedback, the one 
   most worth acting on regardless of 
   whether I agree with it
2. Identify any pattern in the feedback 
   that I might be minimising because 
   it is uncomfortable, and explain 
   why it might be worth taking seriously
3. Identify any part of the feedback 
   I might be over-weighting because 
   it confirms something I already believe 
   about myself, and whether that 
   self-assessment is well-founded
4. Translate any vague or diplomatically 
   softened observations into plain language: 
   what the feedback giver was most 
   likely trying to say without saying directly
5. Identify the single most important 
   change in my behaviour or output 
   that this feedback, taken seriously, 
   would produce

Do not tell me whether the feedback 
is fair or accurate. Tell me what 
is most useful in it regardless of 
whether it is fair or accurate.

The instruction not to assess whether the feedback is fair is the constraint that makes this prompt most valuable. The fairness question is the one professionals spend the most time on and the one least relevant to professional development. Unfair feedback can contain useful signal. Fair feedback can contain nothing actionable. The prompt redirects attention from the evaluation of the feedback giver to the extraction of useful information from whatever they said.

Prompt 3: The feedback simulator

The problem it solves: generating a simulation of the honest feedback you are not receiving from the people whose assessment matters most, based on what you know about them and what they have observed of your work.

This is the prompt that most directly addresses the feedback gap described in the introduction. The senior stakeholder who has formed a view of your work but will never share it directly. The client who has decided not to renew but whose specific reasoning you will never hear. The manager who writes careful annual reviews but holds specific observations they have never delivered in a direct conversation.

You are simulating the honest professional 
assessment of a specific person who has 
observed my work but is unlikely to share 
their genuine evaluation directly.

The person I am asking you to simulate: 
[describe their role, their professional 
standards, their communication style, 
their relationship to my work, and 
what they have directly observed]

What they have seen of my work: 
[describe the specific interactions, 
outputs, or performance they have 
been exposed to]

What I know or suspect they think: 
[describe any signals you have received, 
however indirect, about their assessment]

What I most want to understand: 
[what specific aspect of their assessment 
would be most useful to you if you 
could access it honestly]

Please simulate their honest assessment covering:

1. What they consider the strongest aspect 
   of my work based on what they have observed
2. What they consider the most significant 
   weakness or limitation based on the 
   same evidence
3. What they would say about me to a 
   colleague in a private conversation, 
   not in a formal feedback context
4. What would most change their assessment 
   in a positive direction if I were to 
   demonstrate it in the next ninety days
5. The question they have about me that 
   they have not asked directly and 
   that, if answered well, would most 
   advance their confidence in my capabilities

Flag where your simulation is speculative 
rather than grounded in the information 
I have provided. The useful output is 
grounded simulation, not invented assessment.

The instruction to flag where the simulation is speculative is what makes this prompt honest rather than just interesting. A fully invented assessment has no value. An assessment grounded in specific observable information, with speculative elements clearly labelled, has significant value as a planning tool even when it cannot be verified. The professional who uses it knows which parts to act on as probable and which parts to hold as hypothetical.

Prompt 4: The feedback delivery designer

The problem it solves: designing feedback for someone else that is specific enough to be useful, honest enough to be worth giving, and delivered in a way that produces development rather than defensiveness.

Delivering feedback well is a capability that most professionals have never been explicitly taught and that most organisations have never systematically developed. The result is feedback that is either so softened it communicates nothing or so direct it produces defensiveness that closes the receiver to the observation being made.

You are helping me design feedback for 
a specific professional that will be 
useful rather than diplomatic and 
honest rather than harsh.

The person I am giving feedback to: 
[their role, their seniority relative to mine, 
our working relationship, and their 
likely response to direct criticism 
based on what you know about them]

What I want to give feedback on: 
[describe the work, behaviour, or 
performance specifically, including 
what happened, what the impact was, 
and what you believe should change]

What I have tried before, if anything: 
[describe any previous feedback attempts 
and their results]

What I most want to achieve: 
[a specific behaviour change, a development 
conversation, a performance correction, 
or something else specific]

Please design feedback that:

1. Opens with the specific observation 
   rather than a general assessment, 
   describing what happened rather than 
   what it says about the person
2. Connects the observation to its impact 
   in terms the receiver cares about, 
   not just in terms of what matters to me
3. Makes the desired change specific 
   enough that the receiver knows exactly 
   what doing it differently would look like
4. Creates space for the receiver's 
   perspective without making the space 
   so large that it becomes a negotiation 
   about whether the feedback is valid
5. Closes with a forward-looking commitment 
   rather than a backward-looking verdict

Draft the opening three sentences of 
this feedback conversation. The opening 
determines whether the rest of it 
lands as development or as judgment.

The instruction to draft the opening three sentences rather than the entire feedback script is deliberate. The opening is where most feedback conversations are won or lost, and it is the part most worth investing in. A feedback conversation that begins well has a significantly higher probability of producing the development outcome it was designed to produce than one that begins badly, regardless of the quality of the content that follows.

Prompt 5: The 360 synthesiser

The problem it solves: synthesising feedback from multiple sources into a coherent developmental picture rather than treating each piece of feedback as an isolated data point that can be accepted, rejected, or averaged without examining what the pattern across all of it is saying.

You are helping me synthesise feedback 
from multiple sources into a coherent 
developmental picture.

Feedback sources and their content:
Source 1: [describe who and what they said]
Source 2: [describe who and what they said]
Source 3: [describe who and what they said]
Add additional sources as available.

My role and the professional context 
in which this feedback was given: [describe]

My own assessment of my performance 
in this period: [describe honestly]

Please:

1. Identify the themes that appear across 
   multiple sources, distinguishing between 
   themes that represent consistent signal 
   and those that may reflect a shared 
   blind spot in the feedback sources
2. Identify any significant divergence 
   between sources and what might explain it: 
   different exposure to different aspects 
   of my work, different standards, 
   or genuine disagreement about what 
   good looks like in my role
3. Identify the gap between my self-assessment 
   and the external assessments, specifically 
   where I rate myself higher than others do 
   and where I rate myself lower
4. Produce a single developmental priority 
   from this synthesis: the one change 
   that, if made, would most improve 
   my performance across the full range 
   of contexts the feedback covers
5. Identify the feedback source whose 
   assessment I am most tempted to dismiss 
   and make the case for why their 
   perspective might be the most valuable 
   one in this synthesis

The synthesis should produce one clear 
priority rather than a list of everything 
that could be improved. A developmental 
list is not a developmental plan.

The closing instruction, that the synthesis should produce one priority rather than a list, is the constraint that makes the output actionable rather than overwhelming. Most 360 feedback processes produce a comprehensive report that identifies everything worth developing, which is a different document from one that tells you what to work on next. Development that is spread across ten priorities simultaneously is development that advances on none of them.

The feedback practice

These five prompts are most valuable when used as a recurring practice rather than a one-time exercise. The professionals who reach the career inflection point described in Issue #47 most quickly are not those who develop the fastest in isolation. They are those who have the most accurate and current information about how their work is landing with the people who matter most, because that information directs their development toward the gaps that are actually limiting their career rather than the gaps they have assumed are limiting it.

The feedback practice does not need to be elaborate. Running Prompt 2 after every significant piece of feedback received, running Prompt 3 quarterly for the two or three stakeholders whose assessment matters most, and running Prompt 5 before every performance conversation takes less than two hours per quarter. It produces a consistently more accurate picture of how your professional work is perceived than the informal, filtered, diplomatically softened feedback most professionals are currently working from.

More accurate information produces better development decisions. Better development decisions produce faster progress toward the career inflection point. The practice is the infrastructure that connects those two things.

Monday we are examining the Block layoffs in detail: what four thousand eliminated roles actually had in common, what was kept and why, and what the decision criteria that produced that specific pattern reveal about how AI-driven restructuring actually works inside organisations doing it seriously rather than using AI as a cover for cost cutting they wanted to do anyway.

The headline number is the least interesting thing about that announcement. The pattern underneath it is the most instructive data point on AI-driven restructuring published so far.

— Team Artificial Idea
