I’m going to describe a scene you’ve probably lived.
You have one last performance review meeting. The form is open. Your WhatsApp is pinging. Your employee is waiting for clarity, maybe money, maybe direction, maybe both.
And you’re thinking: What questions should I ask in a performance review that won’t waste both our time?
This article gives you a practical question bank, a simple process, and copy-paste scripts you can use in African workplaces where KPIs are sometimes unclear, documentation is patchy, and culture can make honest feedback feel risky.
What questions should I ask in a performance review? Start with the outcome you want
A performance review question is only “good” if it does one of these things:
- Clarifies expectations (what “good” looks like in this role)
- Surfaces evidence (examples, outputs, outcomes, stakeholder feedback)
- Identifies constraints (tools, approvals, bandwidth, shifting priorities)
- Creates an action plan (what changes, who owns it, by when)
If your questions do not end in clarity and actions, the review will feel like a yearly ritual and nothing more.
CIPD’s evidence review on performance feedback repeatedly comes back to a boring truth: feedback works better when it is specific, actionable, and handled in a way that considers the recipient’s reaction.
The plain-English definition
A performance review is a structured conversation (and record) about:
- what happened this cycle (results and behaviors),
- why it happened (skills, constraints, support),
- what should happen next (goals, development, decisions).
Why it matters (the cost of vague questions)
When review questions are weak, you get predictable outcomes:
- People feel the process is political.
- Pay and promotion decisions get challenged.
- High performers disengage because effort is not linked to impact.
- Underperformance lingers because nobody named it clearly.
- Managers lose credibility.
That “credibility” point isn’t fluffy. Research on feedback conversations shows recipients often doubt the accuracy of mixed or negative feedback, and they judge the person delivering it. The questions you ask can lower defensiveness by anchoring the conversation on evidence and improvement.
The mistakes that make performance review questions useless
Mistake 1: Asking “How are you doing?” instead of “What evidence do we have?”
If you ask only feelings-based questions, you get vibes, not decisions.
Instead, you want prompts that force concrete examples:
- “What deliverables shipped?”
- “What changed because you did your work?”
- “Which stakeholder would notice if you weren’t here?”
Mistake 2: Mixing pay talk and development talk with zero structure
When money is on the table, people edit themselves. That’s normal.
So separate the meeting into parts:
- Part A: evidence and performance
- Part B: growth plan
- Part C: comp/promotion (or the timeline for it)
HBR’s work on performance reviews and motivation highlights how review design affects engagement and how people receive the message. If your questions feel like a trap, you’ll get safe answers.
Mistake 3: Vague goals (do-your-best goals)
A common reason review questions feel awkward is that goals were never clear in the first place.
Specific and challenging goals tend to outperform “do your best” goals, because “do your best” has no shared standard.
So some of your best review questions are actually goal-clarification questions.
Mistake 4: Bias and rater errors (halo, leniency, recency)
Even competent managers fall into rater errors:
- Halo effect (one strength colors everything)
- Leniency (everyone is “great”)
- Central tendency (everyone is “average”)
- Recency (last month outweighs the whole cycle)
A practical mitigation is to use structured questions for each dimension, so you rate less from memory and more from evidence.
Mistake 5: No action plan, no follow-up, no documentation
In many African orgs (especially SMEs), the hidden constraint is documentation. People do work, but evidence is scattered across email, Slack, WhatsApp, shared drives, and someone’s laptop.
Your questions must end in written commitments:
- what changes,
- what support is needed,
- what gets tracked monthly.
Step-by-step: how to prepare your questions (even if you’re busy)
You don’t need a perfect system. You need a repeatable one.
Step 0: Decide what this review is for
Pick one primary purpose:
- Development (skills, coaching, growth)
- Decision (confirmation, promotion, PIP, compensation inputs)
- Alignment (reset priorities, fix role clarity)
If you try to do all three at full depth in 45 minutes, you’ll do none well.
Constraint acknowledgement #1: Time is limited. So choose your purpose and cut the rest.
Step 1: Pull evidence (even if KPIs are messy)
Bring a one-page “evidence pack”:
- goals or priorities you agreed earlier
- key deliverables shipped
- quality indicators (errors, rework, customer complaints)
- stakeholder notes (2–4 short inputs)
- learning or stretch work done
If you have no KPIs, you can still use output evidence. Table 2 below gives you options.
Constraint acknowledgement #2: In many teams, KPIs are unclear or change mid-quarter. You can still review performance using deliverables, quality, and reliability signals.
Step 2: Ask questions in 5 buckets
Pick 12–18 questions total.
Use these buckets:
- Results and impact
- Execution quality and reliability
- Collaboration and behaviors
- Constraints and support
- Next-cycle plan and development
Step 3: Convert answers into a one-page plan
At the end, you want:
- 3 priorities for the next cycle
- 2 skills to build
- 2 support actions from the manager
- 1 measurable checkpoint per month
If you’re using a platform like Talstack’s Performance Reviews and Goals modules, the advantage is you can keep goals, feedback, and evidence in one place instead of hunting for it during review season. The process still works without the tool, but the admin load is lower when the workflow is structured.
The question bank (pick 12–18, not 60)
Below are question sets. Use the ones that match your situation and seniority level.
Questions about outcomes and impact (results)
Use these when you need clarity on contribution.
- What were your 3 most important outputs this cycle? Link each to a business or team outcome.
- Which goal did you move the most? What evidence shows that progress?
- What did you ship that reduced cost, reduced risk, increased revenue, or improved customer experience?
- Which deliverable took longer than expected, and why?
- If we removed one of your responsibilities next quarter, which would create the most business risk?
Questions about quality, reliability, and execution
These are useful when output exists, but the team still feels friction.
- Where did rework show up most often (your work or the system around you)?
- What quality standard did you personally enforce this quarter? Give an example.
- What’s one recurring mistake you’ve noticed in your workflow? What would prevent it?
- How predictable was your delivery (deadlines, handoffs, approvals)? What made it predictable or unpredictable?
Questions about collaboration and stakeholder trust
These help when work is cross-functional and political.
- Which stakeholders were easiest to work with, and why?
- Where did alignment break down (handoff, approvals, unclear decision maker)?
- What feedback have you received from peers or customers that you agree with?
- What feedback do you think is unfair or incomplete? What evidence would change your mind?
- If your team had to scale 2x, what collaboration habit would break first?
CIPD emphasizes that reactions to feedback matter and suggests checking how the person receives it and whether they see it as fair and actionable. These questions do that without turning the meeting into therapy.
Questions about constraints (Africa-realistic)
These questions are the difference between “improve performance” and “fix the system.”
- What slowed you down that was outside your control (tools, approvals, staffing, power, internet, procurement)?
- Which process step wastes the most time each week? If we fixed it, what would we gain?
- Where did priorities change midstream? What decision rule should we use next time?
- What resources do you need to hit the next targets: time, budget, access, training, or authority?
- What did you stop doing this quarter, and was that the right trade-off?
Constraint acknowledgement #3: Culture can make honest constraint talk hard. In high power-distance settings, employees may avoid saying “my manager is the blocker.” These prompts let them name systemic issues without attacking a person.
Questions about growth, skills, and career direction
- What skill, if you improved it by 20%, would change your output the most?
- Which tasks give you energy, and which consistently drain you?
- What project would stretch you in a healthy way next cycle?
- What training would you actually finish, given your workload?
- Where do you want to be in 12–18 months, and what role experiences would prove you’re ready?
This is where structured learning helps. If you have recurring skill gaps, Talstack’s Learning Paths, Assign Courses, and Course Catalogue are designed for targeted upskilling rather than random training days.
Questions to ask your manager (upward clarity)
If you’re the employee reading this, these are strong questions that keep the review grounded.
- What are the top 2 expectations I am not meeting yet?
- What does “excellent” look like in this role here, not in theory?
- Which metric or outcome matters most for my role next quarter?
- When I do good work, what do you want me to repeat?
- What is one behavior I should stop because it creates risk or friction?
- What would make you confident recommending me for promotion, and what’s the timeline?
Questions for self-review (self-assessment prompts)
Self-assessments work best when they force evidence.
- What did I accomplish that I can point to (links, numbers, artifacts)?
- What did I learn that changed how I work?
- Where did I drop the ball, and what pattern caused it?
- What is the one constraint I didn’t manage well?
- What support did I fail to ask for early enough?
Tables (use these to pick the right questions fast)
Table 1: Match your situation to the best questions
| Situation | Ask these questions | What you’re trying to learn | What to do next |
| --- | --- | --- | --- |
| KPIs are unclear or keep changing | “What were your top 3 outputs?” “What evidence shows impact?” “What changed midstream and why?” | Real contribution despite messy targets | Write 3 priorities for next cycle + monthly checkpoints |
| You feel output is fine but reliability is poor | “Where did rework happen?” “What made deadlines slip?” “What would make delivery predictable?” | Root causes: skill vs system vs workload | Agree on one process change + one skill focus |
| Collaboration complaints or politics | “Where did alignment break?” “Which stakeholder feedback do you accept?” “What’s unfair, and what evidence would help?” | Whether it’s behavior, expectations, or context | Define 2 collaboration behaviors + get 360 input next cycle |
| Underperformance without clear incidents | “What were the committed deliverables?” “Which ones missed, and why?” “What support was requested, and when?” | Pattern, accountability, and support gaps | Document expectations + a 30/60/90 plan |
Table 2: Evidence sources when documentation is messy
| Evidence source | Examples | What it proves | Risk if you ignore it |
| --- | --- | --- | --- |
| Work artifacts | Decks, reports, commits, tickets, SOPs, customer emails | Actual output and quality | Ratings become memory contests |
| Reliability signals | On-time delivery, rework rate, escalations, error logs | Execution discipline | You reward “busy” over “reliable” |
| Stakeholder input | 2–4 short notes from peers, clients, internal partners | Collaboration and trust | Blind spots, politics later |
| Goal progress | Quarterly priorities, OKRs, scorecards | Alignment to what mattered | Work drifts from strategy |
If your org still runs reviews in spreadsheets, you can replicate this “evidence pack” logic by forcing links to artifacts and short stakeholder inputs. If you want less admin friction, Talstack’s Performance Reviews, Goals, and Analytics make that evidence easier to pull at review time.
Quick Checklist (use this 10 minutes before the meeting)
- I know the primary purpose (development vs decision vs alignment).
- I have 5–8 pieces of evidence (links, numbers, artifacts).
- I picked 12–18 questions from the bank, not a massive list.
- I can name 2 strengths with examples.
- I can name 1–2 improvement areas with examples.
- I have 3 priorities for next cycle ready to propose.
- I have 2 support actions I’m willing to commit to as manager.
- I have a plan to document outcomes in a one-page summary.
For review quality, structure matters. SHRM’s guidance on conducting reviews emphasizes being specific about observed behavior and impact, then asking for the employee’s input. That’s what this checklist operationalizes.
Copy-paste scripts (keep it calm, keep it specific)
Script 1: Opening the review (sets tone and structure)
“Thanks for making time. I want this to be practical. We’ll do three things: review evidence from the cycle, agree what should change next, and document a simple plan. If anything feels unclear or unfair, say it and we’ll go back to examples.”
Script 2: Asking for evidence without sounding accusatory
“Walk me through the work you’re most proud of. If you can, point to the artifact or result. I’m trying to anchor this conversation on evidence, not memory.”
Script 3: Giving tough feedback (behavior + impact + next step)
“I noticed [specific behavior or pattern]. The impact was [specific impact]. For next cycle, I want us to try [specific change]. What would make that doable for you?”
This aligns with evidence-based feedback practice: specific, task-focused, actionable information tends to be more useful than vague evaluation.
Script 4: Deferring pay talk without dodging it
“I know compensation matters. Today I want us to focus on performance evidence and the plan. I will confirm the compensation timeline by [date/process] and we’ll have a separate conversation if needed.”
Script 5: Resetting goals when priorities keep shifting
“Given what changed this quarter, I don’t want to rate you against goals that no longer make sense. Let’s rewrite the next 3 priorities with clear measures and checkpoints.”
Goal clarity is not a nice-to-have. The evidence base on goal setting consistently favors specific, challenging goals over vague ones.
FAQ (the questions people actually Google)
1) What questions should I ask in a performance review if I’m the employee?
Use questions that force clarity:
- “What are the top 2 expectations I’m not meeting yet?”
- “What does excellent look like in this role here?”
- “Which outcomes should I prioritize next quarter?”
- “What would make you confident recommending me for promotion?”
2) How many questions should I ask in a performance review?
Enough to cover the 5 buckets, not enough to derail the meeting.
A good range is 12–18 questions in a 45–60 minute review.
3) What if my company’s performance review is just a rating form?
You can still run a good conversation.
Use the rating form as a record, but ask:
- “What evidence supports this rating?”
- “What would move this rating up next cycle?”
- “What support is required?”
Also, be aware of rater errors like halo and leniency, especially when the rating is the only output.
4) What if my manager doesn’t give clear feedback?
Ask for specifics:
- “Can you give one example of where I exceeded expectations?”
- “Can you give one example of where I fell short?”
- “What would you like me to repeat next quarter?”
If the feedback feels like a verdict, not input, you’ll struggle to act on it. Research on feedback conversations shows that perceived credibility and accuracy matter a lot.
5) Should I ask about salary in a performance review?
You can, but be strategic.
If your company separates pay cycles from performance cycles, ask:
- “What is the compensation timeline?”
- “What evidence and criteria are used for pay decisions here?”
If pay is decided in the same meeting, keep it near the end and keep it evidence-based.
6) How do I ask performance review questions in a culture where people avoid conflict?
Make it about work, not personality:
- “What examples do we have?”
- “What would success look like?”
- “What constraint made this hard?”
- “What change would improve output next cycle?”
This reduces the social threat while still getting truth.
7) What’s the best structure for performance review questions: strengths vs weaknesses?
Avoid “weakness hunting.”
Use:
- Strengths with examples (what to repeat)
- Improvement areas with examples (what to change)
- Next-cycle plan (what you will do)
HBR’s reporting on performance reviews and motivation highlights how review design affects engagement and behavior. Your structure is part of that design.
8) What if we want to move away from annual reviews?
Many orgs are shifting to more frequent check-ins and simpler performance management designs. Deloitte’s work is one widely cited example.
If you’re not ready for a full redesign, start with one change:
- monthly check-ins with 3 bullets (wins, blockers, next priorities)
- quarterly evidence review for decisions
One next step
Pick one upcoming performance review and do this:
- Choose 15 questions from the bank.
- Build a one-page evidence pack.
- Use the opening script.
- End the meeting with 3 priorities, 2 skill moves, and 2 manager support actions written down.
If you want the same structure without spreadsheet chaos, you can map this workflow into Talstack using Goals (to align priorities) and Performance Reviews (to capture evidence, feedback, and next actions in one place).