Why employees hate performance reviews: the real reasons (trust, bias, vague feedback) and a practical fix you can run in 30 days for African teams.
Marketing Lead

March 24, 2026 · 7 min read
If you’re an HR leader, you’ve probably sat in the room where an employee stares at the rating scale like it’s a court judgment.
They are not upset because they “hate accountability.”
They are upset because the review feels random, late, and weirdly personal. And if you run teams in Africa, you’ve probably seen the extra layer: hierarchy, politics, and low documentation make the “random” feel even more dangerous.
Here’s what’s usually underneath the hate.
Employees can handle tough feedback. They struggle with surprise feedback.
If your first serious conversation about performance happens in a quarterly or annual review, you have already lost trust. People interpret the review as punishment, not development.
When ratings decide pay, promotion, or who gets cut, people assume politics.
Sometimes they are right.
Sometimes it’s not politics, it’s just bad process: unclear standards, one manager’s memory, and no calibration. Either way, the employee experiences it as “political.”
People do not hate feedback. They hate low-value feedback.
Gallup describes a “feedback reputation problem” where employees want less frequent feedback because what they get is unclear, critical, or “going through the motions.”
That’s brutal, because frequent feedback can be linked to engagement when it is done well.
In many companies, managers have too many direct reports, too many projects, and too little structured tracking.
So the review becomes a memory contest. Whoever is most recent, most visible, or most confident wins.
This one hurts, because HR often didn’t design it to be annoying.
But if the review experience is mostly forms, deadlines, and reminder emails, employees don’t feel supported. They feel processed.
A performance review should be a decision meeting.
It answers two questions: how did this person perform against agreed expectations, and what decision follows (pay, promotion, role change, or a performance plan)?
Feedback, coaching, and growth conversations should happen outside the review, in regular check-ins. (The review should summarize what has already been discussed, not introduce brand-new critiques.)
This aligns with where performance management is going in many systems: less “annual event,” more regular conversations and continuous performance management.
A useful mental split: check-ins are for coaching and development; the review is for decisions.
When you blend them into one meeting, you get fear, defensiveness, and vague feedback.
You don’t need a fancy HR dashboard to see the cost. You see it in behavior.
When feedback is valuable, it correlates with better outcomes. Gallup and Workhuman report that employees who strongly agree they receive valuable feedback are much more likely to be engaged and less likely to be burned out or job hunting.
If your review process produces the opposite experience, you are effectively funding disengagement.
Once employees think ratings are political, every pay decision becomes a negotiation.
Then managers get pulled into side conversations about why someone else’s rating came out higher.
The hidden cost is management time and resentment.
A bad review cycle creates disputes, appeals, back-channel lobbying, and quiet disengagement.
You lose weeks, not hours.
These failure patterns show up across industries, but I’ll call out how they tend to look in African workplaces, especially in fast-growing SMEs and mid-sized companies.
Annual-only reviews create surprise, anxiety, and memory bias. If you can only change one thing, change this.
If your competency list includes vague words like “proactive,” “ownership,” or “team player” with no behavioral definitions, managers will rate on interpretation, not evidence.
When managers do not keep evidence notes, the review becomes recency bias plus vibes.
If one person can tank someone’s rating without any checks, the system feels unsafe.
Calibration is where leadership or a panel checks ratings for consistency and bias across teams.
Without calibration, employees will compare across departments and assume favoritism.
If the review ends with “try harder,” employees feel judged, not guided.
Also, research on feedback effects is messy. A well-known line of work on feedback interventions highlights that feedback can sometimes backfire depending on how it’s designed and where attention gets directed.
So the fix is not “more feedback.” It’s better-designed feedback with clear next actions.
This is the approach I use when a company tells me: “We want structure, but we don’t want bureaucracy.”
Choose the primary purpose for this cycle: compensation, promotion, development, or performance improvement.
You can do all four across the year, but if you try to do all four in one meeting, you get confusion.
Write it down in one sentence and put it at the top of the form.
Create simple evidence rules that managers must follow: for example, every rating must cite dated, specific examples of work, not general impressions.
If you do this well, you will cut politics by half without any fancy tooling.
Minimum viable rhythm: a short monthly check-in per person, plus the quarterly review.
The point is not meeting frequency. The point is: no surprises.
Stop asking: “How is John doing?”
Ask: “What did John deliver this quarter, and what is one specific example?”
For peer feedback, do not ask open-ended “tell us anything.” That’s where revenge and politics hide.
Instead, ask about specific interactions and observable outcomes.
Run a 60–90 minute calibration session where managers present their ratings with evidence and leadership checks for consistency across teams.
This is the fairness layer most companies skip, and employees notice.
Deloitte’s research emphasizes how trust and perceptions of fairness shape whether performance management works.
Every review should end with a clear decision, the priorities for next cycle, and one development action.
This is where the review stops being “judgment day” and becomes “direction day.”
Before tools, acknowledge the constraints of real life: managers are overloaded, documentation habits are weak, and tooling budgets are limited.
Keep a running note per person. Five bullet points max per week.
If you do this consistently, quarterly reviews stop feeling like fiction writing.
Where Talstack can fit naturally: if you’re tired of tracking this in spreadsheets, a tool like Talstack Performance Reviews can hold the evidence, prompts, and outcomes in one place so the process does not depend on whoever still has the latest version of “Final_Final2.xlsx.” (I’ve seen that file. It has trauma.)
If you also run goal cycles, Talstack Goals helps you connect what someone committed to with what they delivered, and shows progress without chasing updates.
And if you want feedback culture without chaos, 360 Feedback can collect structured peer inputs with constraints and auditability, instead of free-form revenge essays.
Subject: Quick note on how reviews will work this quarter
Hi team,
This quarter’s review will not introduce surprises. If something is off track, you’ll hear it in our check-ins first.
For the review itself, I’ll base decisions on examples and outcomes, not general impressions. I’ll also share what “good” looks like for your role so we’re not guessing.
If you want to flag work I may have missed, send it before the meeting with links or artifacts.
Thanks.
Hi [Manager],
Before the review, I want to make sure I understand expectations clearly.
Here are 3 outcomes I believe I delivered this quarter (with links).
Can you confirm which of these mattered most, and what I should prioritize next cycle?
Also, if there’s any gap you’re seeing, I’d rather hear it directly with examples so I can fix it.
Peer feedback guidelines (please follow): comment only on work you directly observed, give one specific example per point, and describe behavior and outcomes, not personality.
How often should we run reviews?
Quarterly works well for most growing companies because it reduces surprises and keeps decisions timely. If you can’t do quarterly, keep quarterly check-ins and do decisions twice a year.
Should we keep numeric ratings?
Sometimes. Ratings can help when you need compensation and promotion decisions, but they can also increase politics if standards are vague.
If you keep ratings, anchor them with evidence and calibration. If you remove them, you still need clear decision rules or you just moved politics into a different room.
What if managers say they have no time?
Then the design must be lighter, not more “complete.” Use one-page forms, five-bullet evidence notes, and short monthly check-ins.
Also, track completion. If 40% of check-ins are skipped, your process is not real.
How do we reduce bias?
Bias never disappears, but you can reduce its impact with written standards, evidence rules, structured prompts, and calibration sessions.
And remember: feedback design matters. Some feedback interventions can backfire depending on how they direct attention and how people receive them.
Do not collect peer feedback without constraints. Use structured questions about specific interactions, require examples, and keep responses auditable.
What about rating disputes? Require the dispute to reference specific examples, the written standard in question, and the evidence on record.
No vague “this is unfair” emails. Those go nowhere and waste time.
After each review, assign one learning action: a course, a stretch assignment, or a coaching pairing.
If you use a platform, this is where Learning Paths, Assign Courses, and Competency Tracking become useful, because you can tie the review outcome to a real development plan and track completion over time.
Gallup’s point about valuable feedback is the north star here: people engage when feedback helps them grow, not when it feels like a ritual.
Pick one team and run the 30-day reset: monthly check-ins, evidence rules, structured prompts, and one calibration session. Then measure disputes and engagement sentiment before you roll it out company-wide.