
Why employees hate performance reviews (and how to fix it)


Why employees hate performance reviews: the real reasons (trust, bias, vague feedback) and a practical fix you can run in 30 days for African teams.

Oba Adeagbo

Marketing Lead

March 24, 2026

7 Mins Read

If you’re an HR leader, you’ve probably sat in a room where an employee stares at the rating scale like it’s a court judgment.

They are not upset because they “hate accountability.”

They are upset because the review feels random, late, and weirdly personal. And if you run teams in Africa, you’ve probably seen the extra layer: hierarchy, politics, and low documentation make the “random” feel even more dangerous.

Here’s what’s usually underneath the hate.

The “surprise meeting” problem

Employees can handle tough feedback. They struggle with surprise feedback.

If your first serious conversation about performance happens in a quarterly or annual review, you have already lost trust. People interpret the review as punishment, not development.

The “ratings are political” problem

When ratings decide pay, promotion, or who gets cut, people assume politics.

Sometimes they are right.

Sometimes it’s not politics, it’s just bad process: unclear standards, one manager’s memory, and no calibration. Either way, the employee experiences it as “political.”

The “feedback isn’t useful” problem

People do not hate feedback. They hate low-value feedback.

Gallup describes a “feedback reputation problem” where employees want less frequent feedback because what they get is unclear, critical, or “going through the motions.” 

That’s brutal, because frequent feedback can be linked to engagement when it is done well. 

The “my manager didn’t watch my work” problem

In many companies, managers have too many direct reports, too many projects, and too little structured tracking.

So the review becomes a memory contest. Whoever is most recent, most visible, or most confident wins.

The “HR wants forms, not growth” problem

This one hurts, because HR often didn’t design it to be annoying.

But if the review experience is mostly:

  • filling forms
  • chasing signatures
  • arguing about scores
  • no clear development plan afterward

…employees don’t feel supported. They feel processed.

What a performance review should be

A performance review should be a decision meeting.

It answers:

  • What level of performance was achieved relative to expectations?
  • What compensation or role decisions follow from that?
  • What needs to change next cycle?

Feedback, coaching, and growth conversations should happen outside the review, in regular check-ins. (The review should summarize what has already been discussed, not introduce brand-new critiques.)

This aligns with where performance management is going in many systems: less “annual event,” more regular conversations and continuous performance management.

A useful mental split:

  • Coaching conversations = improve future performance
  • Review decisions = document past performance and make calls

When you blend them into one meeting, you get fear, defensiveness, and vague feedback.

Why it matters (what poor reviews cost you)

You don’t need a fancy HR dashboard to see the cost. You see it in behavior.

Attrition and disengagement

When feedback is valuable, it correlates with better outcomes. Gallup and Workhuman report that employees who strongly agree they receive valuable feedback are much more likely to be engaged and less likely to be burned out or job hunting. 

If your review process produces the opposite experience, you are effectively funding disengagement.

Pay and promotion credibility

Once employees think ratings are political, every pay decision becomes a negotiation.

Then managers get pulled into side conversations:

  • “Why did she get more?”
  • “What do I need to do to be promoted?”
  • “So the work I did in March doesn’t count?”

The hidden cost is management time and resentment.

Manager time, rework, and disputes

A bad review cycle creates:

  • rebuttals
  • appeals
  • HR mediation
  • back-and-forth email chains

You lose weeks, not hours.

Common mistakes that create review hatred

These show up across industries, but I’ll call out how they tend to look in African workplaces, especially in fast-growing SMEs and mid-sized companies.

1) Annual-only reviews

Annual-only reviews create surprise, anxiety, and memory bias. If you can only change one thing, change this.

2) Vague competencies and “good attitude” scoring

If your competency list includes words like:

  • “professionalism”
  • “ownership”
  • “communication”

…but has no behavioral anchors, you are inviting interpretation fights.

3) No evidence, just memory

When managers do not keep evidence notes, the review becomes recency bias plus vibes.

4) One rater holds all power

If one person can tank someone’s rating without any checks, the system feels unsafe.

5) No calibration

Calibration is where leadership or a panel checks ratings for consistency and bias across teams.

Without calibration, employees will compare across departments and assume favoritism.

6) Feedback without next actions

If the review ends with “try harder,” employees feel judged, not guided.

Also, research on feedback effects is messy. A well-known line of work on feedback interventions highlights that feedback can sometimes backfire depending on how it’s designed and where attention gets directed.
So the fix is not “more feedback.” It’s better-designed feedback with clear next actions.

Step-by-step fix (a 30-day reset you can run this quarter)

This is the approach I use when a company tells me: “We want structure, but we don’t want bureaucracy.”

Step 0: Decide what the review is for (pick one primary purpose)

Choose the primary purpose for this cycle:

  • Compensation decisions
  • Promotion decisions
  • Performance improvement
  • Development planning

You can do all four across the year, but if you try to do all four in one meeting, you get confusion.

Write it down in one sentence and put it at the top of the form.

Step 1: Replace “opinions” with evidence rules

Create simple evidence rules that managers must follow:

  • Every rating needs 2–3 concrete examples.
  • Examples must include context (project, timeline, constraints).
  • Examples must show impact (what changed because of the work).
  • Avoid personality labels (lazy, rude, stubborn). Use observable behavior.

If you do this well, you will cut politics by half without any fancy tooling.

Step 2: Split the cycle into check-ins + decision day

Minimum viable rhythm:

  • 15–20 minute monthly check-in
  • 60-minute quarterly review
  • annual summary only if needed for pay planning

The point is not meeting frequency. The point is: no surprises.

Step 3: Use structured prompts that force specificity

Stop asking: “How is John doing?”

Ask:

  • What should this person keep doing because it clearly works?
  • What should they stop doing because it creates friction or risk?
  • What should they start doing to hit the next level?

For peer feedback, do not ask open-ended “tell us anything.” That’s where revenge and politics hide.

Instead, ask about specific interactions and observable outcomes.

Step 4: Calibrate for fairness (lightweight, not painful)

Run a 60–90 minute calibration session:

  • HR facilitator + department heads
  • Review outliers (highest and lowest ratings)
  • Ask: “What evidence supports this rating?”
  • Check for patterns (one manager rates everyone low, one team rates everyone high)

This is the fairness layer most companies skip, and employees notice.
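If your ratings already live in a spreadsheet export, you can pre-screen for those patterns before the calibration meeting instead of hunting for them live. A minimal sketch (the data shape, names, and 0.75-point threshold are illustrative assumptions, not any particular tool’s export format):

```python
from statistics import mean

def flag_rating_outliers(ratings, threshold=0.75):
    """Flag managers whose average rating sits far from the company average.

    ratings: list of (manager_name, rating) tuples, e.g. from a CSV export.
    threshold: how many rating points a manager's mean may deviate from the
               overall mean before we queue them for calibration review.
    """
    overall = mean(r for _, r in ratings)
    by_manager = {}
    for manager, rating in ratings:
        by_manager.setdefault(manager, []).append(rating)
    # Keep only managers whose team-wide mean is suspiciously high or low.
    return {
        manager: round(mean(rs), 2)
        for manager, rs in by_manager.items()
        if abs(mean(rs) - overall) > threshold
    }

# Hypothetical sample: one lenient rater, one harsh rater, one near the mean.
sample = [
    ("Ada", 4.5), ("Ada", 4.8), ("Ada", 4.6),
    ("Bayo", 2.1), ("Bayo", 2.4), ("Bayo", 2.0),
    ("Chika", 3.4), ("Chika", 3.6), ("Chika", 3.2),
]
print(flag_rating_outliers(sample))  # → {'Ada': 4.63, 'Bayo': 2.17}
```

The point is not the code; it is that “who do we discuss first?” becomes a five-minute query instead of a debate.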

Deloitte’s research emphasizes how trust and perceptions of fairness shape whether performance management works.

Step 5: Close the loop with development actions

Every review should end with:

  • 1 performance target (next 30–90 days)
  • 1 capability to build (skill or behavior)
  • 1 support action from the manager (resource, coaching, unblocking)
  • 1 learning action (course, assignment, shadowing)

This is where the review stops being “judgment day” and becomes “direction day.”

A practical table: why reviews fail and what fixes them

What employees hate | What it causes | Fix that actually works | What to track
Surprise feedback | Defensiveness, distrust, “this is unfair” | Monthly check-ins + quarterly summary | Check-in completion rate
Vague ratings | Politics, arguments, appeals | Behavior anchors + 2–3 evidence examples per rating | % of ratings with evidence notes
One-person power | Fear, silence, favoritism claims | Calibration panel + manager justification rules | Outlier ratings reviewed
Low-value feedback | Employees stop listening | Structured prompts: keep/stop/start + next 30-day action | Action-plan completion
No growth path | Disengagement, resignations | Tie review to skills + learning plan | Skill growth milestones

Tools: checklist + scripts + examples

Before tools, three constraint acknowledgements (because real life exists):

  • Time constraint: Your managers are busy. If you design a process that needs 2 hours per person every month, it will die quietly by week three.
  • KPI constraint: Some roles have unclear metrics (operations, customer success, admin). You still need evidence rules and examples.
  • Documentation constraint: In many African companies, work is discussed in WhatsApp, calls, and hallway chats. If you don’t capture evidence in a lightweight way, memory will run the show.

A simple evidence note template (manager uses weekly)

Keep a running note per person. Five bullet points max per week.

  • Project / task:
  • What happened:
  • Evidence (link, message, output):
  • Impact:
  • Follow-up needed:

If you do this consistently, quarterly reviews stop feeling like fiction writing.
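The template also pays off at review time if each note is kept as a structured row, because you can then check who is short of evidence before the meeting. A minimal sketch (the field names and people are hypothetical; any spreadsheet or form tool that exports rows works the same way):

```python
from collections import Counter
from datetime import date

# One evidence note per row, mirroring the five-bullet template above.
notes = [
    {"person": "Tunde", "week": date(2026, 1, 5),
     "task": "Q1 invoice cleanup", "what": "Closed 40 overdue invoices",
     "evidence": "link-to-sheet", "impact": "Receivables reduced",
     "follow_up": None},
    {"person": "Tunde", "week": date(2026, 1, 12),
     "task": "Vendor onboarding", "what": "Missed the agreed Friday handover",
     "evidence": "email thread", "impact": "Ops lost two days",
     "follow_up": "Agree a handover checklist"},
]

# Before review week: flag anyone below the 8-12 notes the checklist expects.
notes_per_person = Counter(n["person"] for n in notes)
short = {person: count for person, count in notes_per_person.items() if count < 8}
print(short)  # → {'Tunde': 2}: needs more evidence captured before the review
```

Anyone who shows up in that dictionary is headed for a memory-contest review, and you find out weeks early instead of in the meeting.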

Example: a lightweight quarterly review design (50–300 staff)

  • Inputs: manager notes, employee self-summary, 2 peer inputs (optional)
  • Meeting: 45–60 minutes
  • Outputs: rating (if you use them), pay/promo recommendation (if needed), 30–90 day plan

Where Talstack can fit naturally: if you’re tired of tracking this in spreadsheets, a tool like Talstack Performance Reviews can hold the evidence, prompts, and outcomes in one place so the process does not depend on whoever still has the latest version of “Final_Final2.xlsx.” (I’ve seen that file. It has trauma.)

If you also run goal cycles, Talstack Goals helps you connect what someone committed to with what they delivered, and shows progress without chasing updates.

And if you want feedback culture without chaos, 360 Feedback can collect structured peer inputs with constraints and auditability, instead of free-form revenge essays.

Copy-paste scripts

Script 1: Manager sets the tone before the review cycle

Subject: Quick note on how reviews will work this quarter

Hi team,
This quarter’s review will not introduce surprises. If something is off track, you’ll hear it in our check-ins first.

For the review itself, I’ll base decisions on examples and outcomes, not general impressions. I’ll also share what “good” looks like for your role so we’re not guessing.

If you want to flag work I may have missed, send it before the meeting with links or artifacts.

Thanks.

Script 2: Employee asks for clarity without sounding defensive

Hi [Manager],
Before the review, I want to make sure I understand expectations clearly.

Here are 3 outcomes I believe I delivered this quarter (with links).
Can you confirm which of these mattered most, and what I should prioritize next cycle?

Also, if there’s any gap you’re seeing, I’d rather hear it directly with examples so I can fix it.

Script 3: HR sets rules for peer feedback to reduce politics

Peer feedback guidelines (please follow):

  • Focus on direct collaboration you’ve had in the last 90 days.
  • Describe observable behavior and impact, not personality.
  • Include one suggestion the person can act on within 30 days.
  • Do not mention pay, promotion, or personal disputes.

Quick Checklist (use this the week before reviews)

  • Review purpose is written in one sentence (pay, promotion, improvement, development)
  • Every rating requires 2–3 evidence examples
  • Managers have at least 8–12 evidence notes per person for the quarter
  • Employee self-summary collected (max 1 page)
  • Calibration session scheduled for outliers and inconsistencies
  • Every review ends with a 30–90 day plan (1 target, 1 skill, 1 support action)
  • HR has a dispute path (clear timeline and criteria)

FAQs

How often should we do performance reviews?

Quarterly works well for most growing companies because it reduces surprises and keeps decisions timely. If you can’t do quarterly, keep quarterly check-ins and do decisions twice a year.

Should we remove ratings?

Sometimes. Ratings can help when you need compensation and promotion decisions, but they can also increase politics if standards are vague.

If you keep ratings, anchor them with evidence and calibration. If you remove them, you still need clear decision rules or you just moved politics into a different room.

What if managers do not have time?

Then the design must be lighter, not more “complete.”

Use:

  • 15-minute monthly check-ins
  • 5-minute weekly evidence notes
  • one quarterly decision meeting

Also, track completion. If 40% of check-ins are skipped, your process is not real.
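Tracking completion does not need a dashboard; a count of scheduled versus held check-ins per team is enough to see where the process is dying. A minimal sketch (team names and counts are made up for illustration):

```python
def checkin_completion_rate(scheduled, held):
    """Share of scheduled check-ins that actually happened, per team.

    scheduled / held: dicts of team -> count, e.g. pulled from calendars.
    Returns team -> completion rate between 0.0 and 1.0.
    """
    return {
        team: round(held.get(team, 0) / total, 2)
        for team, total in scheduled.items()
        if total > 0  # skip teams with nothing scheduled to avoid division by zero
    }

scheduled = {"Sales": 12, "Ops": 12, "Finance": 8}
held = {"Sales": 11, "Ops": 5, "Finance": 8}
print(checkin_completion_rate(scheduled, held))
# → {'Sales': 0.92, 'Ops': 0.42, 'Finance': 1.0}
# Ops at 42% is the "your process is not real" signal from the text above.
```

Review the low numbers with the managers involved before blaming the process design; the fix is usually lighter meetings, not more reminders.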

How do we reduce bias?

Bias never disappears, but you can reduce its impact by:

  • evidence requirements
  • structured prompts
  • calibration
  • using multiple data points (manager + self + limited peer)

And remember: feedback design matters. Some feedback interventions can backfire depending on how they direct attention and how people receive them.

How do we handle peer feedback safely?

Do not collect peer feedback without constraints. Use:

  • specific questions
  • limited reviewer count
  • time window (last 90 days)
  • HR moderation rules

What do we do when someone disputes the result?

Require the dispute to reference:

  • specific evidence
  • the stated expectations for the role
  • what decision the employee is requesting (clarification, re-rating, development plan)

No vague “this is unfair” emails. Those go nowhere and waste time.

How do we link reviews to learning and growth?

After each review, assign one learning action:

  • a course
  • a project stretch
  • shadowing

If you use a platform, this is where Learning Paths, Assign Courses, and Competency Tracking become useful, because you can tie the review outcome to a real development plan and track completion over time.

Gallup’s point about valuable feedback is the north star here: people engage when feedback helps them grow, not when it feels like a ritual.

One next step

Pick one team and run the 30-day reset: monthly check-ins, evidence rules, structured prompts, and one calibration session. Then measure disputes and engagement sentiment before you roll it out company-wide.
