Why performance reviews fail (and how to fix them)

Why performance reviews fail in Africa: bias, unclear goals, weak manager habits, and poor data. Here’s a practical fix you can run in 30 days.

Oba Adeagbo

Marketing Lead

Have you ever been in this situation? It's performance review season, and your laptop fan sounds like a small generator.

Not because of “strategy,” but because you have 14 performance reviews due by Friday, three managers chasing you on WhatsApp, and a spreadsheet with a tab named “Final-Final-2.”

In the middle of the night, you catch yourself trying to score someone’s “collaboration” based on two meetings you can barely remember.

That’s the moment you realize the process is not neutral.

Performance reviews fail for predictable reasons. The good news is that you can fix them without buying a complicated system or forcing managers to write essays.

Why performance reviews fail (what’s actually breaking)

A performance review is a structured conversation (plus documentation) meant to assess outcomes, improve performance, and support decisions like pay, promotion, or development.

That’s the ideal.

In real teams, the review becomes a stressful, end-of-year event where everyone acts like they have perfect memory, perfect metrics, and zero politics. The system collapses under normal human behavior.

A lot of global organizations have admitted the traditional approach is out of step with how work actually happens. Deloitte’s work on performance management redesign is one of the best-known examples of this shift toward frequent check-ins and coaching.

The hidden cost of a “normal” review cycle

Even when nobody is shouting, the process leaks money and trust:

  • Time cost: review forms, meetings, calibration sessions, follow-ups.
  • Execution cost: people work around the review instead of on the work.
  • Culture cost: employees stop being honest; managers stop coaching.
  • Talent cost: top performers quietly leave after an “unfair” rating.

And the worst part is that many teams put in all that effort and still do not get better performance out of it. That mismatch is exactly why so many companies have tried to reinvent or reduce annual ratings.

The 7 failure modes that show up everywhere (and hit African teams harder)

I’ll keep this grounded in what HR leaders and ops managers in Africa typically deal with: fast growth, role ambiguity, uneven manager capability, documentation gaps, and real constraints around time and systems.

1) Unclear role outcomes and moving targets

If you cannot answer “what does good look like in this role?” in a few sentences, your review will become vibes.

Common signs:

  • Job descriptions exist, but they list tasks, not outcomes.
  • KPIs change mid-quarter, but nobody updates expectations.
  • “Hard work” becomes a metric because output isn’t defined.

One constraint worth acknowledging: in many African companies, roles evolve quickly and people wear multiple hats. That’s normal. But your review must reflect it, or it turns into opinion.

2) Infrequent feedback (the “12-month memory” problem)

Annual reviews force managers to summarize a year of work in one conversation. Humans cannot do that fairly.

You get:

  • Recency bias (the last two months dominate the story).
  • Surprise feedback (“I didn’t know this was an issue”).
  • Defensive employees and vague action plans.

There’s good research showing feedback interventions can help performance, but also that feedback can backfire depending on how it’s delivered and what it focuses on.

To put it simply, feedback is powerful, and sloppy feedback can do damage.

3) Rating bias and politics (halo, recency, leniency, similarity)

Bias is not a moral insult. It’s a design problem. If your system assumes people rate perfectly, your system is fragile.

Bias patterns that show up in reviews:

  • Halo effect: one strong trait “colors” everything else.
  • Recency effect: recent wins or mistakes dominate.
  • Leniency/severity: some managers rate everyone high, others rate everyone low.
  • Similarity bias: people who communicate like the manager get better scores.

Culture Amp’s bias breakdown is a practical overview HR teams often use to train managers.

For example, in higher power-distance contexts (common across many African workplaces), employees may avoid challenging feedback or self-advocacy. Reviews then reward confidence, not competence.

4) Spreadsheet workflows and missing evidence

If the review process lives in spreadsheets and email threads, you lose evidence.

What happens:

  • Managers scramble for examples at the end.
  • Employees feel judged on memory, not facts.
  • HR spends time chasing submissions instead of improving quality.

This is not about “digitization” as a buzzword. It’s about making evidence easy to capture while the work is fresh.

5) Managers aren’t trained to coach, only to judge

Many managers in growing organizations were promoted for competence, not for coaching ability.

So the review becomes:

  • a lecture,
  • a negotiation,
  • or an awkward silence with a rating at the end.

Performance management should shift toward coaching and frequent conversations.

Another thing to note is that managers are overloaded. If your fix requires them to double their admin work, it will fail. The workflow has to be lighter than what you have now.

6) Calibration becomes a negotiation, not a quality check

Calibration can be useful, but in many companies it turns into:

  • managers defending their favorites,
  • HR trying to enforce “distribution,”
  • leadership using ratings to manage payroll, not performance.

When that happens, employees quickly learn the rating is about budget, not merit.

7) Reviews don’t connect to development, so nothing changes

This is the quiet killer.

People do a review, sign a form, and go right back to the same skill gaps.

In South Africa, research has examined how managerial competencies and appraisal practices relate to SME performance, reinforcing the idea that capability and the quality of management practices matter, not just the existence of a process.

If reviews do not lead to training, coaching, or role clarity, you are running a ceremony.

What “good” looks like (without turning your company into an HR museum)

A strong performance system has three characteristics:

  1. Clear expectations: role outcomes that can be observed.
  2. Frequent course correction: short check-ins, not one annual surprise.
  3. Evidence-based decisions: examples, not personality.

Separate decisions from development (even if you cannot fully split them)

In many African companies, compensation is tightly linked to reviews. You may not be able to separate pay from performance conversations completely.

But you can separate the inputs.

A practical approach:

  • Use monthly or quarterly check-ins for coaching and development.
  • Use a shorter, structured cycle for pay/promotion decisions.
  • Keep the evidence trail consistent across both.

A simple review architecture that works in real workplaces

If your organization is between 30 and 1,000 employees, this structure tends to hold up:

  • Monthly check-in (15 minutes): priorities, blockers, support needed.
  • Quarterly review (45 minutes): outcomes, strengths, growth areas, next-quarter goals.
  • Twice-yearly decision point: compensation/promotion decisions, using evidence from the year.

Where 360 feedback helps, and where it can backfire

360 feedback can be excellent for:

  • leadership roles,
  • cross-functional collaboration,
  • behaviors that peers observe more than managers do.

It can backfire when:

  • trust is low,
  • anonymity is broken,
  • feedback is used as a weapon.

So treat 360 as a development tool first, not a courtroom.

A practical fix you can run in 30 days

This is designed for your constraints: unclear KPIs, busy managers, and imperfect systems.

Step 0: Decide what this review cycle is for

Pick one primary objective:

  • Performance improvement
  • Fair decisions (pay/promo)
  • Skills development
  • Compliance/documentation

If you try to optimize for all four in one form, you’ll get none.

Write the objective at the top of the review doc. It sounds small. It changes behavior.

Step 1: Rewrite performance into 4–6 observable outcomes per role

Do this for your highest-impact roles first (sales, operations, customer support, engineering, finance).

For each role, define:

  • 2–3 output outcomes (what gets delivered)
  • 1–2 quality outcomes (how well it’s done)
  • 1 collaboration/behavior outcome (how work gets done)

Example (Operations Lead):

  • On-time delivery rate maintained above X
  • Stockouts reduced by Y
  • Incident response time within Z
  • Weekly reporting accuracy and timeliness
  • Cross-team handoffs documented and followed

If your KPIs are unclear, start with outcomes and proxy measures. You can tighten later.
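If it helps to make this concrete, here is a minimal sketch of the same Operations Lead outcomes written as structured data, so the review form and the evidence log can reference one shared definition. The `Outcome` fields and the Python shape are illustrative, not a prescribed format, and the X, Y, Z targets are deliberately left as placeholders until your KPIs firm up.

```python
# A sketch of Step 1 outcomes as structured data (illustrative fields,
# not a required schema). Targets stay as X/Y/Z placeholders on purpose.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str     # what is being assessed
    kind: str     # "output", "quality", or "behavior"
    measure: str  # metric or proxy measure
    target: str   # expectation; tighten as your KPIs mature

OPERATIONS_LEAD = [
    Outcome("On-time delivery", "output", "on-time delivery rate", "above X"),
    Outcome("Stock availability", "output", "stockouts reduced", "by Y"),
    Outcome("Incident response", "quality", "response time", "within Z"),
    Outcome("Reporting", "quality", "weekly report accuracy and timeliness", "every week"),
    Outcome("Handoffs", "behavior", "cross-team handoffs documented and followed", "all handoffs"),
]
```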

Step 2: Add lightweight monthly check-ins (15 minutes)

Monthly beats quarterly when your environment changes fast.

Agenda:

  • What did you ship since last check-in?
  • What’s blocked?
  • What’s the one priority before the next check-in?
  • What support do you need from me?

Keep notes. Short notes win.

Step 3: Collect evidence as you go (not at the deadline)

Create an “evidence log” for each employee:

  • wins (with links, numbers, or examples)
  • misses (what happened, what changed)
  • feedback from stakeholders (short, specific)

This is where many reviews become fair.

Talstack’s Performance Reviews module and Goals feature are built for this exact “evidence over memory” problem, especially when you’re trying to escape spreadsheet chaos. You define goals, collect self/peer/manager feedback, and see progress without hunting through email threads. 
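If you are running this in a shared sheet or a small script rather than a platform, an evidence log can be as simple as the sketch below. The field names and the sample entries (names, dates, links) are illustrative, not a required schema.

```python
# A minimal evidence log sketch: one short, dated, specific entry per event.
# Field names and sample data are illustrative.

from dataclasses import dataclass
from datetime import date

@dataclass
class EvidenceEntry:
    employee: str
    when: date
    kind: str        # "win", "miss", or "stakeholder_feedback"
    summary: str     # short and specific beats long and vague
    link: str = ""   # optional: document, dashboard, or ticket URL

log = [
    EvidenceEntry("A. Bello", date(2024, 3, 4), "win",
                  "Cut supplier onboarding from 10 days to 6", "https://..."),
    EvidenceEntry("A. Bello", date(2024, 3, 18), "miss",
                  "Two late handoff notes; fixed after process change"),
]
```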

Step 4: Reduce bias with anchors, examples, and a simple calibration

Do three things:

  1. Behavioral anchors: define what “meets expectations” looks like with examples.
  2. Require two evidence points per rating: one output, one behavior.
  3. Calibration for outliers only: do not calibrate the whole company. Start with the top and bottom 10–15%.

Also, train managers on common biases. 
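For point 3, “calibration for outliers only” can be made mechanical. Here is a small sketch, assuming numeric ratings; the 15% tail and the sample scores are illustrative, and the threshold remains a judgment call.

```python
# Flag only the top and bottom ~10-15% of ratings for calibration,
# instead of calibrating the whole company.

def calibration_outliers(ratings: dict[str, float], tail: float = 0.15) -> list[str]:
    """Return employees whose ratings fall in the bottom or top `tail` share."""
    ranked = sorted(ratings, key=ratings.get)      # ascending by rating
    k = max(1, round(len(ranked) * tail))          # at least one per tail
    return ranked[:k] + ranked[-k:]                # bottom k and top k

scores = {"Ada": 4.6, "Ben": 3.1, "Chi": 3.0, "Dee": 2.1, "Efe": 3.3,
          "Fola": 4.8, "Gina": 3.2}
print(calibration_outliers(scores))  # ['Dee', 'Fola'] with tail=0.15
```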

Step 5: Turn the review into a development plan with training attached

Every review should end with:

  • one strength to double down on,
  • one skill gap,
  • one next-quarter goal tied to that skill gap.

Then attach learning.

This is where platforms help, but you can do it manually too:

  • Assign one internal mentor
  • Share one internal SOP
  • Assign one external course

If you want to operationalize this, Talstack’s Learning Paths and Assign Courses features make it easier to connect “review feedback” to “training that closes the gap,” then actually track completion and improvement over time. 

Step 6: Measure what changed (so it doesn’t die next quarter)

Pick 3 metrics:

  • review completion rate (by manager)
  • quality check: % of reviews with evidence attached
  • follow-through: % of employees with a development action completed in 30–60 days

Talstack’s Analytics capability is designed for exactly this kind of visibility: response rates, goal attainment, learning engagement, completion. When you measure it, the process stays alive.
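Even before you are on a platform, these three metrics are easy to compute from whatever records you have. A minimal sketch, assuming each review record carries a completion flag, an evidence count, and a development-action status (the field names are mine, not a standard):

```python
# Compute the three Step 6 health metrics from simple review records.
# Field names and sample data are illustrative.

reviews = [
    {"manager": "NK", "completed": True,  "evidence_points": 3, "dev_action_done": True},
    {"manager": "NK", "completed": True,  "evidence_points": 0, "dev_action_done": False},
    {"manager": "TO", "completed": False, "evidence_points": 0, "dev_action_done": False},
    {"manager": "TO", "completed": True,  "evidence_points": 2, "dev_action_done": True},
]

total = len(reviews)
completion_rate = sum(r["completed"] for r in reviews) / total
with_evidence   = sum(r["evidence_points"] >= 2 for r in reviews) / total
follow_through  = sum(r["dev_action_done"] for r in reviews) / total

print(f"completion {completion_rate:.0%}, evidence {with_evidence:.0%}, "
      f"follow-through {follow_through:.0%}")
# completion 75%, evidence 50%, follow-through 50%
```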

Failure mode → Fix table (use this in your next HR meeting)

| Why performance reviews fail | What it looks like in your company | Practical fix you can implement this month |
| --- | --- | --- |
| Unclear expectations | "He works hard" becomes the metric | Define 4–6 role outcomes; rewrite the form to match |
| Infrequent feedback | Surprises, recency bias, defensive meetings | Monthly 15-minute check-ins with short notes |
| Bias in ratings | Favorites win, quiet performers lose | Evidence requirement + bias training + light calibration |
| Spreadsheet chaos | Missing history, version control issues | Centralize reviews and evidence logs in one place |
| Manager skill gap | Reviews feel like judgment, not coaching | Give managers a short script + coaching micro-training |
| No link to development | Same gaps repeat every cycle | End every review with a learning plan and deadlines |
| No measurement | HR runs a ceremony, not a system | Track completion, evidence quality, follow-through |

Copy-paste scripts (modify for your use case)

1) Monthly check-in opener (manager)

“Quick check-in. I want to make sure you’re clear on priorities and you’re not blocked. What did you move forward since our last chat, and what’s stuck?”

2) Evidence-based feedback (manager)

“I’m going to describe what I observed, the impact, and what I want to see next.
In the last two weeks, the client handoff notes were missing in three cases. It slowed delivery and created rework. Next month, I need handoff notes submitted same day, even if they’re short. What’s making that hard right now?”

3) Self-review prompt (HR to employees)

“For your self-review, list: (1) your top 3 outcomes with evidence, (2) one challenge you handled, (3) one skill you want to build next quarter.”

4) Pay conversation boundary (manager)

“We’re going to talk about growth and performance today. Compensation decisions are handled in the next step with leadership after calibration. I’ll share the outcome and the rationale when it’s finalized.”

5) Closing the review with a plan (manager)

“Here’s what I’m taking from this: your strength is ____. Your main growth area is ____. Next quarter, we’ll focus on ____. By next Friday, let’s agree on one goal and one learning action.”

Quick Checklist (print this)

  • The review objective is written at the top (decision, development, or both)
  • Each role has 4–6 observable outcomes
  • Managers are running monthly 15-minute check-ins
  • Every rating has at least two evidence points
  • Bias reminders are included (halo, recency, leniency, similarity)
  • Calibration is limited to outliers and role-critical teams
  • Every review ends with a development plan (one skill, one action, one deadline)
  • HR tracks completion, evidence quality, and follow-through

FAQs

Why do performance reviews fail even when we have KPIs?

Because KPIs alone don’t solve bias, missing evidence, and weak manager coaching habits. Also, many KPIs are lagging or team-based, so individuals get judged on outcomes they didn’t fully control. Add evidence logs and short check-ins to make KPIs usable. 

Should we remove ratings completely?

Not always. Some organizations function better with ratings for pay and promotion decisions. The bigger issue is rating design and usage. If you keep ratings, require evidence and separate development conversations from compensation decisions where possible. 

How often should we do performance reviews in a fast-moving company?

If priorities shift monthly, do monthly check-ins and quarterly reviews. Annual-only reviews are usually too slow for fast-changing environments and amplify recency bias. 

What’s the simplest way to reduce bias in reviews?

Two rules: use behavioral anchors, and require evidence for every rating. Then do a short calibration for outliers only. Bias education helps, but design changes help more.

Do 360 reviews work in African workplaces?

They can, but only when trust is present and anonymity is protected. Start with development-only 360 feedback for managers and leaders, then expand slowly.

What if managers say they don’t have time?

They don’t. That’s why the fix is short check-ins and lightweight notes, not longer forms. If your new system adds admin work, it will fail. Make the process lighter than what you have now. 

How do we connect reviews to training without a big L&D budget?

Use internal SOPs, peer coaching, small project assignments, and a short curated course list. A structured learning path beats a big library nobody uses. Research on training and performance in African public sector contexts also reinforces the need for structured approaches, not ad hoc learning.

How do we know the new system is working?

Track three things: completion rate, evidence quality, and follow-through on development actions. If those improve, fairness and performance usually improve next. 

One next step

Pick one department and run the 30-day fix as a pilot: rewrite role outcomes, start monthly check-ins, and enforce evidence-based ratings. After 30 days, keep what worked and tighten what didn’t.
