Train managers to rate fairly: the biases that skew ratings in African companies, a practical calibration approach, and the questions that force evidence over opinion.
Marketing Lead

April 7, 2026
Every performance review system has one critical vulnerability: the manager.
The form can be well-designed. The criteria can be clear. The timeline can be communicated properly. And the final rating can still be completely wrong because the manager walked in with a story about the employee and rated the story instead of the performance.
Training managers to rate fairly is not about removing their judgment. It is about giving them better tools to distinguish their actual observation of work from the unconscious shortcuts their brain takes to make judgment easier. This article covers the biases that appear most often in African company performance reviews, a practical calibration approach, and the specific questions HR can use to create accountability without creating a hostile process.
Rating bias is not a character flaw. It is a feature of how human cognition works under uncertainty. When a manager does not have clear evidence about an employee's performance, their brain fills the gap with pattern-matching: they rate based on how likeable the employee is, how visible their work has been, how similar the employee's style is to the manager's own.
Research consistently shows that untrained raters inflate ratings for people they like, deflate them for people they dislike, and cluster most employees in the middle to avoid conflict. According to a 2025 analysis by Engagedly, using a consistent, well-defined evaluation process is six times more effective than relying on subjective manager judgment alone.
Six rating biases appear most often in Nigerian and Ghanaian company performance reviews:
Halo effect. One outstanding quality causes the manager to rate the employee positively across all dimensions. The employee who gives excellent presentations gets rated highly on teamwork, planning, and reliability even without evidence. Named by psychologist Edward Thorndike in 1920, it remains one of the most common errors in performance appraisals today.
Horn effect. The inverse of the halo effect: one mistake or negative trait causes the manager to discount positive performance across the board. The employee who missed one critical deadline in February gets rated below expectations in July, even if every other delivery since February was on time.
Recency bias. Managers weight the last four to six weeks of performance more heavily than the full review period. An employee who performed at a high level for nine months and had a difficult final quarter will often receive a mid-range rating that their full-year performance does not support.
Central tendency bias. Many managers rate everyone in the middle to avoid difficult conversations at either extreme. Rating everyone "meets expectations" feels fair but is actually a failure of observation. It makes it impossible to differentiate genuine high performers from adequate ones, which frustrates the people who deserve to be recognised.
Similarity bias. Managers rate employees who share their background, communication style, or work habits more favourably. In diverse African teams, where a Lagos-based manager may be rating employees from Kano, Ibadan, or Abuja with different communication norms, this bias can look like cultural preference but is actually a fairness failure.
Leniency bias. Some managers rate everyone generously to avoid conflict or to appear supportive. The result is rating inflation: everyone looks like a high performer, decisions about promotions and development investment become impossible, and genuine high performers feel undervalued when they realise their outstanding rating means the same as everyone else's.
This is a practical three-step framework for training managers before each review cycle. It can be delivered as a 90-minute session, either in person or remote.
Step one: require evidence. Before a manager writes a single rating, they must compile evidence. The rule is simple: no evidence, no rating.
Evidence includes: goal completion data, project outcomes with specific results, documented examples from check-in notes, peer feedback summaries, and direct observations the manager recorded during the cycle.
The training exercise: give managers a blank rating form and ask them to fill it out right now, using only what they can prove. Most will discover within five minutes that they have very little concrete evidence. That realisation is the training.
Step two: anchor ratings to observable behaviour, not personality. The distinction is critical.
The wrong version: "Taiwo is not proactive."
The right version: "Taiwo has been asked twice by the team lead to flag project risks before they escalate. Both times, the risk was raised after the escalation had already happened."
The SBI model from the Center for Creative Leadership gives managers the structure they need: Situation (when did this happen?), Behaviour (what specifically did the person do or not do?), Impact (what was the result?). Train managers to use SBI for every rating they give below "meets expectations."
Step three: calibrate across managers. Calibration is where HR prevents one manager's standards from becoming the organisation's problem. It is a structured group review of ratings before they are finalised.
Format: HR facilitates a 90-minute session where managers from across departments review their ratings for comparable roles. For every outlier rating, HR asks the manager to defend it with specific evidence from the cycle.
Calibration does not require everyone to agree. It requires everyone to defend their ratings with evidence. That requirement, consistently applied, reduces leniency bias, halo effect, and recency bias more effectively than any individual training session.
Use this in every calibration session when a rating feels inconsistent:
"I want to make sure I understand this rating. Can you give me one specific example from this cycle, not from previous cycles or general impressions, that best demonstrates why this person is rated [above / below] expectations? A project, a decision, a conversation, anything concrete."
If the manager cannot answer the question, the rating is not ready. Ask them to review their check-in notes and return with evidence.
Most African HR teams cannot run a two-day manager training programme before each review cycle. The training needs to fit inside existing manager meetings.
Talstack's Performance Reviews module supports calibration by giving HR a dashboard view of rating distributions across managers and departments. When one manager rates 90% of their team as "above expectations" while another rates 80% as "meets expectations," the anomaly is visible before it damages trust.
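The anomaly check behind a view like this is simple to reason about. The sketch below is illustrative only, not Talstack's implementation: it counts each manager's rating labels and flags any manager whose team is concentrated in a single category. The 60% threshold and the rating labels are assumptions; tune them to your own scale and expected spread.

```python
from collections import Counter

# Assumed threshold: flag a manager if 60%+ of their team shares one rating.
SKEW_THRESHOLD = 0.6

def flag_skewed_managers(ratings):
    """ratings: dict of manager name -> list of rating labels for their team.

    Returns a dict of flagged managers mapped to (dominant label, share).
    """
    flagged = {}
    for manager, team_ratings in ratings.items():
        counts = Counter(team_ratings)
        label, top_count = counts.most_common(1)[0]
        share = top_count / len(team_ratings)
        if share >= SKEW_THRESHOLD:
            flagged[manager] = (label, round(share, 2))
    return flagged

# Hypothetical data mirroring the example above: one manager rates 90% of
# their team "exceeds", while another shows a normal spread.
ratings = {
    "Manager A": ["exceeds"] * 9 + ["meets"],
    "Manager B": ["meets"] * 4 + ["approaching"] * 3 + ["exceeds"] * 3,
}
print(flag_skewed_managers(ratings))  # only Manager A is flagged
```

A check like this does not decide who is right; it only tells HR which distributions to bring into the calibration session.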
Resistance usually comes from two places: managers who believe they are already fair and do not need training, and managers who feel the evidence requirement is burdensome. For the first group, a rating exercise built on fictional employee descriptions is the most effective tool: it makes the bias visible without making it personal. For the second group, the message is simpler: evidence protects you. If an employee disputes their rating, a manager with documented examples wins the conversation. A manager without them does not.
How many rating categories should a scale use? Three to five is the most useful range. Three categories reduce the ability to differentiate; more than five creates grade-inflation problems, because managers worry about the consequences of any extreme rating. Most African companies using continuous performance management effectively use four: below expectations, approaching expectations, meets expectations, exceeds expectations.
What is calibration, and why does it matter? Calibration is a structured manager discussion to align rating standards before reviews are finalised. It matters because different managers apply different standards to the same rating scale, which makes ratings across teams incomparable and creates perceived unfairness. A calibration session surfaces and corrects those inconsistencies before they reach employees.
Fair ratings do not happen by accident. They happen because HR built a system that requires evidence, trains managers to recognise their own biases, and runs calibration before ratings are locked.
The investment is small: two hours of pre-cycle training and a 90-minute calibration session. The return is a performance review system that employees trust, which is the only version of performance management that produces engagement instead of resentment.