The halo effect inflates ratings for popular employees and hides real gaps. Here is how to reduce it in performance reviews using evidence, structure, and 360 input.
Marketing Lead

April 8, 2026
The halo effect does not announce itself. It does not feel like bias when it is happening. It feels like accurate assessment.
A manager sits down to rate a member of their team. This person is excellent with clients. They are articulate, warm, and always composed in meetings. The manager thinks: "She is just really good." And so she gets a high rating on every competency, including the ones the manager has never actually observed.
That is the halo effect: one outstanding attribute generates positive ratings across unrelated dimensions of performance. It was named by psychologist Edward Thorndike in 1920 after his research on how military officers rated their subordinates. Over a hundred years later, it remains one of the most common and least-detected biases in performance appraisals.
This article explains how to spot it, why it is especially persistent in African team environments, and what HR and managers can do to reduce its influence.
The halo effect appears in different forms depending on the attribute doing the halo's work.
According to Culture Amp's 2025 research on performance review bias, the halo effect causes raters to generalise from a single positive trait to an overall favourable evaluation without evidence of actual performance in unrelated areas. The damage is twofold: the high performer gets an inflated record that does not help them develop, and the manager misses genuine gaps that will eventually affect the team.
In many Nigerian, Kenyan, and Ghanaian organisations, several cultural and structural factors amplify the halo effect beyond what is described in Western HR literature.
The halo effect is most powerful when a manager rates all competencies at once. The first impression seeps into every subsequent rating.
The fix is structural: require managers to rate all employees on one competency at a time before moving to the next. Rate everyone on "goal delivery" before anyone on "collaboration." This forces separate evaluation and interrupts the halo carry-over.
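For HR teams scripting their own review workflow or configuring a tool, the competency-first ordering can be sketched in a few lines. This is a minimal illustration, not Talstack's API; the employee and competency names are made up:

```python
# Sketch: order rating prompts competency-first to interrupt halo carry-over.
# All names below are illustrative placeholders, not real data.

EMPLOYEES = ["Ada", "Kwame", "Ngozi"]
COMPETENCIES = ["goal delivery", "collaboration", "communication"]

def rating_prompts(employees, competencies):
    """Yield (competency, employee) pairs so that every employee is rated
    on one competency before the next competency is opened."""
    for competency in competencies:   # outer loop: one competency at a time
        for employee in employees:    # inner loop: every employee on it
            yield competency, employee

prompts = list(rating_prompts(EMPLOYEES, COMPETENCIES))
# The first three prompts all concern "goal delivery", not one person:
# [('goal delivery', 'Ada'), ('goal delivery', 'Kwame'), ('goal delivery', 'Ngozi')]
```

The design point is simply the loop order: competency on the outside, person on the inside. Reversing the loops reproduces the person-by-person flow in which a first impression bleeds into every subsequent rating.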
A rating of "exceeds expectations" on any competency must be accompanied by a specific example: a project, a decision, a behaviour, a measurable outcome. Not "she is a strong collaborator" but "in the Q2 product launch, she coordinated the three-department handoff without a single escalation and the timeline held."
When managers know they will need to justify every rating with evidence, they slow down and evaluate each dimension on its own merits rather than carrying the glow of a strong first impression forward.
A manager who has a halo impression of an employee will rate them generously. A peer who has experienced the same person's poor planning or inconsistent follow-through will not. Incorporating peer feedback, direct report feedback, and self-assessment directly counters the single-rater halo.
According to Engagedly's research on halo and horns bias, 360-degree reviews mitigate the halo effect by gathering perspectives from multiple evaluators, producing a more balanced assessment than any single rater can provide. The multi-rater design does not eliminate bias, but it distributes it across enough perspectives that the overall picture becomes more accurate.
Talstack's 360 Feedback feature enables HR to run multi-rater feedback cycles continuously, anchored to specific behavioural competencies. Feedback is collected from peers, direct reports, and the manager in one cycle, and the outputs are visible to HR and the employee's line manager before ratings are finalised.
In calibration sessions, train the HR facilitator to ask the halo challenge question for every employee rated above expectations:
"If you removed this person's [strongest attribute] from the equation entirely, how would you rate their performance this cycle across each dimension?"
This question forces the manager to evaluate dimensions independently rather than in the shadow of the standout trait. It regularly surfaces meaningful gaps that the halo had masked.
The halo effect is hardest to counter at year-end when the manager's primary data is the impression they formed months ago. It is easiest to counter when check-in notes, goal tracking records, and peer feedback are accumulated throughout the year.
The manager who has twelve months of documented evidence about an employee is much less likely to rate purely from impression. The manager who is reconstructing the year from memory is rating the story their brain has already simplified.
Table: Halo effect vs. evidence-based rating (comparison)
The halo effect causes one positive trait to elevate ratings across all dimensions. The horns effect does the opposite: one negative trait or past mistake causes the manager to rate the employee poorly across the board, even where their actual performance is strong. Both are forms of the same cognitive error: letting one data point carry all the weight.
Does 360-degree feedback eliminate the halo effect? No, but it significantly reduces it. Peers who work with the employee daily have a different perspective from the line manager. They are less susceptible to the specific halo that the manager carries, though they can develop their own biases. Multiple data points from multiple raters produce a more balanced picture than any single source, making the overall rating more accurate.
Is the halo effect more common in certain roles or industries? It is more common in any environment where one highly visible skill dominates impressions: sales (strong closers get halos), communications (eloquent speakers get halos), and client services (personable employees get halos). In African financial services and fintech companies, where client-facing performance is highly visible to management, the halo effect around communication and relationship-building skills is particularly common.
The halo effect is not malicious. Managers who rate this way are not trying to be unfair. They are using cognitive shortcuts that feel like accurate assessment.
The only way to reduce it is to build evidence requirements and calibration into the rating process itself. A rating system that does not require evidence produces impressions, not assessments. A calibration session that does not challenge outliers produces agreement, not accuracy.
360 feedback is the most effective single change for organisations that can implement it well. Talstack's 360 Feedback module is designed to make this practical at any company size: HR sets up the rater groups, employees and peers respond, and the output is visible before the manager's rating is locked.