What is a competency-based performance review? Learn how it works, what to measure, common mistakes, and a practical rollout plan for Africa.
Marketing Lead
February 26, 2026 • 6 min read
You planned to close the laptop early, but a manager just sent you a performance review draft that says: “Great attitude, very committed, strong team player.”
No examples. No outcomes. No “what good looks like.”
And now you’re stuck. Because you still have to justify pay decisions, promotions, performance improvement plans, and sometimes exits, using language that would not survive a tough conversation with a CFO.
That’s the pain a competency-based approach is supposed to solve.
A competency-based performance review is a performance evaluation method where you assess how someone works (their observable behaviors, skills, and capabilities) against a defined set of competencies required for success in their role, level, or function.
Competencies are typically written as observable behaviors: what someone demonstrably does in the role, not personality labels or intentions.
That “observable behavior” piece is not optional. Many public-sector and professional competency dictionaries explicitly define competencies this way and emphasize behavioral indicators as the performance “goal posts.”
This is where teams get confused fast:
A competency-based review does not replace goal reviews. It fills a different gap: goals can be unclear, moving, or dependent on external constraints. Competencies remain relevant even when the quarter is chaos.
If you want a version that actually works in real African organizations (tight time, uneven manager skill, inconsistent documentation), your “minimum viable” competency review has: a short list of 6–10 competencies, a clear definition with behavioral indicators for each, an evidence requirement for every rating, a calibration step across managers, and a link from each rating to a development action.
In many African teams, goals are real… but messy: targets shift mid-cycle, work depends on other teams, and outcomes hinge on external constraints nobody on the team controls.
That means a pure goal-based review can punish people for things they didn’t control.
Competency-based reviews let you evaluate what the employee did control: their observable behaviors, skills, and capabilities, i.e. how they actually worked while the targets moved.
Competency-based reviews are especially useful when goals are volatile, manager skill is uneven, or documentation is inconsistent.
In regulated environments (financial services, healthcare, energy, telecom), competencies also help standardize “acceptable performance” across teams, which is why so many regulators and public-sector orgs formalize competency frameworks for performance management.
Here’s the part most competitor articles stay too polite about. Competency reviews can become a theater of “soft language” unless you design them tightly.
If your competency is “Be innovative” with no examples, you have created a disagreement machine.
Fix: define it and list what behaviors count as evidence.
If a manager can’t point to an incident, artifact, metric, or customer outcome, the rating is noise.
Fix: require 1–2 evidence bullets per competency.
I’ve seen review forms with 18 competencies. Managers stop reading. They give the same score across the board.
Fix: keep it to what truly predicts success in the role.
One strong trait (confidence, friendliness, eloquence) creates a halo effect that contaminates the entire review.
Fix: use behavioral anchors and ask for specific examples.
A practical method for anchoring ratings in behaviors is the logic behind behaviorally anchored rating scales (BARS), which are designed to reduce ambiguity by tying score levels to example behaviors.
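To make that concrete, here is a minimal sketch of a BARS-style scale in Python. The competency name, levels, and anchor wording are all hypothetical illustrations, not a prescribed framework:

```python
# A behaviorally anchored rating scale (BARS) for one competency.
# Each score level is tied to an example behavior, so two managers
# reading a "3" see the same goal post. Wording is illustrative only.
BARS_COMMUNICATION = {
    1: "Updates are missing or arrive after decisions were already made.",
    2: "Shares updates when asked, but stakeholders chase for status.",
    3: "Proactively shares clear written updates on schedule.",
    4: "Tailors updates per audience; flags risks before they escalate.",
    5: "Creates communication norms that other teams copy.",
}

def anchor_for(score: int, scale: dict[int, str]) -> str:
    """Return the behavioral anchor a given rating must be defended against."""
    if score not in scale:
        raise ValueError(f"Score {score} is not on the defined scale.")
    return scale[score]

print(anchor_for(3, BARS_COMMUNICATION))
# → Proactively shares clear written updates on schedule.
```

The point of the structure is that a rating is never a bare number: every score resolves to an example behavior the manager can compare evidence against.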
Different managers use different internal scales.
Fix: run a calibration session (even a 45-minute one).
If the review ends with a rating and no plan, employees learn one thing: “This is for HR, not for me.”
Fix: connect competencies to development actions (coaching + training + stretch work).
Make three decisions before you write a single competency: which roles you will cover first, what rating scale you will use, and how evidence will be captured.
Constraint acknowledgement #1: if your managers are already stretched, don’t launch a heavy process with long forms. You will get “copy and paste” reviews and everyone will hate it.
Start from role success, not HR theory.
A clean way to build: describe what success looks like in the role, list the observable behaviors that produce it, then group those behaviors into competencies.
Group related behaviors into competency clusters, a structure common across HR and ops teams.
For each competency, define what good looks like at different levels.
One approach: write one sentence per level describing the behavior that distinguishes it from the level below. If you want a public example of this “levels + behavioral criteria” structure, many competency frameworks describe exactly that setup.
Your form should collect: a rating per competency, one to two evidence bullets per rating (an incident, artifact, metric, or customer outcome), and a recommended development action for the next cycle.
Constraint acknowledgement #2: if your documentation culture is weak, evidence prompts are not “nice to have.” They are your protection against future disputes.
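One way to make evidence prompts non-negotiable is to validate the form before it can be submitted. This sketch assumes a hypothetical form structure (one dict per competency, with a rating and a list of evidence bullets), not any specific tool's schema:

```python
def validate_review(form: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the review can be submitted.
    Enforces the rule above: 1-2 evidence bullets per competency rating."""
    problems = []
    for entry in form:
        name, evidence = entry["competency"], entry.get("evidence", [])
        if not 1 <= len(evidence) <= 2:
            problems.append(f"{name}: needs 1-2 evidence bullets, got {len(evidence)}")
        if any(len(bullet.strip()) < 15 for bullet in evidence):
            problems.append(f"{name}: evidence too thin to survive a dispute")
    return problems

# Hypothetical example form with one compliant and one non-compliant entry.
form = [
    {"competency": "Communication", "rating": 4,
     "evidence": ["Led weekly client status notes through the Q3 outage."]},
    {"competency": "Problem solving", "rating": 3, "evidence": []},
]
print(validate_review(form))  # flags the entry with no evidence
```

A check like this turns “please add evidence” from a cultural hope into a hard gate, which matters most where documentation habits are weakest.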
You do not need a 3-hour workshop.
You need: a shared rating scale with behavioral anchors, the competency definitions in front of every manager, and one short calibration session.
Calibration does two jobs: it aligns managers who carry different internal scales, and it makes standards explicit enough to challenge inflated ratings.
Basic calibration agenda (45–60 minutes): pick a small sample of completed reviews, walk each rating against the behavioral indicators, debate the outliers, and record the agreed standard for the next cycle.
Constraint acknowledgement #3: if your culture has high power distance, people may avoid direct feedback. Calibration helps reduce the “everyone is excellent” problem by making standards explicit.
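A useful pre-read for a calibration session is a per-manager rating summary: managers whose averages sit far above or below the group are the ones to discuss first. A minimal sketch, with purely illustrative data and a hypothetical tolerance threshold:

```python
from statistics import mean

# (manager, rating) pairs pulled from submitted reviews; data is illustrative.
ratings = [
    ("Ada", 5), ("Ada", 5), ("Ada", 5),
    ("Ben", 3), ("Ben", 3), ("Ben", 4),
    ("Chi", 4), ("Chi", 3), ("Chi", 3),
]

def manager_averages(pairs):
    """Group ratings by manager and compute each manager's average score."""
    by_manager = {}
    for manager, score in pairs:
        by_manager.setdefault(manager, []).append(score)
    return {m: mean(scores) for m, scores in by_manager.items()}

def drift_flags(pairs, tolerance=0.75):
    """Flag managers whose average deviates from the overall mean by more
    than `tolerance` points: a symptom of 'different internal scales'."""
    overall = mean(score for _, score in pairs)
    return [m for m, avg in manager_averages(pairs).items()
            if abs(avg - overall) > tolerance]

print(drift_flags(ratings))  # → ['Ada']
```

A flagged manager is not automatically wrong; the point is that the calibration conversation starts from their ratings instead of a vague sense that “scores feel inflated.”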
Competency-based reviews are strongest when they feed development quickly.
This is where tooling can matter.
In Talstack, teams usually connect performance reviews to competency tracking and learning actions, so a gap spotted in a review becomes a coaching or training assignment.
The key is the workflow: review → gap → learning action → follow-up check-in. No drama, just iteration.
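That workflow can be expressed as a tiny state machine. The stage names and fields below are hypothetical illustrations of the loop, not Talstack's actual data model:

```python
from dataclasses import dataclass, field

# The loop from the text: review -> gap -> learning action -> follow-up check-in.
STAGES = ["review", "gap", "learning_action", "follow_up"]

@dataclass
class CompetencyGap:
    competency: str
    action: str                      # e.g. coaching, training, stretch work
    stage: str = "review"
    history: list = field(default_factory=list)

    def advance(self):
        """Move the gap to the next stage in the cycle."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("Cycle complete; open a new gap at the next review.")
        self.history.append(self.stage)
        self.stage = STAGES[i + 1]

gap = CompetencyGap("Stakeholder communication",
                    "shadow a senior account manager on client calls")
gap.advance()
gap.advance()
print(gap.stage)  # → learning_action
```

Modeling it this way makes the “no drama, just iteration” point literal: every gap is always at exactly one stage, and a follow-up check-in is a required step, not an afterthought.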
What to say when opening the review conversation:
“I’m going to focus on two things: your outcomes this period, and the competencies that show how you work.
For each competency, I’ll share one example I observed. If you disagree, bring your own example.
My goal is simple: clear strengths, one growth focus, and a plan for the next cycle.”
What to say when announcing the rollout to your team:
“We’re adding competency-based performance reviews so expectations are clearer and feedback is more consistent across teams.
This is not about personality. It’s about observable behaviors that predict success in your role.
You’ll see definitions and examples in the form. You’ll also be asked to add evidence, so the process stays fair.”
What to ask when a rating has no evidence behind it:
“Can you give me one or two examples from this quarter that show why you chose that rating?
It can be a project, a document, a customer situation, or a metric. I just want to anchor it in something concrete.”
Is a competency just another word for a skill? Not exactly.
Some frameworks explicitly separate behavioral and technical competencies, which is useful if you want both in the system.
Can you run this without special software? Yes. Start with a shared document or spreadsheet: competency definitions on one page, ratings and evidence on another.
Then upgrade later. Most teams fail by trying to digitize too early without clarity.
How many competencies should a role have? For most roles: 6–10.
If you need more, it’s usually because your competencies are too broad or overlapping.
Do competency reviews replace OKRs? If you run OKRs, keep them.
Competencies tell you how someone works. OKRs tell you what they achieved.
In Talstack, teams typically run both by linking Goals to Performance Reviews, then using Competency Tracking to see whether the “how” is improving across cycles.
How do you reduce rating bias? Three levers: behavioral anchors on the scale, mandatory evidence per rating, and a calibration session.
What should “meets expectations” mean? It should mean: the employee consistently demonstrates the behavioral indicators defined for their role and level.
If “meets expectations” is unclear, your competency definitions are not finished.
Pick one job family (customer support, sales, HR, operations), define 8 competencies with behavioral indicators, and pilot the review with two managers before you roll it out company-wide.