Peer reviews: how to select reviewers

How to select reviewers for peer reviews without bias: a practical rater mix, guardrails, scripts, and a lightweight selection process HR can run fast.

Oba Adeagbo

Marketing Lead

March 5, 2026

6 Mins read

You’re about to run peer reviews and someone pings you: “Can I pick my reviewers?”
Five minutes later, another message: “Please don’t add Chinedu, he doesn’t like me.”
This is the moment peer reviews either become useful data or office folklore.

If you want peer feedback you can actually use, the fastest win is selecting reviewers with a repeatable rater mix and a few non-negotiable guardrails.

How to select reviewers for peer reviews (without turning it into a popularity contest)

Peer review “reviewers” are simply the people who see someone’s work up close enough to comment on behaviors and outcomes, not personality.

That sounds obvious until you’re selecting raters in a small team where everyone is friends with everyone, or everyone is tired, or both.

Why it matters: the reviewer list quietly decides everything that happens next. Coaching plans. Promotions. Who gets labeled “difficult.” Who quietly stops speaking in meetings.

A solid selection process reduces bias, increases credibility, and keeps your best people from feeling like the whole thing is a social ranking exercise.

The failure modes I see most often when picking reviewers

A lot of peer review problems are not “feedback problems.” They are selection problems.

Here are the common mistakes that create politics fast:

  1. Letting people choose only their friends.
    You get glowing feedback with zero signal. The person feels great, the manager feels confused.
  2. Using the org chart instead of work relationships.
    “Same level” is not the same as “same work.” Peers who never collaborate can only rate vibes.
  3. Picking reviewers based on loud opinions.
    The most vocal teammate becomes the unofficial judge of performance.
  4. Too few raters to protect confidentiality.
    If feedback is traceable, people either sugarcoat or retaliate. Confidentiality becomes theatre.
  5. No rules for conflicts or “known beef.”
    If there’s an unresolved dispute, peer feedback can become the continuation of that dispute.
  6. Treating peer reviews like a one-time event.
    With no ongoing documentation, reviewers rely on recent memory and workplace gossip.
  7. No rater briefing.
    People rate personality, tone, “energy,” or whether someone replies fast on WhatsApp. That’s how you get nonsense.

A useful anchor from research and practice guidance: multi-rater systems need clarity on who gives feedback, what they rate, and who sees the results. The Institute for Employment Studies calls out basics like briefing raters and being clear about who will see the feedback.

Early warning signs you picked the wrong people

  • Reviewers mention attitude more than behaviors.
  • Feedback is overly extreme (all perfect or all terrible).
  • Comments focus on “team spirit” but can’t name work examples.
  • Multiple raters repeat the same phrase (it can signal coordination).
  • People ask, “Will they know it was me?”

If you’re hearing these signals before the cycle even starts, fix selection before you touch the form.

A step-by-step process for selecting reviewers you can defend

This is the process I use when I need peer reviews that hold up in a promotion meeting and still feel fair to employees.

Step 0: Set the rules before names show up

Write down your rules in plain language. Two reasons:

  • It prevents negotiation-by-DM.
  • It protects HR and managers from “special cases” that quietly bias the data.

Minimum rules I recommend:

  • Every person gets a balanced rater mix (not just peers).
  • No one can pick only reviewers who report to them, live with them, or are in a close personal relationship.
  • If there’s an active conflict, HR reviews the rater list.
  • Confidentiality thresholds apply (more on that below).

The USC Center for Effective Organizations has pointed out a predictable risk in 360-style systems: people try to select favorable raters, which undermines accuracy. You need guardrails for that.

Step 1: Map the person’s “work surface area”

Before you list names, list where the work shows up.

Ask:

  • Who receives this person’s outputs?
  • Who depends on their turnaround time?
  • Who sees their collaboration under pressure?
  • Who is impacted by their mistakes?

This is how you avoid picking reviewers who “like the person” but don’t actually rely on the work.

A quick method I use: make 4 buckets.

  • Upstream: who gives inputs to this person
  • Same-stream: who collaborates day to day
  • Downstream: who receives outputs
  • Cross-functional: who works with them when things break

Step 2: Build a balanced rater slate

A strong peer review set is usually 6–10 people depending on org size and role. The point is coverage, not volume.

A practical mix:

  • Manager: 1
  • Peers (same function): 2–3
  • Cross-functional partners: 1–3 (finance, product, sales, compliance, ops depending on role)
  • Direct reports (if the person manages): 2–4
  • Internal customers (optional): 1–2 if the role is service-heavy

The Institute for Employment Studies notes that validity improves when you have multiple returns, and its guidance mentions a minimum number of raters for the same reason.

(Reality check: in smaller African companies, you may not hit ideal numbers for every role. If you cannot, compensate with stronger behavioral questions and manager evidence.)
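
If you track nominations in a sheet or a small script, a quick validation pass catches unbalanced slates before any forms go out. Here is a minimal Python sketch, assuming illustrative category names and the 6–10 total from this step; the counts are placeholders for whatever mix your policy actually sets.

```python
# Minimal sketch: check a proposed rater slate against the mix above.
# Category names, counts, and field names are illustrative, not a standard.

TARGET_MIX = {
    "manager": (1, 1),            # (min, max)
    "peer": (2, 3),
    "cross_functional": (1, 3),
    "direct_report": (0, 4),      # 2-4 if the person manages people
    "internal_customer": (0, 2),  # optional, for service-heavy roles
}

def check_slate(slate):
    """slate: list of dicts like {"name": "...", "category": "peer", "conflict": False}."""
    issues = []
    counts = {category: 0 for category in TARGET_MIX}
    for rater in slate:
        category = rater["category"]
        if category not in counts:
            issues.append(f"unknown category '{category}' for {rater['name']}")
            continue
        counts[category] += 1
        if rater.get("conflict"):
            issues.append(f"conflict flagged for {rater['name']} - route to HR")
    for category, (low, high) in TARGET_MIX.items():
        if not low <= counts[category] <= high:
            issues.append(f"{category}: has {counts[category]}, expected {low}-{high}")
    total = sum(counts.values())
    if not 6 <= total <= 10:
        issues.append(f"total raters {total}, expected 6-10")
    return issues
```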

Step 3: Put guardrails on self-selection (without insulting adults)

I like partial input, not full control.

Option A: Employee nominates, HR approves and adjusts

  • Employee proposes 8–12 names
  • HR/manager selects final 6–10 using rules

Option B: Manager proposes, employee flags conflicts

  • Manager proposes slate
  • Employee can flag conflicts with a short explanation
  • HR makes final call

This reduces “friend-only” lists while still giving the employee psychological safety.

Step 4: Sanity-check for conflicts, retaliation risk, and “friend groups”

This is where you prevent the worst outcomes.

Run a quick check:

  • Any open disputes, grievances, or disciplinary issues?
  • Any romantic or family ties?
  • Any “same clique” dominance (all reviewers from one social circle)?
  • Any high power imbalance (e.g., junior rating senior without anonymity protections)?

If retaliation risk is high, do not force direct peer-to-peer exposure. Use aggregation, higher confidentiality thresholds, or switch to manager-led evidence review for that cycle.

FEMA’s 360 guidance frames multi-rater feedback as a “360-degree perspective” and emphasizes careful administration. In practice, administration includes managing rater risk and confidentiality, not just sending links.

Step 5: Lock confidentiality and minimum rater counts

This is the part many teams skip, then act surprised when comments become vague.

Two simple rules:

  • Do not show individual peer names next to comments.
  • Only show aggregated results when you have enough raters in that category.

Many organizations use thresholds (often 3+ per rater group) to reduce identifiability. If your team is tiny, you may need to combine categories (e.g., “peers and partners”) so no single person is exposed.
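
If you want a concrete picture of how that plays out, here is a minimal sketch, assuming a threshold of 3 and simple numeric scores; groups below the threshold get folded into one combined bucket instead of being shown on their own.

```python
# Minimal sketch: only report a rater group once it clears the confidentiality
# threshold; fold smaller groups into one combined bucket. The threshold of 3
# and the group labels are assumptions - set your own.

from collections import defaultdict
from statistics import mean

MIN_RATERS = 3

def aggregate(responses):
    """responses: list of dicts like {"group": "peer", "score": 4}."""
    by_group = defaultdict(list)
    for response in responses:
        by_group[response["group"]].append(response["score"])

    report, leftovers = {}, []
    for group, scores in by_group.items():
        if len(scores) >= MIN_RATERS:
            report[group] = {"raters": len(scores), "avg": round(mean(scores), 2)}
        else:
            leftovers.extend(scores)  # never shown as their own group

    if len(leftovers) >= MIN_RATERS:
        report["peers and partners (combined)"] = {"raters": len(leftovers), "avg": round(mean(leftovers), 2)}
    elif leftovers:
        report["peers and partners (combined)"] = {"raters": len(leftovers), "avg": None}  # suppressed
    return report
```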

The IES guide explicitly emphasizes being clear about who sees the feedback and briefing raters. That clarity is also where confidentiality rules live. 

Constraint acknowledgement #1: If your company is 25 people and everyone knows everyone, true anonymity is hard. Don’t pretend. Instead, keep peer feedback developmental (not directly tied to pay) and combine it with manager evidence.

Step 6: Brief raters so they rate behavior, not vibes

Most “politics” is actually lazy rating.

Your rater brief can be 5 bullets:

  • Rate observable behaviors.
  • Use one example per negative point.
  • Focus on the review period, not ancient history.
  • Avoid personality labels (lazy, arrogant, dramatic).
  • Keep it about work impact.

IES flags that raters should be briefed and prepared, which aligns with this. 

Constraint acknowledgement #2: People are busy. If you don’t make the brief easy, they will rush, and rushed feedback tends to be emotional and vague.

Examples and templates you can steal

Rater mix examples by role type

Ops lead in a logistics company (Africa context)

  • Manager: Head of Operations
  • Peers: Warehouse lead, Fleet supervisor
  • Cross-functional: Finance partner (cost control), Customer support lead (complaints), Sales lead (service level)
  • Direct reports: 2 shift supervisors

HRBP in a regulated company

  • Manager: HR director
  • Peers: Talent acquisition lead, L&D lead
  • Cross-functional: Compliance, Legal, Payroll
  • Internal customers: 2 business unit managers

Engineering team lead in a fintech

  • Manager: Engineering manager
  • Peers: 2 senior engineers, QA lead
  • Cross-functional: Product manager, DevOps/SRE
  • Direct reports: 3 engineers (aggregate only)

Table: Who to include vs exclude when selecting reviewers

Same-team peers
  • Include when: they co-own work, share deadlines, or depend on each other weekly
  • Avoid when: they rarely collaborate or the relationship is purely social
  • Why it matters: they can speak to day-to-day execution and collaboration

Cross-functional partners
  • Include when: they receive outputs (finance, product, sales, compliance, customer support)
  • Avoid when: they only see outcomes, not behaviors, and have no context
  • Why it matters: they test reliability, responsiveness, and handoff quality

Direct reports
  • Include when: the person manages people and you can protect confidentiality with enough raters
  • Avoid when: the team is too small to anonymize, or there is fear of consequences
  • Why it matters: they see coaching, clarity, fairness, and decision habits

Internal customers
  • Include when: the role is service-heavy and interactions are frequent
  • Avoid when: they only interacted once or during a crisis
  • Why it matters: they reveal consistency, not just peak moments

“Work friends”
  • Include when: they also collaborate deeply and can cite examples
  • Avoid when: they are selected mainly for loyalty
  • Why it matters: friend-only feedback inflates ratings and destroys trust

A lightweight rater nomination form (what you actually ask)

  • Name
  • Relationship (peer, cross-functional, direct report, internal customer)
  • What work do they see? (project, workflow, deliverable)
  • How often do you interact? (weekly, monthly, quarterly)
  • Any conflict of interest? (yes/no)

Constraint acknowledgement #3: Documentation is usually weak. If you don’t require “what work do they see,” selection becomes social.
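
A tiny screening pass on those form answers keeps the “what work do they see” field honest. Here is a sketch, assuming the field names above and an arbitrary minimum length; tune the rules to your own form.

```python
# Minimal sketch: screen a nomination so work visibility is never left blank.
# Field names mirror the form above; the 10-character rule is an arbitrary floor.

REQUIRED_FIELDS = ("name", "relationship", "work_seen", "frequency", "conflict_of_interest")

def screen_nomination(nomination):
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in nomination or str(nomination[field]).strip() == "":
            problems.append(f"missing '{field}'")
    if nomination.get("conflict_of_interest") is True:
        problems.append("conflict of interest declared - route to HR for review")
    if len(str(nomination.get("work_seen", "")).strip()) < 10:
        problems.append("'work_seen' needs a concrete project, workflow, or deliverable")
    return problems

# Example: screen_nomination({"name": "A. Bello", "relationship": "peer",
#     "work_seen": "month-end reconciliation handoff", "frequency": "weekly",
#     "conflict_of_interest": False}) returns an empty list.
```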

A note on tools (so this doesn’t live in spreadsheets forever)

If you’ve ever tried running peer reviews in a shared sheet, you already know how it ends: version conflicts, leaking comments, and too much manual follow-up.

This is where a system helps. I’ve seen HR teams in Africa move faster when they use a platform that:

  • lets you set reviewer rules,
  • collects 360 Feedback in one place,
  • and gives Analytics on response rates and rater group coverage.

Talstack’s Performance Reviews and 360 Feedback modules fit that workflow: you can run structured cycles, collect peer input, and review results without passing files around.

Quick Checklist (use this before you launch)

  • You wrote selection rules before collecting names
  • Each person has a balanced rater mix (not one category)
  • Raters are chosen for work exposure, not closeness
  • Conflicts and retaliation risks were reviewed by HR
  • Confidentiality thresholds are set for each rater group
  • Raters received a short behavior-based briefing
  • Managers know how peer feedback will be used (development vs decisions)
  • You have a plan for non-responders (2 reminders, then replace)

Copy-paste scripts

Script 1: Message to employees (how rater selection works)

Subject: Peer review rater selection for this cycle

Hi team,
For this peer review cycle, you can nominate reviewers, but you won’t fully control the final list.

Here’s the rule: reviewers must have direct visibility into your work during this review period.
Please nominate 8–12 people across peers and cross-functional partners. If you manage people, include your direct reports too.

HR and your manager will confirm the final list to keep the mix balanced and protect confidentiality.

Thanks.

Script 2: Message to raters (behavior-based expectations)

Hi, thanks for being a reviewer.
Please keep feedback specific and work-based.

Good feedback: “When we ran the month-end close, she flagged reconciliation issues early and gave options.”
Not helpful: “She’s difficult.”

If you share a concern, include one example from this review period and the impact on work.

Script 3: Manager script for a pushback conversation

I hear you. You’re worried about who is reviewing you.
The selection isn’t about liking you. It’s about who sees your work.

Here’s what we can do: you propose names, then we balance the list so it includes people who collaborate with you and people who receive your outputs.
If there’s someone you believe is a conflict risk, tell me why and HR will review it.

FAQs

How many reviewers should someone have in peer reviews?

Enough to cover the person’s real work relationships and protect confidentiality. Many guides recommend multiple raters for validity and confidentiality protection. The IES guide discusses minimum rater counts as part of validity considerations. 

Should employees pick their own reviewers?

They can nominate, but full control is risky because people tend to select favorable raters. USC’s Center for Effective Organizations highlights this gaming risk in multi-rater feedback systems.

Should peer reviews be anonymous?

If they aren’t, you’ll often get either sugarcoating or fear-based silence. If you can’t truly anonymize due to team size, be honest and keep peer feedback developmental, not a pay lever.

What if the team is too small to anonymize direct reports?

Combine categories (peers and partners together), use fewer open-text prompts, and rely more on manager evidence. Or skip upward feedback for that cycle if it creates real risk.

How do I prevent “revenge ratings”?

Selection guardrails, conflict checks, and rater briefing reduce it. Also, use aggregation rules so one person can’t dominate outcomes. FEMA’s 360 guidance emphasizes careful administration of multi-rater feedback processes.

Should we include cross-functional reviewers?

Yes, when they receive the person’s outputs or work with them in critical workflows (handoffs, escalations, approvals). Cross-functional raters are often where you see reliability and responsiveness clearly.

Can we use peer feedback for promotions and pay?

You can, but it raises the political temperature. If you do, tighten your process: stronger rater selection controls, stronger confidentiality thresholds, and manager evidence as the decision anchor.

One next step

Take 30 minutes and build your rater selection rules as a one-page policy (bullets only). Then run one pilot cycle with 10 people and measure two things: response rate by rater group and how often comments include concrete examples. That’s the quickest way to see if your reviewer selection is producing signal or noise.
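
If you want a starting point for those two measurements, here is a minimal sketch; the “concrete example” heuristic (looking for time, project, or number words) is a rough assumption, not a validated rule.

```python
# Minimal sketch of the two pilot metrics: response rate by rater group and the
# share of comments that include a concrete example. The keyword heuristic is
# an assumption - refine it or tag comments by hand.

import re
from collections import defaultdict

EXAMPLE_HINTS = re.compile(r"\b(when|during|project|deadline|handoff|q[1-4]|\d)", re.IGNORECASE)

def pilot_metrics(invites, comments):
    """invites: list of {"group": "peer", "responded": True}
       comments: list of free-text strings from submitted reviews."""
    sent = defaultdict(int)
    returned = defaultdict(int)
    for invite in invites:
        sent[invite["group"]] += 1
        returned[invite["group"]] += int(bool(invite["responded"]))
    response_rate = {group: round(returned[group] / sent[group], 2) for group in sent}

    with_examples = sum(1 for comment in comments if EXAMPLE_HINTS.search(comment))
    example_share = round(with_examples / len(comments), 2) if comments else None

    return {"response_rate_by_group": response_rate, "share_with_examples": example_share}
```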
