Great interview feedback rates each candidate against specific role requirements, cites evidence for every score, and ends with a clear hire or no-hire recommendation. Below you’ll find 15+ interview feedback examples and copy-ready templates covering strong hires, rejections, on-hold decisions, panel debriefs, skills assessments, and cultural fit evaluations.

Why templates? Because most teams don’t have a consistent feedback process. According to Starred’s 2024 Candidate Experience Benchmark Report, based on 1.5 million candidate experiences, 48% of rejected candidates don’t understand why they weren’t selected - which is precisely why structured templates exist. According to PwC’s Future of Recruiting research, 78% of candidates want specific feedback when they’re rejected. Yet Greenhouse’s 2024 Candidate Experience Report found 61% of job seekers are ghosted after interviews. That gap hurts your employer brand and your pipeline. Candidates who don’t hear back share negative experiences with their networks, and Greenhouse’s data shows 20% of candidates rejected an offer due to a poor interview experience.

Walk through the research below on why feedback format matters, then copy the template that fits your interview outcome.

TL;DR:

  • Structured feedback predicts performance. Structured interviews are 34% more predictive of job success than unstructured ones (Schmidt & Hunter, 85-year meta-analysis).
  • Candidates want specifics, not silence. 78% want feedback when rejected (PwC), yet 61% get ghosted after interviews (Greenhouse, 2024).
  • Good feedback cites evidence. Rate each candidate against role requirements, reference specific answers or artifacts, and end with a clear hire/no-hire recommendation (not “I liked them”).
  • Silence costs revenue. 51% of people are less likely to buy from a brand after a bad hiring experience (iCIMS, 2024), turning broken interview processes into lost customers.
  • 15+ templates cover every outcome. Strong hires, rejections, on-hold decisions, panel debriefs, and technical scorecards are all included below as copy-ready formats.

Why Does Interview Feedback Quality Determine Hiring Outcomes?

Recruiters who share post-interview feedback see a 126% increase in candidate referrals, according to ERE’s Talent Board CandE 2024 Benchmark Research, based on 230,000+ candidate responses. That single number explains why post-interview evaluation isn’t administrative busywork - it’s a pipeline multiplier. Greenhouse’s 2024 Candidate Experience Report found that 79% of candidates would reapply to a company if they received feedback after a rejection. Closing the loop on a no-hire decision directly shapes your future applicant pool.

Without feedback, the data gets worse. Resentment among applicants hit an all-time high in 2024, with 15% of North American candidates reporting strong negative feelings toward employers, according to the same Talent Board research. Tech professionals saw even higher levels - 28% - the highest in 13 years of tracking.

What does resentment look like in practice? It hits your bottom line. According to Starred’s 2024 analysis, 72% of candidates share negative hiring experiences with their professional network. Between 41% and 50% of dissatisfied candidates refuse to do business with the company afterward. And 25% actively discourage others from buying the company’s products. iCIMS’s 2024 research puts a sharper point on the revenue risk: 51% of people are less likely to be a consumer of a brand following a negative job application experience - a direct line from broken hiring practices to lost customers.

What Happens When Candidates Don't Get Feedback

The resentment numbers above show the cost of silence - and companies that invest in feedback see the opposite effect. CandE Award-winning companies gave 13% more feedback to finalist candidates than average, per Talent Board data. Their willingness-to-refer NPS was 23 versus 13 for other companies - a 56% advantage. That matters because employers hire 20-40% of their workforce from referrals.

Here’s the question every recruiter should ask: if your interview process generates candidates who feel resentful, how many qualified applicants are you losing before they ever apply? The fix doesn’t require overhauling your entire hiring process. It starts with having a consistent, structured way to document and communicate interview outcomes. When you pair structured feedback with AI candidate screening tools that evaluate candidates consistently, the result is a hiring pipeline where every interaction builds your employer brand instead of eroding it.

Talking to our customers, the pattern is consistent: teams using structured feedback templates fill roles faster with less rework. Based on Pin’s data, recruiters who source candidates through AI matching before the interview stage write more confident, evidence-based feedback. By the time a candidate arrives, they’ve already been screened against role requirements.

What we’ve noticed is that the feedback problem is often a sourcing problem in disguise. When an interviewer writes “not quite the right fit” instead of citing evidence, it’s frequently because that candidate never should have reached the interview stage. Pin’s matching precision means 35% fewer interviews per hire - the applicants who do show up are already strong fits worth detailed evaluation. After working with hundreds of recruiting teams, we see one consistent trait in the best feedback cultures: they treat the scorecard as a hiring commitment, not a formality.

What Separates Good Interview Feedback from Bad?

A bad hire costs up to 30% of the employee’s first-year earnings, according to U.S. Department of Labor estimates. For an $80,000 role, that’s a $24,000 loss. CareerBuilder’s employer survey found that nearly 75% of employers admit to having hired the wrong person. Most of those bad hires trace back to vague feedback that doesn’t give hiring managers enough information to make sound decisions. Structured self-evaluation examples apply the same discipline on the employee side - giving managers and employees a shared vocabulary for discussing performance after the hire is made.

Here’s what separates useful feedback from the kind that leads to costly mistakes:

Element | Bad Feedback | Good Feedback
Specificity | “Seemed smart” | “Solved the system design problem using a distributed caching approach and explained tradeoffs clearly”
Evidence | “Good culture fit” | “Referenced three examples of cross-team collaboration, aligned with our transparency value”
Rating | “Thumbs up” | “Technical: 4/5, Communication: 3/5, Problem-solving: 5/5”
Concerns | “Not sure about this one” | “Limited Kubernetes experience, but strong Docker skills and expressed eagerness to learn”
Recommendation | “I liked them” | “Strong hire for senior backend. Weakest area is front-end, but the role is 90% backend”

Bad feedback is vague, unstructured, and impossible to compare across candidates. Hiring managers are left guessing and the door to unconscious bias opens. Good feedback is specific, evidence-based, and tied to the role’s actual requirements. Hiring committees get concrete data to work with.

Once every interviewer uses the same framework, fair comparison across multiple conversations becomes possible. Legal defensibility matters here too - structured interview feedback creates a paper trail showing each candidate was evaluated on the same criteria. Organizations using a skills-based hiring approach consistently report better outcomes because their feedback focuses on demonstrated abilities rather than gut feelings - especially for competencies like strategic thinking, where leadership interview questions provide the behavioral evidence that generic impressions miss.

How Do You Write Structured Interview Feedback?

Schmidt and Hunter’s landmark meta-analysis of 85 years of personnel research found that structured interviews have a predictive validity of .51 for job performance, compared to .38 for unstructured interviews. That means structured interviews are roughly 34% more effective at predicting whether someone will actually succeed in the role. Consistency is the key difference.

Every candidate answers the same questions and gets scored on the same criteria. Feedback follows the same format. Gone is the “I just had a feeling” problem that derails unstructured processes.

Interview feedback phrases matter as much as the format your team uses. Phrases anchored to specific observations - “walked through the deployment pipeline and identified three failure points,” “asked two clarifying questions before answering” - hold up in review. Vague phrases like “seemed technically strong” don’t. Evaluating candidates after each interview with behavioral interview questions and behavior-based language is the difference between defensible hiring decisions and gut-feel guesses.

Those legal stakes are real. Williamson, Campion, and colleagues analyzed 99 employment litigation outcomes in the International Journal of Selection and Assessment. Nearly 60% of discrimination lawsuits involving interviews were based on unstructured formats. Structured interviews accounted for just 6% of cases.

[Chart: Interview discrimination lawsuits by format - unstructured interviews account for nearly 60% of cases, structured interviews for just 6%]

Here’s a five-step framework for writing feedback that’s specific, defensible, and useful:

  1. Start with the role’s requirements, not your impressions. Pull up the job requirements and score the candidate against each one. Don’t start with “I liked this person.” Start with “Here’s how they performed against what the role demands.”
  2. Use a consistent rating scale. A 1-5 scale works for most teams. Define what each number means: 1 = does not meet requirements, 3 = meets expectations, 5 = significantly exceeds.
  3. Cite specific examples. Every rating needs evidence. “Communication: 4/5” means nothing alone. “Communication: 4/5 - explained database migration clearly, asked three clarifying questions about scale” tells the hiring manager something useful.
  4. Note concerns with context. Don’t just flag weaknesses. “No React experience” is less helpful than “No React experience, but has 4 years of Vue.js and transitioned between frameworks before. Risk is moderate.”
  5. Make a clear recommendation. End every feedback form with one of four options: Strong Hire, Hire, No Hire, or Strong No Hire. If you can’t decide, default to No Hire and explain why.
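
The five steps above can be enforced in software rather than left to interviewer discipline: reject any scorecard with an out-of-range rating, a rating without evidence, or a recommendation outside the four allowed options. Here is a minimal sketch in Python, with hypothetical field and class names (not any particular ATS's schema):

```python
from dataclasses import dataclass

# The four allowed outcomes from step 5 of the framework.
RECOMMENDATIONS = {"Strong Hire", "Hire", "No Hire", "Strong No Hire"}

@dataclass
class CriterionScore:
    name: str      # e.g. "Communication"
    rating: int    # 1 = does not meet, 3 = meets, 5 = significantly exceeds (step 2)
    evidence: str  # the specific observation backing the rating (step 3)

    def __post_init__(self):
        if not 1 <= self.rating <= 5:
            raise ValueError(f"{self.name}: rating must be on the 1-5 scale")
        if not self.evidence.strip():
            raise ValueError(f"{self.name}: every rating needs cited evidence")

@dataclass
class InterviewFeedback:
    candidate: str
    role: str
    scores: list         # one CriterionScore per role requirement (step 1)
    concerns: str        # weaknesses noted with context (step 4)
    recommendation: str  # must be one of the four options (step 5)

    def __post_init__(self):
        if self.recommendation not in RECOMMENDATIONS:
            raise ValueError("recommendation must be one of the four options")

fb = InterviewFeedback(
    candidate="Jane Doe",
    role="Senior Backend Engineer",
    scores=[CriterionScore(
        "Communication", 4,
        "Explained the database migration clearly, asked three clarifying questions about scale",
    )],
    concerns="No React experience, but 4 years of Vue.js; moderate risk",
    recommendation="Hire",
)
```

A record like `fb` passes validation; a rating without evidence, or a recommendation like “Maybe,” raises immediately - the enforcement step most ad-hoc feedback processes lack.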

Whether you use AI recruiting tools to manage your pipeline or run a fully manual process, this framework applies. The templates below follow this exact structure.


Templates for Positive Hiring Decisions

Only 25% of organizations feel highly confident in their ability to measure quality of hire, according to LinkedIn’s 2025 Future of Recruiting report. Structured feedback for positive decisions is where that confidence starts. These templates cover the two most common positive outcomes: a strong hire recommendation and a cultural fit assessment. Copy them, customize the bracketed fields, and submit within 24 hours.

Template 1: Strong Hire Recommendation

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name] | Interview Round: [Phone Screen / Technical / Final]
Overall Recommendation: Strong Hire

Technical/Role-Specific Skills (Rating: _/5): [Candidate] demonstrated [specific competency] during [specific exercise or question]. For example, [concrete example of what they did, said, or produced]. This maps directly to the role’s requirement for [specific job function].

Problem-Solving and Critical Thinking (Rating: _/5): When presented with [challenge or scenario], [Candidate] [specific approach they took]. They [positive behavior - asked clarifying questions, broke the problem into steps, identified edge cases, proposed multiple solutions].

Communication and Collaboration (Rating: _/5): [Candidate] [specific communication strength]. During [portion of interview], they [example - explained technical concepts to a non-technical interviewer, asked insightful questions about team dynamics, articulated their thought process clearly].

Concerns and Risks: [List specific concerns with context. Example: “Limited experience with [technology/skill], but demonstrated rapid learning ability based on [evidence]. Estimate 2-3 months to reach full proficiency.”]

Summary: I recommend advancing [Candidate] to [next stage / offer]. Their strongest qualifications are [top 2 strengths]. Primary risk is [concern], which is mitigable because [reason].

Template 2: Cultural Fit Assessment

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name] | Focus Area: Values and Team Alignment

Company Values Alignment (Rating: _/5):

  • [Value 1]: [Candidate] demonstrated this through [specific example from interview - a story they told, a decision they described, how they responded to a scenario question].
  • [Value 2]: [Specific evidence or lack thereof. Be concrete - “Described choosing transparency over convenience when reporting a missed deadline to a client.”]

Work Style and Preferences:

  • Collaboration approach: [What did the candidate say about how they work with others? Cite specific answers.]
  • Conflict resolution: [How did they describe handling disagreements? What example did they give?]
  • Autonomy vs. structure: [Did they express a preference for independence or clear direction? How does this match the team’s operating style?]

Team Dynamics Fit: Based on the team’s current composition and working style, [Candidate] would [enhance/complement/potentially conflict with] the team because [specific reason]. [Note any diversity of thought or perspective they’d bring.]

Red Flags: [List any concerns about values misalignment. Be specific - not “bad attitude” but “expressed frustration about collaborative code reviews, which are central to our engineering process.”]

Summary: Cultural alignment is [strong/moderate/weak]. [One-sentence recommendation with evidence.]

What Should Rejection Feedback Look Like?

Rejections are where most interview processes fall apart. PwC’s research shows that 39% of rejected candidates specifically want to hear from someone they interviewed with - not a form email from an ATS. These templates help you document rejections with enough detail for internal records and provide candidates with useful feedback when you follow up.

Template 3: Soft Reject (Promising but Not the Right Fit Now)

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name]
Overall Recommendation: No Hire - Consider for Future Roles

Strengths Observed: [Candidate] showed strong [specific skill or quality]. Their experience with [relevant background] is genuinely impressive, particularly [specific example from interview].

Gap Analysis: The primary gap is [specific skill or experience missing]. This role requires [specific requirement], and [Candidate]’s experience in this area is [description of current level]. This isn’t a reflection of their talent - it’s a timing issue. With [timeframe] of [specific development], they’d be a strong contender.

Candidate-Facing Feedback (if sharing): “Thank you for your time interviewing for [Role]. We were impressed by your [specific strength]. We’ve decided to move forward with a candidate whose [specific qualification] more closely matches our current needs. We’d like to stay in touch for future opportunities in [area] - would you be open to that?”

Internal Notes: Add to talent pipeline for [role type/department]. Flag for re-outreach in [timeframe]. [Any other context for future recruiters.]

Template 4: Clear No-Hire

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name]
Overall Recommendation: No Hire

Assessment Against Requirements:

  • [Requirement 1] (Rating: _/5): [Specific observation. What did the candidate demonstrate or fail to demonstrate?]
  • [Requirement 2] (Rating: _/5): [Specific observation.]
  • [Requirement 3] (Rating: _/5): [Specific observation.]

Key Gaps: [Candidate] did not meet the minimum bar for [specific requirements]. During [specific moment], [describe what happened - struggled with a foundational concept, couldn’t articulate relevant experience, gave answers that contradicted their resume].

Candidate-Facing Feedback (if sharing): “Thank you for interviewing for [Role]. After careful evaluation, we’ve decided not to move forward. We encourage you to [specific, constructive suggestion - e.g., ‘build more experience with distributed systems’ or ‘practice explaining your technical decisions to non-technical stakeholders’].”

Internal Notes: [Any context about the rejection that would help future interviewers - was the candidate misrepresented by their resume? Was there a cultural concern?]

Pin’s AI scans 850M+ profiles to find candidates who match your role requirements before they ever reach the interview stage - try it free.

Templates for On-Hold, Panel Debrief, and Skills Scorecard

PwC’s research found that 67% of candidates gave up pursuing a role because the process took too long. On-hold decisions, panel debriefs, and skills assessments are the scenarios where delays pile up fastest. These templates keep those gray-area decisions moving.

Template 5: On-Hold Decision

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name]
Overall Recommendation: Hold - Pending [Reason]

Current Assessment: [Candidate] meets [X of Y] core requirements. Their [specific strengths] are competitive with other candidates in the pipeline. However, [specific reason for hold - waiting on another finalist, budget approval pending, team restructure, need to validate a specific skill].

Hold Conditions:

  • What needs to happen before a decision: [Specific trigger - “Complete final-round interviews with 2 remaining candidates” or “Confirm headcount approval from VP Engineering”]
  • Timeline: Decision by [date]. If no decision by [date + 1 week], default to [action].
  • Candidate communication: [Who will update the candidate, and when? What will they say?]

Risk of Waiting: [Candidate] is actively interviewing at [companies if known]. Likelihood of losing them: [low/medium/high]. If high, consider [accelerated timeline or interim offer].

Template 6: Panel Debrief Summary

Candidate: [Name] | Position: [Role Title] | Panel Date: [Date]
Panel Members: [Names and roles of each interviewer]
Debrief Facilitator: [Name]

Individual Ratings (collected before group discussion):

Interviewer | Technical | Communication | Problem-Solving | Culture | Overall
[Name 1] | _/5 | _/5 | _/5 | _/5 | [Hire/No Hire]
[Name 2] | _/5 | _/5 | _/5 | _/5 | [Hire/No Hire]
[Name 3] | _/5 | _/5 | _/5 | _/5 | [Hire/No Hire]

Areas of Agreement: [Where did all panelists align? What strengths or concerns did everyone identify?]

Areas of Disagreement: [Where did panelists differ? Who rated what differently and why? How was the disagreement resolved?]

Consensus Decision: [Strong Hire / Hire / No Hire / Strong No Hire]
Dissenting Opinions: [If any panelist disagrees with the consensus, document their reasoning here.]

Next Steps: [Specific actions - extend offer by [date], schedule additional interview, send rejection]
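
Collecting ratings before the group discussion makes disagreement measurable rather than anecdotal. As an illustration (a hypothetical helper, not part of any tool mentioned here), a facilitator could average each criterion and flag any where raters diverge by 2 or more points:

```python
def debrief_summary(ratings, gap=2):
    """ratings: {criterion: {interviewer: 1-5 rating}}. Flags wide spreads."""
    summary = {}
    for criterion, by_rater in ratings.items():
        values = list(by_rater.values())
        summary[criterion] = {
            "average": sum(values) / len(values),
            # A spread of `gap` or more means panelists saw different candidates;
            # surface it under "Areas of Disagreement" rather than averaging it away.
            "needs_discussion": max(values) - min(values) >= gap,
        }
    return summary

panel = {
    "Technical":     {"Name 1": 4, "Name 2": 4, "Name 3": 2},
    "Communication": {"Name 1": 3, "Name 2": 4, "Name 3": 3},
}
summary = debrief_summary(panel)
```

With these sample numbers, Technical (ratings 4, 4, 2) gets flagged for discussion while Communication (3, 4, 3) does not - exactly the distinction the “Areas of Disagreement” section is meant to capture.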

Template 7: Skills Assessment Scorecard

Candidate: [Name] | Position: [Role Title] | Assessment Type: [Take-home / Live coding / Case study / Presentation]
Evaluator: [Your Name] | Date: [Date]

Skill Area | Weight | Rating (1-5) | Weighted Score | Evidence
[Core Skill 1] | 30% | _/5 | _ | [Specific example from assessment]
[Core Skill 2] | 25% | _/5 | _ | [Specific example]
[Core Skill 3] | 20% | _/5 | _ | [Specific example]
[Soft Skill 1] | 15% | _/5 | _ | [Specific example]
[Soft Skill 2] | 10% | _/5 | _ | [Specific example]

Total Weighted Score: _/5.0
Minimum Threshold for Hire: 3.5/5.0
Recommendation: [Meets threshold / Below threshold]

Notable Observations: [Anything that the scorecard doesn’t capture - how the candidate handled time pressure, whether they asked good questions, how they responded to hints or feedback during the assessment.]
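
The arithmetic behind this scorecard is a simple weighted average. Here is a sketch in Python; the skill names, weights, and the 3.5 threshold are the template’s illustrative values, not fixed rules:

```python
def weighted_score(rows):
    """rows: list of (skill, weight, rating) tuples; weights must sum to 100%."""
    total_weight = sum(weight for _, weight, _ in rows)
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError(f"weights must sum to 100%, got {total_weight:.0%}")
    return sum(weight * rating for _, weight, rating in rows)

rows = [
    ("Core Skill 1", 0.30, 4),
    ("Core Skill 2", 0.25, 3),
    ("Core Skill 3", 0.20, 5),
    ("Soft Skill 1", 0.15, 4),
    ("Soft Skill 2", 0.10, 3),
]

THRESHOLD = 3.5
# 0.30*4 + 0.25*3 + 0.20*5 + 0.15*4 + 0.10*3 = 3.85
score = weighted_score(rows)
verdict = "Meets threshold" if score >= THRESHOLD else "Below threshold"
```

The validity check matters in practice: a scorecard whose weights quietly sum to 90% produces scores that can’t be compared across candidates.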

Technical Interview Feedback Examples

Technical interviews evaluate hard skills under pressure - which makes vague feedback especially costly. When an engineering hire goes wrong, the U.S. Department of Labor estimates the cost at up to 30% of first-year salary. These technical interview feedback examples give engineers and technical interviewers a consistent format to document what they actually observed, not just whether they “liked” the candidate.

Template 8: Technical Interview Evaluation

Candidate: [Name] | Position: [Role Title] | Interview Date: [Date]
Interviewer: [Your Name] | Type: [Coding / System Design / Architecture Review]
Overall Recommendation: [Strong Hire / Hire / No Hire / Strong No Hire]

Problem-Solving Approach (Rating: _/5): When given [specific problem], the candidate [describe their approach - how they broke it down, asked clarifying questions, handled edge cases, communicated their thinking aloud].

Technical Depth (Rating: _/5): The candidate demonstrated [strong/moderate/weak] command of [specific technologies tested]. For example: [concrete evidence - “explained indexing trade-offs correctly,” “missed the O(n²) complexity until prompted,” “proposed clean separation of concerns in the class design”].

Code Quality (if applicable) (Rating: _/5): Code was [readable/unclear], [well-structured/disorganized], and [tested/untested]. [Specific observations about naming, edge case handling, or test coverage.]

Communication Under Pressure (Rating: _/5): [How did the candidate handle being stuck? Did they think aloud, ask questions, or shut down? Behavior when facing difficulty is often more informative than whether they solved the problem.]

Summary: [One to two sentences with your recommendation and the primary reason. Example: “Strong Hire - solid systems knowledge and excellent communication of trade-offs. Main gap is Redis experience, moderate risk for this role.”]

How Should You Deliver Interview Feedback to Candidates?

Greenhouse’s 2024 Candidate Experience Report found that 42% of candidates said stronger recruiter communication was their top priority. Delivering feedback well is as important as writing it well. Here’s how to do it without burning bridges.

Timing matters more than you think. Send internal feedback within 24 hours of the interview while details are fresh. Communicate the decision to candidates within 3-5 business days. The longer you wait, the more resentment builds - and the more likely you are to lose a strong candidate to a faster-moving competitor. Starred’s 2024 Candidate Experience Benchmark Report found that 64% of candidate withdrawals cite poor communication as the primary reason - meaning the damage often happens well before the rejection itself.

Be specific but not granular. Candidates want to know why they weren’t selected. They don’t need a line-by-line scorecard breakdown. Focus on the 1-2 primary reasons and frame them constructively: “We were looking for deeper experience in X” rather than “You failed the X portion.”

Never compare candidates directly. Don’t say “We found someone better.” Say “We moved forward with a candidate whose background more closely matched our specific needs for this role.” The difference is subtle but significant - one feels personal, the other feels procedural.

Gather candidate feedback on your process too. The best teams don’t just deliver feedback - they ask for it. A short two-question follow-up (“How clear was our communication?” / “What would have improved your experience?”) turns candidate feedback on your recruitment process into a continuous improvement loop.

Offer to stay connected. For soft rejects, always ask if the candidate is open to being contacted for future roles. This keeps your talent pipeline warm. For recruiting teams that want a better pipeline, Pin is the best AI sourcing platform for building warm candidate relationships. Rated highest on G2 (4.8/5) among AI recruiting platforms, Pin aggregates 850M+ profiles from professional networks, GitHub, Stack Overflow, and the broader web. Multi-channel outreach delivers 5x better response rates than industry averages - but that number only matters if the candidates reaching your interview stage are already the right ones.

As Miles Randle, Head of People and Talent at Flip CX, put it: “As a small people and talent team, we don’t have a ton of time to spend hours sourcing and messaging. Pin has made it possible for us to focus on the people side of things!” That “people side” is exactly what good feedback delivery is about - treating candidates as humans, not tickets.

If you’re spending more time sourcing and scheduling than actually talking to candidates, AI interview scheduling tools can take the administrative work off your plate so you can focus on the conversations that matter.


Frequently Asked Questions

What are good interview feedback examples?

Good interview feedback is specific, evidence-based, and tied to role requirements. For a technical role: “Communication: 4/5 - explained database migration clearly and asked three clarifying questions about scale.” For cultural fit: “Referenced three examples of cross-team collaboration that align with our transparency value.” For a rejection: “No React experience, but 4 years of Vue.js and a demonstrated ability to transition between frameworks - moderate risk.” Every strong feedback example rates the candidate against what the role actually demands, then ends with a clear hire or no-hire recommendation rather than a gut feeling.

How quickly should recruiters send interview feedback?

Internal feedback should be submitted within 24 hours of the interview. Candidate-facing decisions should be communicated within 3-5 business days. Greenhouse’s 2024 data shows 61% of candidates are ghosted after interviews, so timely communication alone puts you ahead of most employers.

Are employers legally required to give interview feedback?

In the US, employers aren’t legally required to provide feedback to rejected candidates. However, structured feedback protects employers - unstructured interviews account for nearly 60% of interview-related discrimination lawsuits, compared to just 6% for structured formats (Williamson et al., 1997). Documenting consistent, criteria-based feedback is your strongest defense.

What are some examples of good feedback?

Good feedback examples are grounded in observable behavior, not impressions. Strong hire: “Solved the system design problem using distributed caching, explained trade-offs clearly, and proactively identified three edge cases.” Constructive rejection: “Strong communication skills, but the role requires Kubernetes expertise at level 6 and responses indicated level 3 - not a match for this timeline.” On-hold: “Meets 4 of 5 core requirements - hold pending final-round interviews with two remaining candidates, reassess by [date].” The common thread across every example: specific observations, clear ratings, and a recommendation any hiring manager can act on.

How do you write positive feedback after an interview?

To write positive post-interview feedback, rate each strength against the role’s actual requirements and back every score with evidence from the conversation. Start with the candidate’s strongest performance area, cite a specific example (“explained the caching strategy and walked through three failure scenarios unprompted”), then note the rating and any remaining concerns. Close with a clear recommendation: Strong Hire or Hire. The goal is to give the hiring committee enough information to make a confident decision - not just to say the candidate impressed you, but to explain precisely why. Teams using AI recruiting tools alongside structured interview scorecards find the initial screening already filters for role fit, making positive feedback both easier to write and more credible.

Better Interviews Start Before the Interview

Structured interview feedback isn’t just about documentation. It’s about building a hiring process where every decision is specific, evidence-based, and defensible. The interview feedback examples and templates in this guide give you a starting point for strong hires, soft rejects, on-hold decisions, panel debriefs, skills assessments, and cultural fit evaluations.

But the best feedback in the world can’t fix a bad pipeline. If the candidates reaching your interview stage aren’t qualified, your interviewers are wasting time writing detailed feedback about people who were never going to work out. Real efficiency gains come from pairing structured interviews with AI-powered sourcing that surfaces candidates who actually match your requirements from the start.

For recruiting teams wanting better interview candidates, Pin stands out as the best AI sourcing platform - trusted by teams reporting 95% better candidate quality than their previous sourcing methods, with zero demographic data fed to the AI. Its bias-free screening surfaces qualified applicants without exposure to names, gender, or protected characteristics. When your pipeline is already strong, structured feedback becomes a performance multiplier rather than a corrective tool.

Source better candidates to interview with Pin →