The U.S. Equal Employment Opportunity Commission received 88,531 new discrimination charges in FY2024, a 9.2% jump over the prior year, and recovered nearly $700 million for more than 21,000 victims (EEOC, 2025). Adverse impact in hiring is the statistical fingerprint that often triggers those cases: a facially neutral selection practice that screens out one demographic group at a substantially lower rate than another. Pin’s recruiter-grade AI is built without feeding demographic data into the matching model, but compliance is more than a vendor choice. It starts with understanding what the law measures and where the failures hide.
This guide covers the working definition under the EEOC’s Uniform Guidelines and the four-fifths rule with a clean numerical example. It maps where the disparity shows up across funnel stages, walks the 2026 state-by-state AI hiring law landscape, and closes with a seven-step playbook to reduce risk. Every stat is sourced.
What Is Adverse Impact in Hiring?
Adverse impact is defined by the EEOC’s Uniform Guidelines on Employee Selection Procedures (UGESP):
“A substantially different rate of selection in hiring, promotion or other employment decision which works to the disadvantage of members of a race, sex or ethnic group” (29 CFR § 1607.4(D), 1978).
It is an outcome measure, not a motive measure. A test, screen, interview rubric, or algorithm can produce that disparity without anyone on the hiring team intending to discriminate.
How Does Adverse Impact Differ from Disparate Treatment and Disparate Impact?
Adverse impact is the statistical evidence; disparate impact is the legal theory it triggers; disparate treatment is the intentional-discrimination cousin. Title VII of the Civil Rights Act of 1964 recognizes two separate theories of liability. Disparate treatment requires proof of intentional discrimination: the recruiter who refuses to interview applicants over 50 because of their age. Disparate impact requires no intent; the Supreme Court established it in Griggs v. Duke Power Co., 401 U.S. 424 (1971), and Congress codified it in the Civil Rights Act of 1991. Conflating the three is the most common mistake in HR compliance reviews, and the practical consequence is a selection process that is over-defended on intent and under-defended on outcomes.
Bottom line:
- Adverse impact is statistical, not motivational. It can exist even when a hiring team designed the selection process in good faith. The four-fifths rule is the federal threshold for “substantially different” selection rates.
- Disparate impact is still actionable in 2026. A federal executive order in April 2025 directed the EEOC to deprioritize disparate-impact-only enforcement, but private plaintiffs retain full standing under Title VII. State laws in California, Illinois, Colorado, and New York City keep the same standard.
- AI tools amplify the risk. A University of Washington study of 3M+ resume comparisons found large language models preferred white-associated names 85% of the time and Black-associated names just 9% (UW, 2024). Algorithmic selection inherits and concentrates historical bias.
- Pin’s matching model never receives demographic data. Names, gender, and protected characteristics are stripped before any AI scoring, and Pin pulls from professional networks, GitHub, Stack Overflow, patents, and the broader web rather than a single biased source.
- The mitigations are well-known and evidence-backed. Structured interviews, multi-measure assessments, ongoing 4/5 monitoring, skills-based hiring, and anonymized review each have peer-reviewed support behind them.
What we’re seeing. The pattern that surprises most recruiters in our 2026 user survey is that selection-rate disparities rarely appear at offer. They appear at the resume screen, where they are invisible without per-stage tracking. A team will pull 200 applicants for a role, score them in an ATS, and surface the top 30 to the hiring manager. If the screening criteria correlate with a protected characteristic, the gap is already baked in by the time anyone interviews a candidate.
Pin’s per-stage funnel analytics break out selection rates by stage so recruiters catch the failure where it happens, not at the offer letter, when the only fix is starting over. Having built Interseller before Pin, the team has watched recruiters spend weeks investigating offer-stage disparities that originated in a keyword filter five steps upstream. The earlier you measure, the cheaper the fix.
How Does the Four-Fifths Rule Work?
The four-fifths rule, also called the 4/5 rule or 80% rule, is the federal threshold that flags a selection-rate disparity as legally significant. From 29 CFR § 1607.4(D):
“A selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate for the group with the highest rate will generally be regarded by the Federal enforcement agencies as evidence of adverse impact” (UGESP, 1978).
In practice, the calculation takes four steps: compute each group’s selection rate, identify the highest rate, divide every other group’s rate by that highest rate, and compare each ratio against 80%. Take a hiring round with 135 total applicants, 72 non-minority and 63 minority, who all sit for the same written test as part of the screen.
| Group | Applicants | Hires | Selection rate |
|---|---|---|---|
| Non-minority | 72 | 7 | 9.7% |
| Minority | 63 | 3 | 4.8% |
The non-minority group has the highest selection rate at 9.7%. The adverse impact ratio for the minority group is 4.8% ÷ 9.7% ≈ 49%. Because 49% is less than 80%, the four-fifths rule indicates adverse impact for that minority group. The same hiring round can pass on one demographic dimension and fail on another. If 4 of 51 female applicants (7.8%) and 6 of 84 male applicants (7.1%) from that same pool were hired, the impact ratio for the lower-rate male group is 7.1% ÷ 7.8% = 91%, which clears the threshold.
Two operational rules trip up most teams. First, you have to compute the impact ratio separately for every group that makes up at least 2% of the applicant pool. Second, a failure on the 4/5 rule is not automatically a finding of discrimination. UGESP gives the employer two paths to defend the practice: validate that the selection procedure is “job related for the position in question and consistent with business necessity,” or replace it with a less-discriminatory alternative that serves the same business need. The 4/5 rule triggers that obligation; it does not prove the case.
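For teams that want to automate the check, here is a minimal sketch of the same calculation in Python, assuming applicant and hire counts per group are already tallied. The `impact_ratios` helper and the group labels are illustrative, not part of any standard library; the 2% floor and the 0.8 threshold come from 29 CFR § 1607.4(D).

```python
# A minimal 4/5 check. Counts mirror the worked example above.

def impact_ratios(pool: dict, min_share: float = 0.02) -> dict:
    """pool maps group -> (applicants, hires). Returns each group's
    selection rate, its ratio against the highest-rate group, and a
    below-4/5 flag, skipping groups under min_share of the pool."""
    total = sum(applicants for applicants, _ in pool.values())
    rates = {
        group: hires / applicants
        for group, (applicants, hires) in pool.items()
        if applicants / total >= min_share
    }
    top = max(rates.values())
    return {g: (r, r / top, r / top < 0.8) for g, r in rates.items()}

for group, (rate, ratio, flagged) in impact_ratios(
    {"non-minority": (72, 7), "minority": (63, 3)}
).items():
    status = "FLAG: below 4/5" if flagged else "clears 4/5"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.1%}, {status}")
```

Run against the worked example, the minority group comes out at roughly a 49% ratio and is flagged; swap in the gender counts from the same pool (`{"female": (51, 4), "male": (84, 6)}`) and the male group’s 91% ratio clears.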
Three caveats from the EEOC are worth holding in mind. Very small samples can produce misleading ratios in either direction. Statistically significant gaps can constitute a violation even when the 4/5 ratio is met. And failure to maintain selection-rate records lets the agency infer the disparity regardless of whether it has been measured.
Where Does Adverse Impact Show Up in the Hiring Funnel?
Most teams audit adverse impact only at the final hiring decision. Looking only at offers misses where the disparity actually originated. Across the standard hiring workflow, every filter is a potential failure point.
Sourcing. The pool you start with caps every downstream rate. Sourcing exclusively from a single network, university list, or referral chain narrows the demographic mix before any selection happens. Multi-source sourcing is the upstream lever, which is part of why Pin pulls candidates from professional networks, GitHub, Stack Overflow, patents, and academic publications rather than one platform.
Resume and application screening. Keyword filters, GPA thresholds, “top school” lists, and degree requirements all carry well-documented disparate impact. A study by the Burning Glass Institute and Harvard Business School found that companies publicly dropping degree requirements often did not change who they hired. Fewer than 1 in 700 large-firm hires reflected the new policy (2024).
Pre-hire assessments. Cognitive ability tests have the largest demographic score gaps of any common predictor. A 2024 UK meta-analysis of more than 2 million observations reported general mental ability gaps of d = 0.65 for Black applicants and d = 0.33 for Asian applicants relative to white applicants (European Journal of Work and Organizational Psychology, 2024). The validity of those tests does not eliminate the adverse impact obligation.
Interviews. Unstructured interviews show both lower predictive validity and higher demographic disparity than structured interviews. The structure of the interview, not the interviewer’s intent, drives most of the gap.
Background and reference checks. Overly broad criminal history filters disproportionately exclude Black and Hispanic applicants. EEOC guidance on the use of arrest and conviction records in hiring requires individualized assessment and job-relatedness for any disqualifying record. Modern background check tools increasingly support adjudication matrices that fold this into the workflow.
Offer and negotiation. Pay-history requests, anchor-low offers, and inflexible negotiation policies create disparities that show up in compensation studies long after the hire is closed.
Algorithmic Adverse Impact and the AI Hiring Stack
AI in hiring has crossed the inflection point. SHRM’s 2025 Talent Trends report (n=2,040) found that 43% of organizations now use AI in HR tasks, up from 26% in 2024, and 44% of those using AI for recruiting use it to screen resumes (SHRM, 2025). That adoption rate is also a risk concentration. When an algorithmic filter sits in front of every applicant, a small per-applicant bias becomes a large aggregate disparity.
The University of Washington’s October 2024 audit of three large language models put numbers on it. Across more than 3 million resume comparisons against 550+ real job listings, the models preferred white-associated names 85% of the time over Black-associated names, which were preferred only 9% of the time (UW, 2024). Black male-associated names were never preferred over white male names in the sample. The bias was not designed in; it was inherited from the training data and surfaced through hiring decisions.
The legal exposure is concrete. In September 2023, iTutorGroup paid $365,000 to settle the EEOC’s first AI-discrimination lawsuit. The hiring software had automatically rejected female applicants over 55 and male applicants over 60, affecting more than 200 qualified U.S. applicants (EEOC, 2023). In May 2025, a federal court conditionally certified an ADEA collective action in Mobley v. Workday covering all applicants 40 and older who applied through Workday’s AI screening tools since September 2020, potentially expanding into millions of plaintiffs (Holland & Knight, 2025).
Two practical implications for any team running algorithmic resume screening. First, employers remain liable for adverse impact caused by third-party vendor AI; “the vendor’s tool did it” is not a Title VII defense. Second, demand a bias audit, broken out by protected group, from any vendor before signing. The EEOC’s 2023 technical assistance on AI selection tools was withdrawn from the agency website in January 2025, but the underlying Title VII obligations were not amended. Those obligations remain enforceable even though the document is no longer hosted on eeoc.gov.
What Does the 2026 Legal Landscape Look Like?
A December 2025 audit by the New York State Comptroller reviewed just 32 companies for NYC Local Law 144 compliance and found at least 17 instances of potential non-compliance. During the same window, the enforcing agency had received only two AEDT complaints (NY OSC, 2025). That enforcement gap is the headline number for AI hiring compliance heading into 2026: the underlying obligation has teeth that the formal complaint pipeline has not yet exercised.

Federal enforcement also shifted in 2025 but did not collapse. An April 2025 executive order titled “Restoring Equality of Opportunity and Meritocracy” directed the EEOC to close pending administrative charges based solely on disparate impact theory within 45 days (Duane Morris, 2025). The order does not change the statute, only enforcement priority, and private plaintiffs retain full standing to sue under Title VII.

State laws, meanwhile, have moved in the opposite direction, building out detailed AI-in-hiring rules with their own private rights of action.
A multi-state employer in 2026 has to track at least five overlapping regimes:
| Jurisdiction | Law | Effective | Standard | Private right of action |
|---|---|---|---|---|
| New York City | Local Law 144 (AEDT) | July 5, 2023 | Disparate impact; annual independent bias audit required | No (DCWP enforcement, $500–$1,500/day) |
| California | Automated Decision Systems regulations | October 1, 2025 | Disparate impact via FEHA; vendor liability | Yes |
| Illinois | HB 3773 (AI in Employment) | January 1, 2026 | “Has the effect of” discriminating; notice required | Yes; up to $70,000 for repeat offenders |
| Texas | HB 149 (Responsible AI Governance Act) | January 1, 2026 | Intent required; disparate impact alone insufficient | Limited |
| Colorado | SB 24-205 (Colorado AI Act) | June 30, 2026 | Reasonable care to prevent algorithmic discrimination; impact assessment required | Via Attorney General |
NYC Local Law 144 has been on the books since 2023 and is the most-watched implementation. A 2026 lawsuit can cite the OSC audit findings to establish notice, even when the formal complaint pipeline is empty. Colorado’s SB 24-205 is the most demanding regime for employers using AI hiring tools: an annual impact assessment, a documented risk management program, pre-decision notice to candidates, and a right of appeal. Texas’s HB 149 cuts the other direction, requiring proof of intent and explicitly rejecting disparate-impact-only claims. The result is that a single employer can face contradictory standards across jurisdictions.
How to Reduce Adverse Impact: 7 Evidence-Backed Practices
Seven practices have peer-reviewed support for reducing adverse impact: structured interviews, multi-measure assessments, job-relatedness validation, ongoing 4/5 monitoring, skills-based hiring, algorithmic auditing for AI tools, and anonymized resume review. Mitigation works when it is built into the selection design, not bolted on after the fact.
1. Use structured interviews. A 2023 update to the predictive-validity meta-analysis (Sackett, Zhang, Berry & Lievens, Industrial and Organizational Psychology) put structured interviews at mean validity r = .42, the highest of any common predictor, while producing substantially smaller demographic gaps than cognitive ability tests (Sackett et al., 2023). Same questions, same scoring rubric, multiple raters.
2. Stack multiple measures. No single screen should be the sole gate. Combining a structured interview, a work sample, and a job-relevant skills test reduces the demographic gap of any one method while improving overall validity. The TestGorilla 2024 State of Skills-Based Hiring survey reported that 91% of employers using two or more measurement methods were satisfied with their hires versus 80% for single-measure programs.
3. Validate every selection procedure. UGESP recognizes three validation strategies (criterion-related, content, and construct), and the validation evidence should be documented and refreshed when the role changes. Off-the-shelf pre-employment assessments often ship with technical manuals; if a vendor cannot supply one, that is the buying signal to keep looking.
4. Monitor selection rates by stage, every cycle. UGESP requires records to be maintained, and the EEOC can infer adverse impact when records are missing. Build a per-stage dashboard (resume screen, assessment, interview, offer) and recompute the 4/5 ratio each hiring round; a monitoring sketch follows this list. Pin’s recruiting analytics surface stage-by-stage selection rates without exporting demographic data into the matching model.
5. Adopt skills-based hiring. TestGorilla’s 2024 report found 81% of employers now use skills-based hiring, up from 56% in 2022, and 90% of adopters report better diversity outcomes. Removing degree, GPA, and “top school” filters has the largest single demographic-mix lift available short of an active sourcing change.
6. Audit AI vendors. Require, in the master services agreement, an annual independent bias audit by protected group, plus the right to terminate without penalty if the audit fails. NYC LL 144 supplies a workable audit template; even outside New York, the framework is defensible. Avoid vendors that train scoring models on names, photographs, or anything correlated with demographics.
7. Anonymize the resume screen. Strip name, address, graduation year, and university from the initial review. Given the University of Washington study showing an 85% / 9% LLM preference for white-associated names, any AI-assisted screen that sees a name carries elevated risk; a field-stripping sketch follows below.
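Item 4’s dashboard can be the same ratio check run once per funnel stage. A minimal sketch, assuming per-stage entered/passed counts by group are available from an ATS export; the stage names, group labels, and counts below are all illustrative (and the later stages are deliberately tiny, so the small-sample caveat above applies):

```python
# Per-stage 4/5 monitoring. Each stage maps group -> (entered, passed).
FUNNEL = {
    "resume screen": {"group_a": (120, 40), "group_b": (80, 15)},
    "assessment":    {"group_a": (40, 20),  "group_b": (15, 6)},
    "interview":     {"group_a": (20, 8),   "group_b": (6, 2)},
    "offer":         {"group_a": (8, 4),    "group_b": (2, 1)},
}

def stage_report(funnel: dict) -> None:
    """For each stage, compute each group's pass rate (passed / entered)
    and flag any impact ratio below 0.8 against the highest-rate group."""
    for stage, groups in funnel.items():
        rates = {g: passed / entered for g, (entered, passed) in groups.items()}
        top = max(rates.values())
        for g, rate in rates.items():
            ratio = rate / top
            flag = "  <-- below 4/5" if ratio < 0.8 else ""
            print(f"{stage:>14} | {g}: {rate:.0%} pass, ratio {ratio:.0%}{flag}")

stage_report(FUNNEL)
```

In this made-up funnel, only the resume screen fails the check; the offer-stage rates look identical even though the damage was done upstream, exactly the pattern described in the “What we’re seeing” note earlier.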
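And for item 7, a sketch of first-pass anonymization, assuming candidate records arrive as flat dictionaries. The field names are assumptions; map them to your own ATS export schema.

```python
# Drop fields that identify or proxy for demographics before any
# AI-assisted scoring sees the record.
REDACT = {"name", "address", "email", "graduation_year", "university", "photo_url"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the candidate record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in REDACT}

record = {
    "name": "Jordan Smith",
    "university": "State U",
    "graduation_year": 2019,
    "skills": ["python", "sql", "terraform"],
    "years_experience": 6,
}

print(anonymize(record))
# {'skills': ['python', 'sql', 'terraform'], 'years_experience': 6}
```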
What Should You Do When You Find Adverse Impact?
The 4/5 ratio fails. What now? UGESP is explicit that the finding triggers an obligation, not an automatic violation. A defensible response moves through five steps.
Step 1: Document the finding immediately. Record the role, the dates, the applicant pool by group, the selection rates, and the calculated impact ratio. Documentation is its own protection: failure to maintain records lets the EEOC infer adverse impact.
Step 2: Validate the selection procedure. Show that the practice is “job related for the position in question and consistent with business necessity” using one of UGESP’s three validation strategies. If you cannot produce validation evidence, the procedure is presumptively unlawful.
Step 3: Search for less-discriminatory alternatives. This step is not optional under UGESP. If a different selection method serves the same business need with a smaller adverse-impact ratio, the alternative is required. A common example: replacing a single cognitive-ability cutoff with a structured interview plus a work sample.
Step 4: Retest after the change. Recalculate the 4/5 ratio after the alternative is in place. If the ratio still fails, repeat steps 2 and 3. If the ratio passes, document the change and the rationale.
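In code, the retest is the same ratio check pointed at the replacement procedure’s outcomes. A minimal sketch: the “before” counts reuse the worked example from the four-fifths section, and the “after” counts are purely hypothetical.

```python
# Step 4 retest: recompute the impact ratio for the replacement procedure
# and compare it to the documented baseline.

def impact_ratio(a_applicants: int, a_hires: int,
                 b_applicants: int, b_hires: int) -> float:
    """Ratio of the lower selection rate to the higher one."""
    rate_a = a_hires / a_applicants
    rate_b = b_hires / b_applicants
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Before: single cognitive-ability cutoff (the failing round above).
before = impact_ratio(72, 7, 63, 3)   # ~0.49, fails the 4/5 check
# After: structured interview plus work sample (hypothetical counts).
after = impact_ratio(72, 8, 63, 6)    # ~0.86, clears the 4/5 check

verdict = "document and close out" if after >= 0.8 else "repeat steps 2 and 3"
print(f"before: {before:.0%} -> after: {after:.0%} ({verdict})")
```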
Step 5: Communicate the change to anyone affected, internally to the hiring team and externally to candidates whose applications were governed by the previous procedure. Transparency is a separate obligation under several state AI laws (Illinois HB 3773 requires applicant notice when AI is used; Colorado SB 24-205 requires pre-decision notice and a right to appeal).
Frequently Asked Questions
What is adverse impact in hiring?
Adverse impact in hiring is a statistical pattern. A facially neutral selection practice (a test, screen, interview, or algorithm) results in members of a race, sex, ethnic, or other protected group being selected at a substantially lower rate than the highest-rate group. The EEOC’s federal threshold for “substantially lower” is the four-fifths rule.
What is the 4/5 rule in hiring?
The four-fifths rule is codified at 29 CFR § 1607.4(D). A selection rate for any race, sex, or ethnic group below 80% of the rate for the highest-rate group is generally treated by federal enforcement agencies as evidence of adverse impact. It is a rule of thumb that triggers further investigation, not a legal conclusion.
Is adverse impact illegal?
Not by itself. A finding of disparity triggers an obligation under Title VII for the employer to either validate the selection procedure as job-related and consistent with business necessity, or replace it with a less-discriminatory alternative. Failure to do either is what creates legal exposure.
How do you calculate adverse impact?
Compute the selection rate for each protected group (hires divided by applicants), identify the group with the highest rate, then divide each other group’s rate by that highest rate. If any group’s ratio is less than 80% (4/5), adverse impact is indicated. For a worked example with real numbers, see the four-fifths rule section above.
What is the difference between adverse impact and disparate treatment?
Adverse impact is outcome-based and intent-neutral: a statistical disparity caused by a facially neutral practice. Disparate treatment is intent-based and requires showing the employer treated someone less favorably because of a protected characteristic. Both are actionable under Title VII; they require different proof and trigger different defenses.
Does the 2025 executive order eliminate adverse impact liability?
No. The April 2025 executive order directed the EEOC to deprioritize disparate-impact-only enforcement at the federal level, but it did not amend Title VII. Private plaintiffs retain full standing to sue, and state laws in California, Illinois, Colorado, and New York City keep the same disparate-impact standard for hiring decisions.
Where to Start
The 4/5 ratio is a measurement, not a strategy. The strategy is fewer single-method gates, more validated multi-measure assessments, structured interviews scored by multiple raters, anonymized first-pass review, and a per-stage selection-rate dashboard that gets recalculated every hiring round. For teams running AI in any stage of the funnel, the same logic applies one layer up. Demand vendor bias audits by protected group, document validation evidence, and treat algorithmic adverse impact as the same legal obligation as a paper test from 1985.
For compliance-conscious recruiting teams, Pin is the natural fit: zero demographic data fed to AI, SOC 2 Type 2 certified, and 6x more diverse pipelines reported by users in our 2026 survey. Pin draws from professional networks, GitHub, Stack Overflow, patents, and the broader web rather than a single biased source, then exposes per-stage selection-rate analytics so the disparity surfaces early. The compliance work still belongs to the hiring team. The tooling should make the work cheaper to do, not harder to defend.