The legal landscape
- The EEOC has stated that Title VII applies to algorithmic hiring tools. If a vendor's AI causes disparate impact, the employer can still be held liable; outsourcing the tool does not outsource the responsibility.
- NYC Local Law 144 (AEDT) requires bias audits before use, plus annual re-audits and candidate notification.
- EU AI Act classifies hiring AI as "high-risk" requiring conformity assessments.
- Several states (Illinois, California among them) have enacted AI-in-hiring laws with effective dates phased in over 2025–2026.
What a bias audit looks like
- Run the AI on a representative dataset.
- Compute selection rate by protected class (gender, race, age, disability).
- Apply the four-fifths (4/5ths) rule: if any group's selection rate is less than 80% of the highest group's rate, that is evidence of potential disparate impact.
- Document and either fix the model or stop using it.
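The rate computation above is simple enough to script. A minimal sketch in Python — the input shape (a list of `(group, selected)` pairs) is an assumption for illustration, not a mandated audit format:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Selection rate (selected / total) per group.

    outcomes: iterable of (group, selected) pairs, selected is a bool.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the top group's rate.

    Returns {group: ratio} for every group under the threshold.
    """
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Hypothetical audit data: group A selected 40/100, group B selected 25/100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)
rates = selection_rates(outcomes)
flags = four_fifths_flags(rates)   # B's ratio is 0.25/0.40 = 0.625 → flagged
```

The rate-over-top-rate quantity is what NYC Local Law 144 calls the "impact ratio." A ratio below 0.8 mirrors the EEOC's four-fifths guideline, but it is a screening heuristic, not a legal conclusion — small samples in particular need statistical care.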
Practical guidance
- Vendor due diligence: Get the audit report from your AI vendor. Don't take "we audit internally" as an answer.
- Don't black-box. If you can't explain why the AI rejected a candidate, you can't defend it.
- Keep a human in the loop. AI surfaces, humans decide. This single principle eliminates much of the legal exposure.
- Document overrides. When a recruiter goes against the AI ranking, capture the reason. A pattern of overrides correlated with a protected class is a red flag.
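The override-pattern check can be monitored with the same kind of rate comparison used in the audit itself. A sketch, assuming override records are available as `(group, overrode_ai)` pairs (a hypothetical shape, not a specific HRIS schema):

```python
from collections import defaultdict

def override_rate_by_group(decisions):
    """Share of decisions per group where the recruiter overrode the AI ranking.

    decisions: iterable of (group, overrode_ai) pairs, overrode_ai is a bool.
    """
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for group, overrode in decisions:
        totals[group] += 1
        if overrode:
            overrides[group] += 1
    return {g: overrides[g] / totals[g] for g in totals}

# Hypothetical log: group X overridden in 3/10 decisions, group Y in 1/10.
decisions = ([("X", True)] * 3 + [("X", False)] * 7
             + [("Y", True)] * 1 + [("Y", False)] * 9)
rates = override_rate_by_group(decisions)
```

A large gap between groups' override rates is the red-flag pattern described above; as with the selection-rate check, it is a prompt for review, not proof of bias on its own.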