The Colorado Artificial Intelligence Act takes effect on February 1, 2026. If your organization uses AI in hiring decisions and has operations or applicants in Colorado, you need to prepare now.
This guide explains what the Colorado AI Act requires and how it differs from New York City's Local Law 144 (LL144).
## What Is the Colorado AI Act?
The Colorado AI Act (SB 24-205) is one of the most comprehensive state AI regulations in the United States. Unlike NYC LL144, which focuses specifically on automated employment decision tools, Colorado's law covers all "high-risk AI systems" used in "consequential decisions."
Consequential decisions include employment, but also education, housing, credit, healthcare, insurance, and legal services. For this article, we'll focus on the employment implications.
## What Counts as High-Risk AI in Employment?
Under the Colorado AI Act, high-risk AI in employment includes systems used for:
- Hiring and recruitment decisions
- Termination decisions
- Compensation and benefits decisions
- Promotion decisions
- Job assignments or task allocation
- Performance monitoring and evaluation
This is broader than NYC's definition. LL144 covers tools that "substantially assist or replace" hiring decisions. Colorado covers any AI that makes or is a "substantial factor" in these decisions.
## Key Requirements
### 1. Risk Management Policy
Deployers (companies using AI systems) must implement a risk management policy that identifies intended uses, analyzes discrimination risks, and implements mitigation measures.
### 2. Impact Assessment
Before deploying a high-risk AI system, you must conduct an impact assessment documenting the system's purpose, discrimination risks, data sources, performance metrics, and mitigation steps.
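The documentation elements listed above lend themselves to a structured record. Below is a minimal sketch of how a deployer might capture an impact assessment internally; the field names are this article's shorthand, not terms from the statute, and the example system is hypothetical.

```python
from dataclasses import dataclass

# Illustrative record of the elements an impact assessment documents:
# purpose, discrimination risks, data sources, performance metrics,
# and mitigation steps. Field names are shorthand, not statutory terms.
@dataclass
class ImpactAssessment:
    system_purpose: str
    discrimination_risks: list[str]
    data_sources: list[str]
    performance_metrics: dict[str, float]
    mitigation_steps: list[str]

    def missing_sections(self) -> list[str]:
        """Return the names of any empty sections, so gaps surface
        before deployment rather than during an enforcement inquiry."""
        return [name for name, value in vars(self).items() if not value]

# Example: a draft assessment for a hypothetical resume-screening tool.
assessment = ImpactAssessment(
    system_purpose="Rank inbound resumes for recruiter review",
    discrimination_risks=["graduation year may proxy for age"],
    data_sources=["applicant resumes", "historical hiring outcomes"],
    performance_metrics={"selection_rate_ratio": 0.92},
    mitigation_steps=[],  # still empty, so it is flagged below
)
print(assessment.missing_sections())  # ['mitigation_steps']
```

A structure like this also makes it straightforward to diff assessments year over year when a system is retrained or repurposed.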
### 3. Consumer Disclosure
You must notify applicants that AI is being used, explain what it does, and provide information about how to request human review or correct inaccurate data.
### 4. Duty to Avoid Algorithmic Discrimination
Deployers have an affirmative duty to use "reasonable care" to protect consumers from algorithmic discrimination based on protected classes.
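Taken together, the four requirements above form a pre-deployment checklist. The sketch below shows one way to track them; the step names are this article's shorthand for the duties described above, not language from the statute.

```python
# Minimal pre-deployment checklist for the four deployer duties above.
# Step names are this article's shorthand, not statutory language.
REQUIRED_STEPS = (
    "risk_management_policy",
    "impact_assessment",
    "consumer_disclosure",
    "discrimination_safeguards",
)

def outstanding_steps(completed: set[str]) -> list[str]:
    """Return the duties not yet documented, in the order listed above."""
    return [step for step in REQUIRED_STEPS if step not in completed]

# Example: disclosure copy and safeguards review are still in progress.
done = {"risk_management_policy", "impact_assessment"}
print(outstanding_steps(done))
# ['consumer_disclosure', 'discrimination_safeguards']
```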
## How It Differs from NYC LL144
| Aspect | NYC LL144 | Colorado AI Act |
|---|---|---|
| Scope | AEDTs in hiring/promotion only | All high-risk AI in consequential decisions |
| Audit Required | Yes, annual independent audit | Impact assessment (can be internal) |
| Public Posting | Yes, audit summary required | No public posting |
| Appeal Process | Alternative process on request | Right to human review of adverse decisions |
| Enforcement | DCWP, $500-$1,500 per violation | Attorney General (no private right of action) |
## Why You Can't Just Reuse Your LL144 Compliance
If you're already compliant with NYC LL144, you have a foundation, but Colorado requires more:
- Broader scope: You may have AI tools not covered by LL144 (performance monitoring, task allocation) that need assessment.
- Risk management policy: LL144 doesn't require this.
- Impact assessment documentation: Different from the bias audit summary.
- Enhanced disclosure: Colorado requires more detailed consumer disclosure.
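The gaps above can be captured in a simple gap analysis mapping each Colorado requirement to whether an existing LL144 program already covers it. This sketch summarizes the list above; it is an illustration of the comparison, not legal advice.

```python
# Gap analysis: does an LL144-compliant program already satisfy each
# Colorado requirement? Values summarize the list above (illustrative).
LL144_COVERAGE = {
    "bias_analysis_of_hiring_tools": True,   # LL144 audit is a foundation
    "risk_management_policy": False,         # not required by LL144
    "impact_assessment_documentation": False,  # differs from audit summary
    "enhanced_consumer_disclosure": False,   # Colorado requires more detail
}

def colorado_gaps(coverage: dict[str, bool]) -> list[str]:
    """Return the Colorado requirements an LL144 program leaves open."""
    return [item for item, covered in coverage.items() if not covered]

print(colorado_gaps(LL144_COVERAGE))
# ['risk_management_policy', 'impact_assessment_documentation',
#  'enhanced_consumer_disclosure']
```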
## How Paritas Helps
A Paritas bias audit directly supports your Colorado impact assessment. Our analysis of discrimination risk by protected class, documented methodology, and remediation recommendations provide the evidence you need to demonstrate you're using "reasonable care" to avoid algorithmic discrimination.
Our Professional and Enterprise plans include multi-jurisdiction compliance mapping that shows how your audit findings apply to both NYC LL144 and Colorado AI Act requirements.