US AI & Privacy Compliance
Navigate the patchwork of federal frameworks and state laws governing AI systems in the United States — from NIST AI RMF and FTC guidance to CCPA/CPRA and emerging state AI legislation.
Key Frameworks at a Glance
Unlike the EU, the US has no single comprehensive AI law. Compliance requires navigating multiple overlapping frameworks at the federal and state levels.
NIST AI RMF
AI Risk Management Framework
Voluntary framework from NIST providing a structured approach to managing AI risks across the full AI lifecycle. Widely adopted as the de facto US standard.
FTC Act — Section 5
AI Transparency & Fairness
The FTC actively enforces against unfair or deceptive AI practices, including biased algorithms, false AI claims, and inadequate disclosure of automated decisions.
CCPA / CPRA
California Consumer Privacy Act
Grants California consumers rights regarding automated decision-making and profiling. Businesses must disclose when AI is used and allow consumers to opt out.
EEOC AI Guidance
Employment Bias
EEOC guidance warns that AI hiring tools can violate Title VII and ADA. Employers are liable for discriminatory outcomes even when using third-party AI vendors.
State AI Laws
CO, IL, TX and expanding
Colorado's SB 24-205 (the Colorado AI Act), Illinois's AI Video Interview Act, and Texas's HB 1709 impose AI impact assessments, bias audits, and disclosure requirements on specific AI use cases.
Executive Order on AI
Federal Agency Requirements
The AI EO directs federal agencies to adopt safety standards and risk management practices, influencing procurement requirements for companies contracting with the US government.
NIST AI RMF: The Four Core Functions
The NIST AI Risk Management Framework organises AI risk management into four interconnected functions. Together they provide a lifecycle approach to responsible AI.
GOVERN
Establish organisational policies, accountability structures, and culture for AI risk management. Define roles and responsibilities across the AI lifecycle.
MAP
Identify and categorise AI risks in context. Understand who is affected, what the use case is, and what potential harms could arise from the AI system.
MEASURE
Analyse and assess AI risks using quantitative and qualitative tools. Test for bias, performance gaps, security vulnerabilities, and reliability issues.
MANAGE
Prioritise and treat AI risks according to their severity. Implement risk responses, monitor in production, and maintain an up-to-date risk register.
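The MANAGE function's risk register can be sketched as a simple data structure. This is a hypothetical illustration of how the four functions connect in practice, not part of the NIST framework itself; the class and field names are invented for the example:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register."""
    system: str          # AI system the risk was identified for (MAP)
    description: str     # potential harm in context (MAP)
    severity: Severity   # assessed via testing and analysis (MEASURE)
    mitigation: str      # risk response chosen (MANAGE)
    owner: str           # accountable role (GOVERN)


@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def prioritised(self) -> list[RiskEntry]:
        # MANAGE: treat the highest-severity risks first
        return sorted(self.entries, key=lambda e: e.severity.value, reverse=True)
```

Keeping the register as structured data (rather than a spreadsheet) makes it easy to keep it up to date from production monitoring, as the MANAGE function expects.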
CCPA / CPRA & Automated Decisions
The California Consumer Privacy Act (as amended by CPRA) includes specific provisions affecting AI systems that process California residents' personal information.
Right to Know
Consumers can request disclosure of whether automated decision-making is used and the logic behind it.
Right to Opt Out
Consumers may opt out of automated decision-making used for significant decisions including credit, employment, housing, and insurance.
Right to Correct
Consumers can request correction of inaccurate personal data used to train or feed AI systems.
Data Minimisation
Businesses may only collect personal data reasonably necessary for the disclosed purpose — no over-collection for AI training.
Sensitive Data Restrictions
Stricter rules apply to sensitive categories (health, race, biometrics, location) — frequently inputs to high-stakes AI systems.
CPRA Risk Assessments
Businesses must conduct and document risk assessments before processing personal data in ways that pose significant risk — including many AI use cases.
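The opt-out right above implies a gate in front of any automated decision pipeline. A minimal sketch, assuming a hypothetical consent store that returns the set of decision types a consumer has opted out of (the function and constant names are illustrative, not from the CCPA text or any real API):

```python
# Decision categories the guide lists as "significant" under CPRA opt-out rules
SIGNIFICANT_DECISIONS = {"credit", "employment", "housing", "insurance"}


def may_use_automated_decision(consumer_opt_outs: set[str], decision_type: str) -> bool:
    """Return True if automated decision-making may proceed.

    `consumer_opt_outs` is the set of decision types this consumer has
    opted out of, as retrieved from a (hypothetical) consent store.
    """
    if decision_type in SIGNIFICANT_DECISIONS and decision_type in consumer_opt_outs:
        return False  # route this consumer to human review instead
    return True
```

Checking the opt-out at the entry point of the pipeline, rather than inside each model, keeps the compliance logic in one auditable place.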
Best Practices for US AI Compliance
Maintain an AI system inventory with risk classifications for every AI tool in use
Conduct algorithmic impact assessments before deploying AI in high-stakes decisions
Document model cards for all AI systems including training data, intended use, and known limitations
Implement bias testing and monitoring for AI systems used in employment, credit, or housing decisions
Establish a human review process for consequential AI-assisted decisions
Update privacy notices to disclose AI and automated decision-making to end users
Review third-party AI vendor contracts for compliance representations and liability allocation
Official Sources & Further Reading
Primary regulatory texts and official guidance referenced in this guide.
NIST AI Risk Management Framework 1.0
The definitive US federal framework for managing AI risks across GOVERN, MAP, MEASURE, and MANAGE functions
NIST ↗
FTC — AI Guidance & Resources
Federal Trade Commission guidance on AI transparency, fairness, and consumer protection obligations
FTC.gov ↗
CCPA / CPRA — California Privacy Rights
California Consumer Privacy Act and CPRA regulations — the most comprehensive US state privacy law affecting AI systems
CA Privacy Protection Agency ↗
EEOC — AI & Algorithmic Fairness Guidance
EEOC guidance on using AI in employment decisions and avoiding discriminatory impact under Title VII and ADA
EEOC ↗
NYC Local Law 144 — Automated Employment Tools
New York City law requiring bias audits for automated employment decision tools used in hiring and promotion
NYC DCA ↗
NIST AI RMF Playbook
Actionable guidance, suggested actions, and references for implementing the NIST AI RMF across all four functions
NIST ↗
Simplify US AI & Privacy Compliance
ComplyLayer maps your AI systems to NIST AI RMF, CCPA, and state laws, generating the documentation and audit trails your compliance team needs.
14-day free trial · No credit card required