The main data privacy laws relevant to Artificial Intelligence (AI) 

The main data privacy laws relevant to Artificial Intelligence (AI) fall into two categories: comprehensive data protection regulations that apply to the personal data used by AI, and AI-specific regulations that impose unique requirements based on the AI system’s potential risk.


1. Global AI-Specific Regulation

The most significant and comprehensive AI-specific law is the European Union (EU) AI Act, which directly regulates AI systems based on a risk-based approach:

  • Unacceptable Risk: Bans AI systems that are considered a clear threat to people’s rights (e.g., social scoring).
  • High Risk: Imposes strict obligations on systems that could cause risk to health, rights, or well-being (e.g., AI in recruitment, critical infrastructure, or law enforcement). Obligations include data quality and governance requirements, human oversight, transparency, and risk assessment/mitigation.
  • Limited Risk: Requires specific transparency obligations so users know they are interacting with an AI (e.g., chatbots).
  • Minimal/No Risk: Allows free use with no specific obligations (e.g., email spam filters).
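The four tiers above amount to a lookup from use case to obligation level. A minimal sketch, where the use-case-to-tier mapping is purely illustrative (the Act itself classifies systems by detailed criteria, not by keyword):

```python
# Illustrative mapping of example use cases to EU AI Act risk tiers.
# These labels follow the examples in the list above; they are not an
# official or exhaustive classification.
RISK_TIERS = {
    "social_scoring": "unacceptable",        # banned outright
    "recruitment": "high",                   # strict obligations apply
    "critical_infrastructure": "high",
    "law_enforcement": "high",
    "chatbot": "limited",                    # transparency duties only
    "spam_filter": "minimal",                # no specific obligations
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier for a use case, defaulting to 'minimal'."""
    return RISK_TIERS.get(use_case, "minimal")
```

In practice the classification is a legal analysis, not a dictionary lookup; the sketch only shows how the tiers order the compliance burden.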

2. Comprehensive Data Protection Laws (Global)

These laws govern the processing of personal data—the foundation for many AI systems—and apply regardless of the technology used.

General Data Protection Regulation (GDPR) – EU 🇪🇺

The GDPR is highly relevant to AI, especially in its requirements for:

  • Lawfulness, Fairness, and Transparency: Personal data used to train or operate AI models must have a legal basis, be processed fairly, and be clearly explained to individuals.
  • Purpose Limitation and Data Minimisation: Data should only be collected for specified, explicit, and legitimate purposes and should be limited to what is necessary for those purposes. This is often a challenge for AI, which thrives on large datasets.
  • Data Subject Rights: Individuals have rights to access, rectify, and erase their personal data, which can impact AI training data.
  • Automated Decision-Making and Profiling (Article 22): Individuals have the right not to be subject to a decision based solely on automated processing (including profiling) that produces legal effects or similarly significantly affects them, with limited exceptions (e.g., explicit consent or contractual necessity). AI systems must incorporate human review and explanation in high-stakes contexts.
  • Data Protection Impact Assessments (DPIAs): Required for processing operations likely to result in a high risk to individuals’ rights, which often includes AI systems.
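The Article 22 rule described above is essentially a gate: a decision that is solely automated and legally significant must either fall under an exception or be routed to human review. A hedged sketch of that gate, with hypothetical field names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical representation of an automated decision's GDPR-relevant facts."""
    solely_automated: bool          # no meaningful human involvement
    legally_significant: bool       # legal or similarly significant effects
    explicit_consent: bool = False  # Art. 22(2)(c) exception
    contractual_necessity: bool = False  # Art. 22(2)(a) exception

def requires_human_review(d: Decision) -> bool:
    """True when Article 22 would bar the decision as purely automated
    absent an applicable exception (a simplification for illustration)."""
    if not (d.solely_automated and d.legally_significant):
        return False  # Article 22 does not apply
    # Even where an exception applies, safeguards such as the right to
    # obtain human intervention are still required; this sketch ignores that.
    return not (d.explicit_consent or d.contractual_necessity)
```

Note that real compliance is richer than this Boolean: even under an exception, controllers must implement safeguards such as the right to contest the decision.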

3. US State Comprehensive Privacy Laws

While there is no single federal US data privacy law comparable to the GDPR, a growing number of state laws include specific provisions relevant to AI, particularly around automated decision-making and profiling. Key examples include:

  • California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA): Directs regulations giving consumers opt-out rights over the use of their personal information for “profiling” in furtherance of decisions that produce legal or similarly significant effects. It also includes requirements for data protection assessments for certain high-risk processing activities.
  • Virginia Consumer Data Protection Act (VCDPA), Colorado Privacy Act (CPA), and Connecticut Data Privacy Act (CTDPA): These laws grant consumers the right to opt-out of the processing of their personal data for the purpose of profiling in furtherance of automated decisions that produce legal or similarly significant effects. They also require data protection assessments for high-risk processing, which explicitly includes profiling and targeted advertising.
  • Other States (e.g., Utah, Delaware, Texas): Many other states have enacted comprehensive privacy laws with similar consumer-rights provisions, which in turn constrain how AI systems can operate on personal data.
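The opt-out model shared by the state laws above can be sketched as a simple check: profiling in furtherance of a significant decision may not proceed for a consumer who has opted out. A minimal illustration (the function and field names are hypothetical):

```python
def may_profile(opted_out: bool, effect_is_significant: bool) -> bool:
    """Whether profiling may proceed for a consumer under the opt-out model
    common to VCDPA/CPA/CTDPA (an illustrative simplification, not legal advice).

    Profiling tied to decisions with legal or similarly significant effects
    must be halted for consumers who have exercised their opt-out right.
    """
    if effect_is_significant and opted_out:
        return False
    return True
```

The contrast with the GDPR is the default: Article 22 prohibits such decisions unless an exception applies, while these state laws permit them unless the consumer opts out.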
