ComplyLayer
EU Regulation 2016/679

GDPR & AI Systems

If your AI system processes personal data about individuals in the EU or UK, the GDPR (and its UK counterpart) applies. Understand which articles govern AI, when a DPIA is mandatory, and how to protect data subjects' rights in automated processing.

When Does GDPR Apply to AI?

GDPR applies to any processing of personal data — and most AI systems process personal data. This includes data used to train a model, data input at inference time, and data generated or inferred by the model (such as a predicted score, category, or recommendation about an individual).

GDPR applies to your AI system if it:

  • Is trained on datasets containing personal data
  • Takes personal data as input at runtime (e.g. name, email, health record)
  • Generates inferences or predictions about identifiable individuals
  • Is used to make or support decisions that affect EU/UK residents
  • Involves profiling — automated processing to evaluate personal aspects
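The checklist above can be expressed as a simple screening helper. This is an illustrative sketch only, not a legal test: the class and field names are invented for this example, and a real applicability assessment needs legal review.

```python
from dataclasses import dataclass

# Hypothetical profile of an AI system, mirroring the checklist above.
@dataclass
class AISystemProfile:
    trained_on_personal_data: bool
    takes_personal_data_input: bool
    infers_about_individuals: bool
    affects_eu_uk_individuals: bool
    performs_profiling: bool

def gdpr_likely_applies(profile: AISystemProfile) -> bool:
    """GDPR likely applies if any checklist item is true."""
    return any([
        profile.trained_on_personal_data,
        profile.takes_personal_data_input,
        profile.infers_about_individuals,
        profile.affects_eu_uk_individuals,
        profile.performs_profiling,
    ])

# Example: a recommender trained on purchase histories
recommender = AISystemProfile(True, True, True, True, True)
print(gdpr_likely_applies(recommender))  # True
```

Note the `any(...)`: a single checklist item is enough to bring the system into scope.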

Key GDPR Articles for AI Systems

Several articles have heightened relevance when AI systems process personal data.

Art. 22

Automated Individual Decision-Making

Individuals have the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal or similarly significant effects. Exceptions exist (consent, contract necessity, legal authorisation) but require safeguards: human review on request, the ability to express one's point of view, and the ability to contest the decision.

Profiling · Automated decisions · Human review
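One way to make the Article 22 safeguards concrete in an engineering pipeline is to keep a record for every automated decision that supports human review on request, the subject's statement, and a contest. The sketch below is hypothetical; its field and method names are not from the regulation or any particular product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                          # e.g. "loan_declined"
    solely_automated: bool = True
    human_review_requested: bool = False
    subject_statement: Optional[str] = None
    contested: bool = False
    reviewer_outcome: Optional[str] = None

    def request_human_review(self, statement: str) -> None:
        # Art. 22(3) safeguards: human intervention on request, and the
        # ability to express one's point of view.
        self.human_review_requested = True
        self.subject_statement = statement

    def contest(self) -> None:
        # The subject may also contest the decision itself.
        self.contested = True

    def record_review(self, reviewer_outcome: str) -> None:
        # A human re-examines the case; the automated outcome stays on
        # record for audit, but the decision is no longer solely automated.
        self.reviewer_outcome = reviewer_outcome
        self.solely_automated = False
```

Keeping the original automated outcome alongside the reviewer's outcome preserves an audit trail for regulators and for the data subject.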
Art. 13/14

Transparency & Privacy Notices

When collecting personal data, controllers must provide meaningful information about the logic involved in any automated processing, as well as the significance and envisaged consequences for the data subject. This means AI systems used in consequential decisions must be explained in accessible language.

Transparency · Explainability · Notice
Art. 35

Data Protection Impact Assessment (DPIA)

A DPIA is mandatory before processing that is likely to result in high risk to individuals — specifically including systematic and extensive profiling with significant effects, processing special category data at large scale, and systematic monitoring of public areas. Most high-stakes AI systems will require a DPIA.

DPIA required · High-risk processing · Pre-deployment
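The three Article 35(3) triggers can be captured in a first-pass screening check. A sketch, not the full legal test: supervisory authorities publish longer lists of high-risk operations, so treat this as a floor.

```python
def dpia_required(systematic_extensive_profiling: bool,
                  large_scale_special_category_data: bool,
                  systematic_public_monitoring: bool) -> bool:
    """First-pass screen against the Art. 35(3) triggers.

    Any single trigger makes a DPIA mandatory before processing begins.
    """
    return (systematic_extensive_profiling
            or large_scale_special_category_data
            or systematic_public_monitoring)

# Example: a CV-screening model that profiles applicants at scale
print(dpia_required(True, False, False))  # True
```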
Art. 5

Data Minimisation & Purpose Limitation

Personal data must be adequate, relevant, and limited to what is necessary (minimisation). It cannot be used for purposes incompatible with the original collection purpose (limitation). AI training on historical data must be assessed against both principles.

Data minimisation · Purpose limitation · Lawful basis
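In practice, minimisation means the training pipeline should only ever see the fields necessary for the stated purpose. A minimal sketch, assuming an invented purpose-to-fields mapping and made-up field names:

```python
# Allow-list of fields per processing purpose (illustrative values).
PURPOSE_FIELDS = {
    "credit_scoring": {"income", "existing_debt", "repayment_history"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not necessary for the given purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "income": 42000,
    "existing_debt": 5000,
    "repayment_history": "good",
}
# Name and email never reach the training set.
minimal = minimise(raw, "credit_scoring")
```

Applying the allow-list at the pipeline boundary, rather than inside model code, makes the minimisation decision auditable in one place.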
Art. 25

Privacy by Design

Controllers must implement data protection principles from the earliest stages of AI system design — not as an afterthought. This includes technical measures such as anonymisation, pseudonymisation, and access controls built into the AI pipeline.

Privacy by design · Default settings · Architecture
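Pseudonymisation is one of the technical measures named above, and a common way to build it into a pipeline is a keyed hash over direct identifiers. A sketch under stated assumptions: the key handling shown is illustrative, and because re-identification remains possible with the key, this is pseudonymisation, not anonymisation.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same identifier always maps to the same token, so records can
    still be linked; without the key, the token cannot be reversed.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

# The key must live in a separate secret store, away from the data.
key = b"keep-me-in-a-separate-key-vault"   # illustrative only
token = pseudonymise("alice@example.com", key)
```

A plain unkeyed hash would be weaker here: common identifiers such as email addresses can be recovered by brute-force guessing, which the secret key prevents.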

Conducting a DPIA for AI Systems

A Data Protection Impact Assessment is a structured process for identifying and minimising data protection risks. For AI systems, a DPIA should cover:

1. Describe the Processing

Document what data is collected, how the model uses it, and what outputs are generated.

2. Assess Necessity & Proportionality

Demonstrate that personal data processing is necessary and proportionate to the AI's purpose.

3. Identify & Assess Risks

Enumerate risks to data subjects: discrimination, breach, loss of control, inaccurate decisions.

4. Identify Mitigating Measures

Document technical and organisational measures to address each identified risk.

5. Consult Data Subjects (if needed)

Where appropriate, seek the views of data subjects or their representatives.

6. DPO Review & Sign-off

The Data Protection Officer must review the DPIA and advise on residual risks before deployment.
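The steps above can be tracked as a simple status record, with DPO sign-off as the gate before deployment. This is a hypothetical sketch of such tracking, not a ComplyLayer API; the step identifiers are invented.

```python
from dataclasses import dataclass, field

# The DPIA steps described above, in order (identifiers are illustrative).
DPIA_STEPS = [
    "describe_processing",
    "assess_necessity_proportionality",
    "identify_assess_risks",
    "identify_mitigations",
    "consult_data_subjects",    # where appropriate
    "dpo_review_signoff",
]

@dataclass
class DPIA:
    system_name: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in DPIA_STEPS:
            raise ValueError(f"unknown DPIA step: {step}")
        self.completed.add(step)

    def ready_for_deployment(self) -> bool:
        # DPO review and sign-off must happen before deployment.
        return "dpo_review_signoff" in self.completed
```

Modelling sign-off as an explicit gate makes it easy to block a release pipeline until the assessment is done.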

Data Subject Rights in the AI Context

All standard GDPR rights apply to AI-related processing. Here is how they manifest specifically in AI systems.

Right to Access

Individuals can request a copy of the personal data used in a model or decision, including training data where feasible.

Right to Erasure

Individuals can request deletion of their data — which may require retraining or fine-tuning a model if it has memorised personal information.

Right to Rectification

Inaccurate personal data used by an AI system must be corrected, and decisions re-evaluated if necessary.

Right to Object to Profiling

Individuals can object at any time to profiling. The controller must stop unless it can demonstrate compelling legitimate grounds; where the profiling relates to direct marketing, the right to object is absolute.

Right to Human Review

For Article 22 decisions, individuals can request that a human re-examine any automated decision and explain the reasoning.

Right to Data Portability

Personal data provided by a user must be available in a machine-readable format, including data that has been input into or processed by an AI system.
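For portability, "machine-readable format" in practice usually means a structured export such as JSON. A minimal sketch, assuming an invented in-memory store layout:

```python
import json

def export_subject_data(store: dict, subject_id: str) -> str:
    """Export a subject's data as JSON for a portability request.

    Covers data the subject provided (inputs) and, where the controller
    chooses to include them, outputs the AI system produced about them.
    """
    record = store.get(subject_id, {})
    return json.dumps(
        {"subject_id": subject_id, "data": record},
        indent=2, ensure_ascii=False,
    )

# Illustrative store: user-provided data plus model outputs about them.
store = {
    "u123": {
        "email": "alice@example.com",
        "inputs": ["loan application 2024-05"],
        "model_outputs": [{"score": 0.72}],
    }
}
export = export_subject_data(store, "u123")
```

Because the export is plain JSON, the subject can take it to another controller, which is the point of the portability right.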



Automate GDPR Compliance for Your AI Systems

ComplyLayer generates DPIA templates, maps your AI systems to GDPR articles, and tracks data subject rights requests — all in one platform.
