AI regulation is no longer theoretical. The EU AI Act is law. GDPR enforcement actions against AI companies are accelerating. China requires algorithmic registration. Every company building or deploying AI must now navigate a complex, fragmented, and rapidly evolving regulatory landscape.
Ignoring this is not an option: fines under the EU AI Act reach €35 million or 7% of global annual turnover, whichever is higher.
The EU AI Act
The EU AI Act is the world's first comprehensive AI law. It uses a risk-based framework that categorises AI systems into four tiers:
Unacceptable Risk (Banned)
Social scoring by governments (China-style citizen ratings)
Real-time biometric surveillance in public spaces (with narrow law enforcement exceptions)
Manipulative AI that exploits vulnerabilities (e.g., targeting children or people with disabilities)
Emotion recognition in workplaces and educational institutions
High Risk (Strict Obligations)
Systems used in sensitive areas such as hiring, credit scoring, education, and critical infrastructure
High-risk systems must: maintain detailed technical documentation, implement human oversight mechanisms, undergo conformity assessments, and register in an EU-wide database.
Limited Risk (Transparency Obligations)
Chatbots must disclose they are AI
Deepfake content must be labelled
Emotion recognition systems must inform users
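The transparency obligations above are operational requirements, not just policy. As a minimal illustration, a chatbot's disclosure duty could be implemented as a wrapper that prefixes every reply; the function name and disclosure wording below are hypothetical, not mandated text:

```python
def with_ai_disclosure(reply: str) -> str:
    """Prepend a disclosure so the user knows they are talking to an AI.
    Wording is illustrative only; the Act requires disclosure, not this text."""
    return "[Automated response - you are chatting with an AI assistant]\n" + reply

print(with_ai_disclosure("Your order has shipped."))
```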
Minimal Risk (No Regulation)
Spam filters, AI in video games, inventory optimisation
The vast majority of AI systems fall here
The EU AI Act classifies AI systems into four risk tiers – most systems fall into minimal risk.
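As a rough sketch (not a legal classification tool), the four-tier framework can be pictured as a lookup from use case to tier. The example use cases are taken from this section; the mapping table and function names are hypothetical:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no regulation"

# Hypothetical mapping of the example use cases above to their tiers.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "hiring tool": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to MINIMAL, where most systems fall."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)

print(classify("Spam filter").name)
print(classify("Social scoring").name)
```

A real classification depends on deployment context and the Act's annexes, which is why the default here is only a placeholder.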
🧠 Quick Check
Under the EU AI Act, which AI application is classified as unacceptable risk?
GDPR and AI
The General Data Protection Regulation was not written for AI, but its principles apply powerfully:
Right to explanation – Article 22 gives individuals the right not to be subject to purely automated decisions with legal effects. If an AI denies your loan, you can demand a human review and a meaningful explanation.
Data minimisation – Collect only what is necessary. Training a model on "all available data" likely violates this principle.
Purpose limitation – Data collected for one purpose cannot be repurposed for AI training without a legal basis.
Right to erasure – If a user requests deletion, what happens to models trained on their data? "Machine unlearning" is an active research area precisely because of this.
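The data-handling principles above suggest concrete metadata that an AI pipeline should carry on every record. A minimal sketch, assuming a hypothetical schema (these field names are not a GDPR standard):

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One personal-data record with GDPR-relevant metadata attached.
    Illustrative schema only."""
    subject_id: str
    payload: str
    purpose: str        # purpose limitation: what this was collected for
    erased: bool = False  # right to erasure: tombstone rather than silent reuse

def training_set(records: list[Record], declared_purpose: str) -> list[Record]:
    """Purpose limitation and erasure in one filter: only non-erased records
    collected for the declared purpose are eligible for training."""
    return [r for r in records if r.purpose == declared_purpose and not r.erased]

data = [
    Record("u1", "loan history", purpose="credit scoring"),
    Record("u2", "support chat", purpose="customer service"),
    Record("u3", "loan history", purpose="credit scoring", erased=True),
]
print([r.subject_id for r in training_set(data, "credit scoring")])
```

Note what this filter cannot do: it prevents erased data from entering the *next* training run, but removing a subject's influence from an already-trained model is exactly the open "machine unlearning" problem mentioned above.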
Italy temporarily banned ChatGPT in 2023 over GDPR concerns – a wake-up call for the entire industry.
🤯
The Italian data protection authority (Garante) banned ChatGPT for 30 days in March 2023, making Italy the first Western country to block a major AI tool over privacy concerns. OpenAI restored access only after implementing age verification and a clearer privacy policy.
The United States Approach
The US takes a sector-specific approach rather than a single comprehensive law:
Executive Order on AI (October 2023) – Requires safety testing for powerful AI models, establishes reporting requirements for companies training large models, and directs agencies to develop AI guidelines.
NIST AI Risk Management Framework – Voluntary framework for identifying and mitigating AI risks.
State-level action – Colorado passed the first state AI anti-discrimination law. California's SB 1047 (vetoed) proposed liability for catastrophic AI harms.
SEC guidance – AI-washing (misleading claims about AI use) is under enforcement scrutiny.
The lack of a federal AI law creates a patchwork that is increasingly difficult for companies to navigate.
China's AI Regulations
China has moved faster than any other nation on AI-specific regulation:
Algorithmic recommendation regulations (2022) – Users must be told when algorithms affect what they see and can opt out.
Deep synthesis regulations (2023) – All AI-generated content must be watermarked and labelled.
Generative AI measures (2023) – AI services must align with "core socialist values" and providers are liable for generated content.
Registration requirement – All AI algorithms must be registered with the Cyberspace Administration of China.
🧠 Quick Check
Which jurisdiction requires mandatory registration of all AI algorithms with a government authority?
The UK Approach
The UK has deliberately chosen a pro-innovation, sector-led approach. Rather than a single AI law, existing regulators (FCA, Ofcom, ICO, CMA) apply AI-specific guidance within their domains using five cross-cutting principles: safety, transparency, fairness, accountability, and contestability.
The UK AI Safety Institute (now the AI Security Institute) focuses on frontier model evaluation rather than broad regulation – positioning the UK as a testing ground rather than a rule-maker.
Copyright and AI Training
The most contentious legal battle in AI today: can you train models on copyrighted data?
Key cases shaping the answer:
Getty Images v Stability AI – Training Stable Diffusion on Getty's image library without licence
NYT v OpenAI – The New York Times alleging ChatGPT reproduces its articles
Authors Guild v OpenAI – Thousands of authors claiming their books were used without consent
The outcome of these cases will fundamentally reshape how AI models are built. Some jurisdictions (Japan, Singapore) have created training exceptions. The EU AI Act requires transparency about training data sources. The US fair use doctrine remains untested at this scale.
🤔
Think about it: If courts rule that training on copyrighted data requires licensing, only companies wealthy enough to pay for training data could build foundation models. Would this consolidate AI power in a few corporations, or is it a necessary protection for creators?
Deepfakes and Synthetic Media
The EU AI Act mandates labelling of all AI-generated content
China requires watermarking of synthetic media
Several US states have criminalised non-consensual deepfake pornography
Election-specific deepfake laws are emerging globally (US, India, South Korea)
Technical solutions like C2PA (Coalition for Content Provenance and Authenticity) embed cryptographic provenance data into media – a promising standard backed by Adobe, Microsoft, and the BBC.
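The core idea behind provenance standards can be shown in miniature. The sketch below is *not* the C2PA format (real C2PA embeds signed manifests in JUMBF boxes inside the media file and uses certificate-based signatures); it is a simplified stand-in that signs a JSON manifest with an HMAC, using a hypothetical key, to show why tampering is detectable:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical key; real C2PA uses certificates

def make_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build and sign a toy provenance manifest for a piece of media."""
    manifest = {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,   # e.g. which AI model produced this
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media matches the manifest and the manifest is unaltered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["content_hash"] != hashlib.sha256(media_bytes).hexdigest():
        return False  # media was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

img = b"fake image bytes"
m = make_manifest(img, "example-diffusion-model")
print(verify(img, m))
print(verify(img + b"tampered", m))
```

The design point this illustrates: provenance binds a claim ("AI-generated, by this tool") to the exact bytes of the media, so any edit breaks the chain and is detectable at verification time.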
Compliance Checklist for AI Companies
If you are building AI systems, start here:
Classify your system's risk tier under the EU AI Act
Audit your training data for copyright issues, personal data, and bias
Document everything – model cards, data sheets, impact assessments
Implement human oversight for high-stakes decisions
Build explanation capabilities into your pipeline from day one
Monitor for discriminatory outcomes across protected characteristics
Establish an incident response plan for AI failures
Track regulatory developments – this landscape changes quarterly
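The checklist above can be tracked programmatically. A minimal sketch, assuming a hypothetical tracker (the structure and item wording are taken from this section, not from any regulation):

```python
# The eight checklist items from this section, as a simple gap tracker.
CHECKLIST = [
    "Classify risk tier under the EU AI Act",
    "Audit training data (copyright, personal data, bias)",
    "Document models and data (cards, sheets, impact assessments)",
    "Implement human oversight for high-stakes decisions",
    "Build explanation capabilities into the pipeline",
    "Monitor for discriminatory outcomes",
    "Establish an AI incident response plan",
    "Track regulatory developments",
]

def compliance_gaps(done: set[str]) -> list[str]:
    """Return checklist items not yet completed, in checklist order."""
    return [item for item in CHECKLIST if item not in done]

done = {CHECKLIST[0], CHECKLIST[3]}
gaps = compliance_gaps(done)
print(f"{len(gaps)} of {len(CHECKLIST)} items outstanding")
```

In practice each item would carry evidence (documents, audit reports, sign-offs) rather than a boolean, but even a flat list like this makes gaps visible in a compliance review.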
🧠 Quick Check
Under GDPR Article 22, what right do individuals have regarding automated AI decisions?
🤔
Think about it: You are the CTO of an AI startup launching a hiring tool in both the EU and the US. The EU AI Act classifies this as high-risk. How would you design your compliance strategy to satisfy EU requirements without slowing your US market launch?
📚 Further Reading
EU AI Act Full Text – Complete text with annotations and implementation timeline