⚖️ AI Forest • Advanced • ⏱️ 16 min read

AI Regulation


AI regulation is no longer theoretical. The EU AI Act is law. GDPR enforcement actions against AI companies are accelerating. China requires algorithmic registration. Every company building or deploying AI must now navigate a complex, fragmented, and rapidly evolving regulatory landscape.

Ignoring this is not an option: fines under the EU AI Act reach €35 million or 7% of global annual turnover, whichever is higher.

The EU AI Act

The EU AI Act is the world's first comprehensive AI law. It uses a risk-based framework that categorises AI systems into four tiers:

Unacceptable Risk (Banned)

  • Social scoring by governments (China-style citizen ratings)
  • Real-time biometric surveillance in public spaces (with narrow law enforcement exceptions)
  • Manipulative AI that exploits vulnerabilities (e.g., targeting children or people with disabilities)
  • Emotion recognition in workplaces and educational institutions

High Risk (Heavily Regulated)

  • AI in recruitment and employment decisions
  • Credit scoring and insurance risk assessment
  • Medical diagnostic systems
  • Critical infrastructure management (energy, water, transport)
  • Law enforcement and border control systems

High-risk systems must: maintain detailed technical documentation, implement human oversight mechanisms, undergo conformity assessments, and register in an EU-wide database.

Limited Risk (Transparency Obligations)

  • Chatbots must disclose they are AI
  • Deepfake content must be labelled
  • Emotion recognition systems must inform users
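The chatbot disclosure duty is straightforward to prototype. The sketch below is a minimal illustration, not an official compliance pattern (all names are hypothetical): it wraps a reply function so each session's first response carries an AI disclosure.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def with_disclosure(reply_fn):
    """Wrap a chatbot reply function so the first response in each session
    carries the AI disclosure. Real deployments would also surface this in
    the UI and terms of service, not just the message stream."""
    disclosed_sessions = set()

    def reply(session_id, message):
        text = reply_fn(message)
        if session_id not in disclosed_sessions:
            disclosed_sessions.add(session_id)
            return f"{AI_DISCLOSURE}\n\n{text}"
        return text

    return reply

# Usage: only the first reply in session "s1" includes the disclosure.
bot = with_disclosure(lambda msg: f"Echo: {msg}")
```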

Minimal Risk (No Regulation)

  • Spam filters, AI in video games, inventory optimisation
  • The vast majority of AI systems fall here
[Pyramid diagram: EU AI Act risk tiers, from unacceptable at the top to minimal at the base.]
The EU AI Act classifies AI systems into four risk tiers; most systems fall into minimal risk.
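As a rough illustration of the tiering, the sketch below maps a handful of use-case labels drawn from the lists above to risk tiers. The labels and mapping are illustrative only; real classification requires legal analysis of the Act's annexes and prohibited-practices list.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping condensed from the lists above; not a legal tool.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "recruitment": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown cases need a manual assessment
    rather than a silent default."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case {use_case!r}: needs legal review")
```

Defaulting unknown use cases to an error rather than to minimal risk mirrors how a compliance process should behave: unclassified systems get reviewed, not waved through.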
🧠 Quick Check

Under the EU AI Act, which AI application is classified as unacceptable risk?

GDPR and AI

The General Data Protection Regulation was not written for AI, but its principles apply powerfully:

  • Right to explanation: Article 22 gives individuals the right not to be subject to solely automated decisions with legal or similarly significant effects. If an AI denies your loan, you can demand human review and a meaningful explanation.
  • Data minimisation: collect only what is necessary. Training a model on "all available data" likely violates this principle.
  • Purpose limitation: data collected for one purpose cannot be repurposed for AI training without a legal basis.
  • Right to erasure: if a user requests deletion, what happens to models trained on their data? "Machine unlearning" is an active research area precisely because of this.
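One practical consequence of the right to erasure: you need to know which models were trained on whose data before you can act on a deletion request. A minimal sketch, assuming a simple in-memory ledger keyed by user ID (all names hypothetical):

```python
from collections import defaultdict

class TrainingDataLedger:
    """Hypothetical ledger linking users to the training runs that used
    their data, so an erasure request can identify which models may need
    retraining or machine unlearning."""

    def __init__(self):
        self._user_to_runs = defaultdict(set)

    def record(self, user_id: str, run_id: str) -> None:
        """Note that a training run consumed this user's data."""
        self._user_to_runs[user_id].add(run_id)

    def erasure_impact(self, user_id: str) -> set:
        """Remove the user's entry and return the affected training runs."""
        return set(self._user_to_runs.pop(user_id, set()))
```

A production system would persist this ledger and tie each run ID to deployed model versions; the point is that erasure compliance starts at data-collection time, not at request time.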

Italy temporarily banned ChatGPT in 2023 over GDPR concerns, a wake-up call for the entire industry.

🤯
The Italian data protection authority (Garante) banned ChatGPT for 30 days in March 2023, making Italy the first Western country to block a major AI tool over privacy concerns. OpenAI restored access only after implementing age verification and a clearer privacy policy.

The United States Approach

The US takes a sector-specific approach rather than a single comprehensive law:

  • Executive Order on AI (October 2023): requires safety testing for powerful AI models, establishes reporting requirements for companies training large models, and directs agencies to develop AI guidelines.
  • NIST AI Risk Management Framework: a voluntary framework for identifying and mitigating AI risks.
  • State-level action: Colorado passed the first state AI anti-discrimination law. California's SB 1047 (vetoed) proposed liability for catastrophic AI harms.
  • SEC guidance: AI-washing (misleading claims about AI use) is under enforcement scrutiny.

The lack of a federal AI law creates a patchwork that is increasingly difficult for companies to navigate.

China's AI Regulations

China has moved faster than any other nation on AI-specific regulation:

  • Algorithmic recommendation regulations (2022): users must be told when algorithms affect what they see and can opt out.
  • Deep synthesis regulations (2023): all AI-generated content must be watermarked and labelled.
  • Generative AI measures (2023): AI services must align with "core socialist values" and providers are liable for generated content.
  • Registration requirement: all AI algorithms must be registered with the Cyberspace Administration of China.
🧠 Quick Check

Which jurisdiction requires mandatory registration of all AI algorithms with a government authority?

The UK Approach

The UK has deliberately chosen a pro-innovation, sector-led approach. Rather than a single AI law, existing regulators (FCA, Ofcom, ICO, CMA) apply AI-specific guidance within their domains using five cross-cutting principles: safety, transparency, fairness, accountability, and contestability.

The UK AI Safety Institute (now the AI Security Institute) focuses on frontier model evaluation rather than broad regulation, positioning the UK as a testing ground rather than a rule-maker.

Copyright and AI Training

The most contentious legal battle in AI today: can you train models on copyrighted data?

Key cases shaping the answer:

  • Getty Images v Stability AI: whether training Stable Diffusion on Getty's image library without a licence infringes copyright
  • NYT v OpenAI: The New York Times alleges that ChatGPT reproduces its articles
  • Authors Guild v OpenAI: thousands of authors claim their books were used for training without consent

The outcome of these cases will fundamentally reshape how AI models are built. Some jurisdictions (Japan, Singapore) have created training exceptions. The EU AI Act requires transparency about training data sources. The US fair use doctrine remains untested at this scale.

🤔
Think about it: If courts rule that training on copyrighted data requires licensing, only companies wealthy enough to pay for training data could build foundation models. Would this consolidate AI power in a few corporations, or is it a necessary protection for creators?

Deepfake Regulation

Deepfakes pose unique regulatory challenges โ€” existing laws struggle to keep pace:

  • The EU AI Act mandates labelling of all AI-generated content
  • China requires watermarking of synthetic media
  • Several US states have criminalised non-consensual deepfake pornography
  • Election-specific deepfake laws are emerging globally (US, India, South Korea)

Technical solutions like C2PA (Coalition for Content Provenance and Authenticity), a promising standard backed by Adobe, Microsoft, and the BBC, embed cryptographic provenance data into media.
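C2PA itself uses X.509 certificate chains and a detailed manifest format; the sketch below is only a simplified illustration of the core idea: hash the content, assert its origin, and sign both so tampering with either is detectable. The HMAC key and field names here are stand-ins, not part of the C2PA specification.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustration only; C2PA uses certificate-based signatures

def make_manifest(media_bytes: bytes, tool: str) -> dict:
    """Build a simplified provenance manifest: a content hash plus an
    assertion about the generating tool, with an HMAC over both."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the signature matches the claim and that the claim's
    hash matches the media we actually have."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())
```

Editing the media invalidates the hash, and editing the manifest invalidates the signature; that two-sided binding is what makes provenance metadata more than a removable label.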

Compliance Checklist for AI Companies

If you are building AI systems, start here:

  1. Classify your system's risk tier under the EU AI Act
  2. Audit your training data for copyright issues, personal data, and bias
  3. Document everything: model cards, data sheets, impact assessments
  4. Implement human oversight for high-stakes decisions
  5. Build explanation capabilities into your pipeline from day one
  6. Monitor for discriminatory outcomes across protected characteristics
  7. Establish an incident response plan for AI failures
  8. Track regulatory developments; this landscape changes quarterly
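Item 3 of the checklist can start as code. A minimal model-card sketch follows; the field names are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model-card sketch covering fields the checklist implies:
    classification, intended use, data provenance, and oversight."""
    name: str
    risk_tier: str                  # e.g. "high" under the EU AI Act
    intended_use: str
    training_data_sources: list
    human_oversight: str            # how a human can review or override decisions
    fairness_metrics: dict = field(default_factory=dict)

    def to_json_dict(self) -> dict:
        """Serialise for the audit trail or an EU database registration."""
        return asdict(self)

# Usage: a card for a hypothetical hiring tool.
card = ModelCard(
    name="hiring-ranker",
    risk_tier="high",
    intended_use="Ranking CVs for human recruiters to review",
    training_data_sources=["internal ATS exports (2019-2024)"],
    human_oversight="A recruiter reviews every rejection before it is sent",
)
```

Starting documentation as a structured object rather than a wiki page makes it versionable alongside the model and trivially exportable when a regulator or conformity assessor asks for it.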
🧠 Quick Check

Under GDPR Article 22, what right do individuals have regarding automated AI decisions?

🤔
Think about it: You are the CTO of an AI startup launching a hiring tool in both the EU and the US. The EU AI Act classifies this as high-risk. How would you design your compliance strategy to satisfy EU requirements without slowing your US market launch?

📚 Further Reading

  • EU AI Act Full Text: complete text with annotations and implementation timeline
  • NIST AI Risk Management Framework: the US voluntary framework for AI risk governance
  • C2PA Technical Specification: the content provenance standard for combating deepfakes
Lesson 8 of 10
โ†Edge AIAI Infrastructureโ†’