As of February 2, 2025, the European Union has started enforcing the AI Act, banning AI systems that pose an “unacceptable risk” to individuals or society. The act, which entered into force on August 1, 2024, sorts AI into four risk categories: minimal, limited, high, and unacceptable.
While minimal- and limited-risk AI faces little to no regulation, high-risk AI will be closely monitored, and unacceptable-risk AI is prohibited outright. The banned systems include AI used for social scoring, subliminal manipulation, biometric categorization of sensitive traits, and predicting criminal behavior based on a person's characteristics alone.
Real-time biometric surveillance in public spaces, emotion recognition in schools or workplaces, and untargeted scraping of facial images from the internet or CCTV to build facial recognition databases are also forbidden.
Companies violating these regulations, regardless of where they are based, face fines of up to €35 million or 7% of their worldwide annual turnover, whichever is higher.
However, penalties won’t take effect immediately. Legal experts note that full enforcement will begin in August 2025, when member states must designate their competent authorities and fines can start to be imposed. Ahead of the compliance deadline, more than 100 companies, including Google, Amazon, and OpenAI, voluntarily signed the EU AI Pact, committing to apply the AI Act’s principles early.
Notably, Meta, Apple, and Mistral did not sign, though they are still expected to comply with the regulations.
Some exemptions exist within the AI Act, particularly for law enforcement. Real-time biometric identification in public spaces may be used in cases such as locating abduction victims or preventing imminent threats, provided prior legal authorization is granted.
Additionally, emotion-detecting AI in workplaces and schools may be allowed for medical or safety reasons.
The European Commission plans to issue further guidance in early 2025, though concerns remain about how the AI Act will interact with existing rules such as the GDPR and EU cybersecurity legislation.