Quick answer
Governments around the world are rushing to regulate AI. The EU has the most comprehensive law (the EU AI Act). The US is taking a lighter-touch approach. China is strictly controlling AI content. Here is what each approach means in practice.
AI moved faster than governments expected. For years, regulation lagged far behind technology. But 2024 and 2025 changed that — major laws and executive orders started taking effect, and 2026 is the year many of them come into full force. Here is what is actually happening, in plain English.
The EU AI Act — the world's strictest AI law
The European Union passed the world's first comprehensive AI law in 2024. It classifies AI systems by risk level and applies different rules to each:
- Unacceptable risk (banned): Social scoring systems, real-time biometric surveillance in public, AI that manipulates people against their will
- High risk (strict rules): AI used in hiring, credit scoring, healthcare, education, law enforcement — these must be transparent, auditable, and human-supervised
- Limited risk (transparency required): Chatbots must tell you they are AI; deepfakes must be labelled
- Minimal risk (no restrictions): Most AI tools like spam filters, AI in games, and recommendation systems
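In code terms, the four-tier scheme is essentially a classification lookup. Here is an illustrative sketch; the use-case names and obligation summaries are paraphrases for this article, not the Act's legal definitions:

```python
# Sketch of the EU AI Act's four risk tiers as a lookup table.
# Use-case names and obligation text are illustrative paraphrases.

RISK_TIER = {
    "social_scoring": "unacceptable",   # banned outright
    "hiring_screening": "high",         # strict rules apply
    "customer_chatbot": "limited",      # transparency required
    "spam_filter": "minimal",           # no restrictions
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "transparency, auditability, and human oversight required",
    "limited": "must disclose AI use and label synthetic content",
    "minimal": "no specific obligations",
}

def obligations_for(use_case: str) -> str:
    """Look up the (paraphrased) obligations for a given AI use case."""
    return OBLIGATIONS[RISK_TIER[use_case]]

print(obligations_for("hiring_screening"))
# transparency, auditability, and human oversight required
```

The point of the tiered design is exactly this simplicity: once you know which tier a system falls into, the applicable rules follow mechanically.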
Companies that violate the EU AI Act face fines up to €35 million or 7% of global annual revenue — whichever is higher. Because the EU is such a large market, this law effectively applies to any global company serving European users.
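The "whichever is higher" rule means the cap scales with company size. A quick arithmetic sketch (illustrative only, not legal advice):

```python
# The EU AI Act's maximum fine: the higher of a flat EUR 35 million
# or 7% of global annual revenue.

def eu_ai_act_fine_cap(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine in euros (illustrative sketch)."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# For a company with EUR 1 billion in revenue, 7% (EUR 70M) exceeds the flat cap.
print(eu_ai_act_fine_cap(1_000_000_000))  # 70000000.0
```

For any company with global revenue above €500 million, the percentage-based cap dominates, which is why large platforms treat the Act as a board-level compliance issue.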
The United States — a patchwork approach
The US has taken a more fragmented approach. There is no single federal AI law. Instead, there are executive orders (which can be changed by the next president), sector-specific guidance from agencies like the FDA and FTC, and state-level laws — California has been the most active.
In 2025, the US focused on voluntary commitments from major AI companies (OpenAI, Google, Anthropic, Meta) around safety testing and watermarking AI-generated content. Critics say voluntary commitments are not enough; supporters say they are faster and more flexible than legislation.
China — strict content control
China has some of the world's most detailed AI content rules, particularly for generative AI. Chinese AI tools must ensure their outputs align with "socialist core values," cannot generate content that undermines state authority, and must register with regulators. This has shaped what Chinese AI tools like DeepSeek and Baidu ERNIE can and cannot say.
What this means for you
- AI tools you use will increasingly include disclaimers and transparency notices
- If your company uses AI for HR decisions, legal compliance requirements are growing fast
- AI-generated images and videos will need to be labelled in many jurisdictions
- Deepfakes used maliciously are becoming criminal offences in more places
- Your data rights around AI training are expanding — you can opt out in some regions
Practical note: If you work in HR, finance, healthcare, or law — or if you build products using AI — you should familiarise yourself with the EU AI Act even if you are not based in Europe. Global companies are building to its standard.
Bottom line
AI regulation is real and accelerating. The EU is setting the global standard through sheer market size. The US is catching up. The key question for businesses in 2026 is not whether AI regulation will affect them — it is whether they are ready.
