LONDON — The European Union’s world-first artificial intelligence law formally took effect on Thursday, marking the latest milestone in the bloc’s efforts to regulate the technology.
Officials say the Artificial Intelligence Act will protect the “fundamental rights” of citizens in the 27-nation bloc while also encouraging investment and innovation in the booming AI industry.
Years in the making, the AI Act is a comprehensive rulebook for governing AI in Europe, but it could also act as a guidepost for other governments still scrambling to draw up guardrails for the rapidly advancing technology.
The AI Act covers any product or service offered in the EU that uses artificial intelligence, whether it’s a platform from a Silicon Valley tech giant or a local startup. The restrictions are based on four levels of risk, and the vast majority of AI systems are expected to fall under the low-risk category, such as content recommendation systems or spam filters.
“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” European Commission Executive Vice President Margrethe Vestager said. “With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”
The provisions will come into force in stages, and Thursday’s implementation date starts the countdown for when they’ll kick in over the next few years.
AI systems that pose “unacceptable risk,” such as social scoring systems that influence how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces, will face a blanket ban by February 2025.
Rules covering so-called general-purpose AI models like OpenAI’s GPT-4 system will take effect by August 2025.
Brussels is setting up a new AI Office that will act as the bloc’s enforcer for the general-purpose AI rules.
OpenAI said in a blog post that it’s “committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented.”
By mid-2026, the complete set of regulations, including restrictions on high-risk AI such as systems that decide who gets a loan or that operate autonomous robots, will be in force.
There’s also a fourth category for AI systems that pose limited risk, which face transparency obligations: people using chatbots must be informed that they’re interacting with a machine, and AI-generated content such as deepfakes will need to be labeled.
Companies that don’t comply with the rules face fines of up to 7% of their annual global revenue.