The European Union’s Artificial Intelligence Act, known as the EU AI Act, was described by the European Commission as “the world’s first comprehensive AI law.” After years in the making, it is gradually becoming part of reality for the 450 million people living in the 27 countries that make up the EU.
The EU AI Act, however, is more than a European issue. It applies to companies both local and foreign, and it can affect both providers and deployers of AI systems; the European Commission cites the example of a developer of a CV-screening tool and a bank that buys that tool. Now all of these parties have a legal framework that sets the stage for their use of AI.
Why is there an EU AI Act?
As is often the case with EU law, the EU AI Act exists to ensure that a uniform legal framework applies to a given topic across EU countries; the topic this time is AI. Now that the regulation is in place, it should “ensure the free movement, cross-border, of AI-based goods and services” without diverging local restrictions.
With uniform regulation, the EU seeks to create a level playing field across the region and to foster trust, which could also create opportunities for emerging companies. However, the common framework it has adopted is hardly permissive: despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn’t do for society more broadly.
What is the purpose of the EU AI Act?
For European lawmakers, the main purpose of the framework is to “promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems.”
Yes, that’s quite a mouthful, but it is worth parsing carefully. First, because a lot will depend on how you define “human-centric” and “trustworthy” AI. And second, because it gives a good sense of the precarious balance to maintain between diverging goals: innovation vs. harm prevention, as well as AI adoption vs. environmental protection. As usual with EU law, the devil will be in the details.
How does the EU AI Act balance its different goals?
To balance harm prevention against the potential benefits of AI, the EU AI Act adopted a risk-based approach: banning a handful of “unacceptable risk” use cases; flagging a set of “high risk” uses that call for tight regulation; and applying lighter obligations to “limited risk” scenarios.
Has the EU AI Act come into force?
Yes and no. The EU AI Act rollout started on August 1, 2024, but it will only take effect through a series of staggered compliance deadlines. In most cases, it will also apply sooner to new entrants than to companies that already offer AI products and services in the EU.
The first deadline came into force on February 2, 2025, and focused on enforcing bans on a small number of prohibited uses of AI, such as untargeted scraping of the internet or CCTV footage for facial images to build or expand databases. Many others will follow, but unless the schedule changes, most provisions will apply by mid-2026.
What changed on August 2, 2025?
Since August 2, 2025, the EU AI Act has applied to “general-purpose AI models with systemic risk.”
GPAI models are AI models trained on large amounts of data that can be used for a wide range of tasks. That’s where the risk element comes in. According to the EU AI Act, GPAI models can carry systemic risks, “for example, through the lowering of barriers for chemical or biological weapons development, or unintended issues of control over autonomous [GPAI] models.”
Ahead of the deadline, the EU published guidelines for providers of GPAI models, which include both European companies and non-European players such as Anthropic, Google, Meta, and OpenAI. But since these companies already have models on the market, they will have until August 2, 2027, to comply, unlike new entrants.
Does the EU AI Act have teeth?
The EU AI Act comes with penalties that lawmakers wanted to be at once “effective, proportionate and dissuasive”, even for large global players.
Details will be set by individual EU countries, but the regulation lays out the general spirit, namely that penalties will vary depending on the deemed risk level, as well as thresholds for each level. Infringement on prohibited AI applications carries the highest penalty: “up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher).”
The European Commission can also impose fines of up to €15 million or 3% of annual turnover on providers of GPAI models.
How willing are existing players to comply?
The voluntary GPAI code of practice, which includes commitments such as not training models on pirated content, is a good indicator of how companies may engage with the framework law until they are forced to do so.
In July 2025, Meta announced it would not sign the voluntary GPAI code of practice meant to help such providers comply with the EU AI Act. Shortly after, however, Google confirmed that it would sign, despite reservations.
Signatories so far include Aleph Alpha, Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, and OpenAI, among others. But as Google’s example shows, signing does not equal full endorsement.
Why are (some) tech companies opposed to these rules?
While stating in a blog post that Google would sign the voluntary GPAI code of practice, its president of global affairs, Kent Walker, still had reservations. “We remain concerned that the AI Act and Code risk slowing Europe’s development and deployment of AI,” he wrote.
Meta was more radical, with its chief global affairs officer, Joel Kaplan, stating in a post on LinkedIn that “Europe is heading down the wrong path on AI.” Calling the EU’s implementation of the AI Act “over-reach,” he stated that the code of practice “introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
European companies have also expressed concerns. Arthur Mensch, the CEO of French AI champion Mistral AI, was part of a group of European CEOs who signed an open letter in July 2025 urging Brussels to “stop the clock” for two years before key obligations of the EU AI Act came into force.
Will the schedule change?
In early July 2025, the European Union responded negatively to lobbying efforts calling for a pause, saying it would stick to its timeline for implementing the EU AI Act. It went ahead with the August 2, 2025, deadline as planned, and we will update this story if anything changes.