The European Commission has rejected calls to pause or delay the implementation of the EU's Artificial Intelligence Act (AI Act), despite pressure from major tech companies and industry leaders. The Commission has stated that the legislation will proceed exactly as planned, with no stop, pause, or grace period.
According to commission spokesperson Thomas Regnier, "There is no stop the clock. There is no grace period. There is no pause." The decision comes after more than 45 leading European companies, including Airbus, ASML, Lufthansa, Mercedes-Benz, Siemens Energy, and Mistral, urged the European Commission to "stop the clock" for at least two years on the AI Act's most stringent requirements.
The companies argue that the regulatory burdens imposed by the AI Act may stifle innovation and unfairly penalize smaller players who lack the legal and financial resources to navigate complex compliance frameworks. However, the European Commission has emphasized its commitment to implementing the AI Act as planned, with provisions taking effect according to the original timeline.
The AI Act, which regulates the development and use of artificial intelligence in the EU, takes a risk-based approach, categorizing AI systems according to their potential to cause harm. The Act outlines three primary categories: unacceptable risk, high risk, and limited risk. To gain access to the EU market, developers of high-risk AI applications will need to register their systems in an EU database and comply with risk and quality management obligations, data governance requirements, human oversight, cybersecurity measures, and transparency rules.
Key dates to watch include August 2, 2025, when obligations for providers of new general-purpose AI (GPAI) models take effect, and August 2026, when stricter rules for high-risk AI systems apply. Non-compliance can result in fines of up to 7% of a company's global annual revenue.