The European Union's Artificial Intelligence Act (AI Act) is facing delays in the development of Harmonized Standards, which are crucial for clarifying the Act's requirements for AI systems. These standards, being developed by the European Standardization Organizations (ESOs), are expected to provide guidance on risk assessment, cybersecurity, and other aspects of AI systems.
Given the complexity and scope of the task, the deadline for delivering these standards is likely to slip to the end of 2025, rather than the original target of April 30, 2025. The AI Act sets out numerous requirements for AI systems in only general terms, including risk assessments and cybersecurity specifications, and without detailed standards these requirements pose compliance challenges for companies.
Companies that comply with these Harmonized Standards will benefit from a "presumption of conformity": they will be presumed to meet the corresponding requirements of the AI Act unless there is evidence of nonconformity. Most of the AI Act's provisions will apply from August 2, 2026, with some applying sooner and others later.
The ESOs are reviewing existing standards that could support compliance with the AI Act and using them as a basis for the new Harmonized Standards. A Code of Practice is also being developed to flesh out the rules for general-purpose AI models, which become applicable on August 2, 2025. Certain high-risk AI systems, namely those embedded in products covered by existing EU product-safety legislation, will have an extended transition period until August 2, 2027, to comply with the requirements.