The EU's Artificial Intelligence Act has hit a major sticking point: how to regulate upstream AI model makers. Some negotiators argue for stricter rules on these providers, while others push for more flexibility.
The European Parliament wants to regulate foundation models — general-purpose AI models on which specific applications are built. Critics counter that this approach could stifle innovation and undermine the competitiveness of European startups. French startup Mistral AI, for instance, argues the Act should focus on product safety alone, while others maintain it must also address the systemic risks posed by the models themselves.
A tiered approach to regulation, which would impose different obligations depending on a model's scale and capabilities, is also under debate. Proponents argue this would secure compliance and assurance from providers of large-scale foundation models while placing a lighter burden on smaller ones. The Ada Lovelace Institute advocates such a tiered approach, arguing it would strike a balance between regulation and innovation.
Startups are worried about the added compliance burden and the prospect of fines of up to €30m or 6% of total worldwide annual turnover. According to one survey, 73% of venture capitalists expect the AI Act to reduce, or significantly reduce, the competitiveness of European AI startups. To soften the impact, the EU is introducing AI regulatory sandboxes that will let startups test their solutions in a controlled environment.
Mistral AI's CEO argues that hard law on product safety would push model makers to compete on offering safe and trustworthy models. Others worry that overregulation could instead stifle innovation and drive European startups to other jurisdictions. The dispute underscores how difficult AI is to regulate, and the need for an approach that promotes innovation while ensuring safety and trustworthiness.