The regulation of clinical trials that involve artificial intelligence (AI) is complex and evolving rapidly. In the European Union, the Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) introduces rules intended to ensure that AI systems used in clinical trials are safe, transparent, and respectful of fundamental rights.
The AI Act sorts AI applications into four risk tiers: unacceptable, high, limited, and minimal. AI systems used in clinical trials, such as medical image analysis tools or models that generate synthetic control arms, will in most cases fall into the high-risk tier, particularly where they qualify as, or are embedded in, regulated medical devices. High-risk systems must meet stringent requirements covering risk management, data governance, technical documentation and record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.
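To make these obligations concrete, here is a minimal Python sketch of how a sponsor might track a trial AI system against the high-risk obligations listed above. The class, the obligation strings, and the document references are hypothetical illustrations, not an official checklist or a prescribed compliance tool:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. AI in regulated medical devices
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional obligations

# High-risk obligations, paraphrased from the Act for illustration.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "data governance",
    "technical documentation and record-keeping",
    "transparency to deployers",
    "human oversight",
    "accuracy, robustness, cybersecurity",
]


@dataclass
class TrialAISystem:
    """Hypothetical record for one AI system used in a trial."""
    name: str
    risk_level: RiskLevel
    # Maps each obligation to a reference for its supporting evidence.
    evidence: dict = field(default_factory=dict)

    def open_obligations(self):
        """Return high-risk obligations with no evidence attached yet."""
        if self.risk_level is not RiskLevel.HIGH:
            return []
        return [o for o in HIGH_RISK_OBLIGATIONS if o not in self.evidence]


imaging_model = TrialAISystem("lesion-segmentation-v2", RiskLevel.HIGH)
imaging_model.evidence["data governance"] = "SOP-DG-014"
print(imaging_model.open_obligations())  # five obligations still open
```

Modeling the obligations as explicit data rather than prose makes gaps auditable: at any point in the trial, the open items and the evidence behind the closed ones can be listed mechanically.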
The AI Act requires that high-risk AI systems be designed for effective human oversight and that they be validated and documented so their accuracy and reliability can be demonstrated. They must also be monitored after deployment to confirm that performance does not degrade in use. Together, these requirements aim to keep AI systems used in clinical trials safe, effective, and respectful of participants' rights.
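The monitoring and oversight requirements can also be sketched in code. The following is one possible shape for a periodic performance check, assuming batches of adjudicated ground-truth labels are available; the function, the 0.90 threshold, and the escalation path are assumptions for illustration, and in practice the threshold would come from the system's validation documentation:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("trial-ai-monitor")

# Hypothetical acceptance threshold, taken from the validation report.
MIN_ACCURACY = 0.90


def review_batch(batch_id, predictions, ground_truth):
    """Compare model output against adjudicated labels for one batch.

    Returns True if the batch passes, False if it was escalated
    to a human reviewer.
    """
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(ground_truth)

    # Record-keeping: every check is logged with a timestamp so the
    # monitoring history is auditable.
    log.info("batch=%s accuracy=%.3f checked_at=%s",
             batch_id, accuracy,
             datetime.now(timezone.utc).isoformat())

    if accuracy < MIN_ACCURACY:
        # Human oversight: degraded performance is escalated to a
        # clinical reviewer, never silently auto-corrected.
        log.warning("batch=%s below threshold %.2f; escalating",
                    batch_id, MIN_ACCURACY)
        return False
    return True


# This batch scores 0.75 and is therefore escalated.
review_batch("2024-W18", predictions=[1, 0, 1, 1], ground_truth=[1, 0, 0, 1])
```

The key design choice here is that the system flags and logs, while a human decides: automated checks detect drift, but any consequential action on a degraded model remains with the oversight personnel the Act requires.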
However, implementing these requirements will likely pose significant challenges for sponsors, contract research organizations (CROs), and clinical sites. As the regulatory landscape continues to evolve, stakeholders will need to stay informed and adapt to the changing requirements.