The FDA has issued draft guidance on the use of artificial intelligence in drug development, marking a significant step towards regulating AI in the pharmaceutical industry. This guidance provides recommendations on how to use AI to support regulatory decision-making for drug and biological products.
The FDA's goal is to provide a framework for developing and using AI models that are trustworthy, transparent, and fair. To that end, the agency outlines a risk-based approach centered on establishing the credibility of AI models for their intended use in drug development.
Model credibility is a central theme of the guidance: the FDA emphasizes demonstrating that an AI model can be trusted for its intended use. The agency also stresses transparency and explainability in AI decision-making, so that stakeholders can understand how a model arrives at its conclusions.
The FDA is seeking public comment on the draft guidance through April 7, 2025. The agency's stated aim is to promote innovation in AI-assisted drug development while continuing to ensure the safety and efficacy of drug and biological products.