Regulatory acceptability of AI: Current perspectives

This blog is part of The Regulatory Navigator series, where we explore the evolving regulatory landscape with actionable insight from Parexel's experts, sharing their experience to maximize success for clinical development and patient access.


Artificial intelligence (AI) is not new: the term itself was first used in the 1950s. There is no doubt, though, that it was the emergence of OpenAI’s ChatGPT in 2022 that delivered AI firmly into the spotlight. As we explore how best to harness this technology in pursuit of drug discovery and development, it’s critical to consider the regulatory position of global health authorities.  

Data published in 2023 by representatives of the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER)1 highlighted the number of submissions the FDA has received relating to the use of AI in the context of drug review. Submissions including AI or machine learning (ML) components made during the clinical development stage form the vast majority, with 140 in total from 2016 to 2021. Compared with previous years, 2021 was exceptional, accounting for 84% (132) of submissions across all stages in the same period.

Given the subsequent emergence of large language models (LLMs) and other foundation models (FMs) – handling image, voice, and video – it seems likely that this increase will be sustained (and significantly exceeded) in the years to come. 

Perhaps responding to this evident inflection in AI technology advancement, the world’s major regulatory agencies moved to share their perspectives – and the issues they are thinking deeply about – in papers published in 2023. The European Medicines Agency (EMA) set out its current thinking in a draft reflection paper2, while the FDA issued a discussion paper3 posing a series of questions for stakeholder feedback on the use of AI and ML across the drug development continuum.

Learnings on current perspectives

So, what have we learned about the current perspectives of these agencies? Most importantly, that sponsors should seek regulatory guidance whenever an application has the potential to create risk to patients – and should do so early.

Acting early is a theme throughout their perspectives. When an AI/ML system is intended for use as part of a program subject to regulatory review, both EMA and FDA strongly encourage sponsors to seek regulatory advice during the design process. Acknowledging the many complexities and choices that need to be made, they make clear that early engagement will smooth the subsequent journey. They also signal clearly that developers of AI systems will need to document each step very carefully across development, deployment, and in-use monitoring.

On the other hand, both FDA and EMA also recognize areas of application within drug development where the safety risk (i.e., adverse consequences) to patients is low or nonexistent, meaning that the potential regulatory concern and the extent of regulators’ scrutiny will be proportionately lower. For example, AI-enabled business decision support seems likely to be viewed as acceptable in many, perhaps most, cases. Similarly, implementations of AI-enabled business-efficiency systems supported by well-considered human oversight are likely to be acceptable without undue concern, so long as these form part of a carefully controlled, validated, and auditable process.

Participants in the drug development process think globally and look to regulators to do likewise. Given the relatively immature state of the regulatory framework guiding the development of AI/ML-based systems, however, it may be some years before the ICH (International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use) issues formal guidelines specifically on this topic.

Despite this, there are helpful signposts that can guide us. Both EMA and FDA have signaled that we can expect an approach to regulation based on an assessment of risk to the patient: the greater the potential patient risk, the greater the level of scrutiny. The EMA and FDA papers also provide insight into the aspects of AI system development that sponsors should expect to be carefully interrogated. But equally, the papers indicate that there are multiple opportunities for application of AI technologies that will be reviewed with a lighter touch, particularly when supported by well-considered procedures for human oversight.

Beyond the regulators, other industry bodies and medical organizations are already making public statements on what they consider to be a responsible approach to AI. These include the Association of Clinical Research Organizations (ACRO)4 (focused on developers of AI solutions, with a useful appendix of reference sources likely to be of interest to AI developers) and the American Medical Association (AMA)5 (focused on AI used in healthcare settings), among others. Parexel has also published principles6 that guide our AI roadmap, ensuring solutions are conceived, developed, and deployed responsibly.

For a more in-depth review, see Parexel’s summary and assessment posts on the FDA discussion paper and the EMA reflection paper.


References

1. Liu Q, et al. Landscape analysis of the application of artificial intelligence and machine learning in regulatory submissions for drug development from 2016 to 2021. Clinical Pharmacology & Therapeutics. 2023;113(4).

2. European Medicines Agency. Reflection paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle. EMA/CHMP/CVMP/83833/2023; July 2023.

3. US Food & Drug Administration. Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products: Discussion Paper and Request for Feedback. 2023.

4. ACRO. Principles for Responsible AI. December 2023.

5. American Medical Association. Principles for Augmented Intelligence Development, Deployment, and Use. November 2023.

6. Parexel. Principles for artificial intelligence (AI). March 2024.
