Today’s pharmacovigilance (PV) teams handle more data than ever, faster than ever — and as volumes increase, complexity also grows. To make the best use of this data deluge, teams are turning to artificial intelligence (AI). AI is a natural fit for PV activities. First, it creates efficiencies by streamlining the high-repetition, labor-intensive tasks common to PV, making massive amounts of data more manageable and actionable. Through that work, AI technologies are also improving the quality of PV outcomes. Aided by AI, human decision-makers can detect safety signals earlier by analyzing far more data than could ever be handled manually.
Moving beyond automation
PV case processing, by necessity, requires extensive data entry. To report an adverse event (AE), PV staff must research the event, code it correctly, and convert the report into the various unique formats required by each regulatory agency.
This is essential but time-intensive work. We will always need human reviewers to make decisions and verify accuracy. Still, the need for additional quality reviews and subsequent intervention can be significantly reduced if we handle data more accurately from the outset. And that is where AI excels.
AE data comes from multiple sources. While some of it is structured, much of it arrives in narrative formats: comments from patient-care hotlines, physicians’ notes, and patient self-reports submitted via online forms. Traditional modes of automation don’t handle unstructured data well, especially colloquialisms, hyperbole, misspellings, and other difficult-to-interpret aspects of human communication. But this is where natural language processing (NLP) and machine learning shine.
In machine learning, computers recognize patterns and make predictions without specific programming to do so. And NLP, a domain of AI, allows computers to decode, contextualize, and interact with human language. NLP can be taught to understand, assess, and accurately structure unstructured data by using machine learning algorithms. And the more it interacts with your data, the more sophisticated its understanding becomes.
For example, consider a patient’s self-submitted report. In manual case processing, a human reviewer must interpret the notes and code them accurately. This takes time, especially if the report contains misspellings or describes an event for which terminology varies. AI, however, can use lemmatization, lexical expansion, and word embeddings to account for those variations, doing so faster and more consistently than a person could. Machine learning can even distinguish between shades of meaning when trained against nuanced data in a given therapeutic or product area. So, if a patient writes that a drug made her skin itchy, AI can learn that “itchiness” means a mild irritation, not a severe rash, and code the event with the correct MedDRA term and grade. And machine learning acquires knowledge best when working with large amounts of data — so data-rich PV activities help AI capabilities progress quickly.
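A toy sketch of this idea, using only Python’s standard library: fuzzy string matching absorbs misspellings and maps lay symptom words to MedDRA-style preferred terms. The mini-vocabulary and mappings here are invented for illustration; a real system would use the licensed MedDRA dictionary and a trained NLP model rather than simple string similarity.

```python
import difflib

# Hypothetical mini-vocabulary mapping lay terms to MedDRA-style
# preferred terms (PTs); a production system would use the full,
# licensed MedDRA dictionary and a trained NLP model.
LAY_TO_PT = {
    "itchiness": "Pruritus",
    "itchy": "Pruritus",
    "rash": "Rash",
    "headache": "Headache",
    "nausea": "Nausea",
}

def code_report(text: str) -> list[str]:
    """Map free-text symptom words to preferred terms, tolerating
    misspellings via fuzzy string matching."""
    terms = []
    for word in text.lower().split():
        # Fuzzy matching absorbs common misspellings ("itchyness").
        match = difflib.get_close_matches(word, list(LAY_TO_PT), n=1, cutoff=0.8)
        if match and LAY_TO_PT[match[0]] not in terms:
            terms.append(LAY_TO_PT[match[0]])
    return terms

print(code_report("The drug made my skin really itchyness and gave me a hedache"))
# → ['Pruritus', 'Headache']
```

Even this crude sketch recovers “Pruritus” from the misspelled “itchyness”; the NLP techniques named above do the same far more robustly, at the scale of whole case intakes.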
Opportunities to innovate
Let’s look more closely at the value of AI for a subset of case processing: literature review. In lit review, AI takes PV teams beyond simple automation. Rather than simply searching for specific terms, NLP and machine learning applications can learn to recognize the relationships between drugs and documented AEs and discern their relevance.
Medical literature review is one of the most time-intensive tasks of the safety process. The semi-quantitative analysis required to explore potential safety events may involve numerous sources across dozens of narratives, FDA drug labels, and regulatory documents, among others — all the credible documentation reviewers can find. Supported by NLP-based solutions, reviewers can screen citations and other sources for relevant information. To prioritize the most critical cases, AI can assess the likelihood of a safety event in any citation. It can automatically order full-text articles and translations of the most relevant articles, saving time and costs. At Parexel, we also build in centralized quality checks to ensure every applicable publication is identified and reviewed.
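As an illustration of this kind of citation triage, here is a deliberately naive keyword-weighting stand-in for an NLP relevance model. The cue words, weights, and abstracts are all invented for the example; a real model would learn these relationships from labeled safety data.

```python
# A deliberately naive stand-in for an NLP relevance model: weight
# cue words that suggest a drug-AE relationship and rank citation
# abstracts so reviewers see the most likely safety cases first.
# All cue words and weights below are invented for illustration.
SIGNAL_CUES = {
    "adverse": 3, "serious": 3, "hepatotoxicity": 4,
    "discontinued": 2, "reaction": 2, "well-tolerated": -2,
}

def relevance_score(abstract: str) -> int:
    """Sum the weights of every cue word present in the abstract."""
    text = abstract.lower()
    return sum(w for cue, w in SIGNAL_CUES.items() if cue in text)

def triage(citations: list[str]) -> list[str]:
    """Highest-scoring abstracts first; full-text ordering and
    translation would be triggered for the top of this list."""
    return sorted(citations, key=relevance_score, reverse=True)

abstracts = [
    "Drug X was well-tolerated in a 12-week trial.",
    "Case report: hepatotoxicity and adverse reaction after Drug X; "
    "therapy discontinued.",
]
for a in triage(abstracts):
    print(relevance_score(a), a)
```

The point of the sketch is the workflow, not the scoring: once every citation carries a likelihood estimate, reviewers can work from the top of the ranked list down.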
Because NLP can understand which areas of an article are pertinent to a specific AE or product, it points human reviewers directly to applicable information, which dramatically reduces human-capital costs in time-intensive efforts. For the downstream expectedness assessments that are part of medical review, we’ve set a minimum target of 20 percent time savings with 100 percent accuracy, depending on complexity.
But the benefits go beyond cost savings: NLP and machine learning also improve the quality of literature review end to end. Next, let’s look at some challenges of signal detection.
The first challenge of signal detection: how to process enormous volumes of data. Many PV teams tackle that through disproportionality analysis. Algorithms flag possible signals when a specific drug and a specific clinical event are linked and disproportionately represented in Spontaneous Reporting Systems (SRS) databases. However, this approach only works for structured data.
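Disproportionality analysis can be made concrete with the proportional reporting ratio (PRR), one of the standard statistics applied to SRS data. The counts in this sketch are hypothetical, chosen only to show the arithmetic.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio: [a/(a+b)] / [c/(c+d)], where
    a = reports with the drug AND the event, b = the drug without
    the event, c = the event with other drugs, d = other drugs and
    other events."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts for one drug-event pair in an SRS database.
score = prr(20, 980, 100, 98900)
print(round(score, 1))  # a PRR well above 2 is commonly flagged for review
```

Here the event appears in 2 percent of the drug’s reports but only about 0.1 percent of all other reports, so the pair is disproportionately represented and would be flagged as a possible signal.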
Using NLP and machine learning, computers can be taught to detect trends and correlate them with specified keywords and phrases. AI makes much better use of unstructured Observational Health Data (OHD), including electronic health records and patient registries, and allows human reviewers to analyze greater volumes of data from a broader range of sources.
AI can also learn to distinguish among signals by comparing them to underlying reported cases. In the instance of a drug-combination event, AI can determine whether this interaction is unknown — and, if so, prioritize it. Similarly, it can learn to detect whether a signal is an expected interaction for a specified population.
For lack of a better alternative, PV teams have relied on signal prioritization, focusing first (and sometimes primarily) on signals representing the most significant healthcare impact. But by helping cut through signal noise and reducing false positives, AI empowers PV teams to focus on the most appropriate data sets and evaluate a far greater number of probable signals.
And while there will always be a critical need for human expertise and decision-making, AI is helping to standardize signal detection by making the work less subjective. As a result, drug sponsors receive stronger signals sooner, enabling earlier detection and intervention.
Tap into the power of AI through FSP partnership
The power of AI lies in the knowledge it accumulates. That knowledge, which is specific to your organization, grows over time, becoming more sophisticated and more valuable as AI continues its work.
Because AI’s value is cumulative, we recommend it for long-term collaborations like FSP relationships and strategic partnerships. In the context of an extended partnership, we can harness AI’s full potential, ensuring the best possible return on investment.
It’s easy to assume that getting good results from AI requires just one good model — an algorithm you can put in place and step away from. While a model might produce stellar results in a lab setting for a single step of the PV workflow, no one algorithm will apply to every data set. Making a model work in the real world will require additional engineering, workflow customization, extensive data cleaning, and more.
That’s why, instead of relying on single models, we have developed an architecture to build and iterate on many models quickly. Our model architecture allows for rapid evolution. It also adjusts for a range of considerations including human interpretability, reduction of end-to-end false negatives, prioritization, integration with other systems, and the ability to suggest mappings to controlled vocabularies (such as MedDRA). And as a partnership progresses, these models can be even more finely tuned.
We also know our systems will be used and updated by physicians and nurses — not data scientists. This is why we design and implement machine learning-based systems for real world use within your organization with your input. Our team, which includes medical professionals with clinical research experience, will partner with your team to configure systems and create custom workflows.
Said simply, AI is not a quick fix. Our approach is to design solutions that have the flexibility to learn and evolve through ongoing engagements — and for long-term gains. Regardless of case volumes, the complexity of your workflows, or the breadth of your portfolio, AI is a powerful tool that can work for you, strengthening and streamlining your PV program.