
BIOPHARMACEUTICALS

by Alexis Elmore

Global biopharmaceutical companies look to leverage AI tools to streamline safe and effective drug development.
Photo: Getty Images

Will AI Help Save Lives?

Many biopharmaceutical manufacturers are exploring how generative artificial intelligence (AI) advancements can bring their respective operations to new levels. A December 2024 paper by Kampanart Huanbutta and co-authors published in the European Journal of Pharmaceutical Sciences reported, “Over a thousand research articles and reviews have been published in the last five years on the use of AI in pharmaceutical applications.” The FDA in January reported that its Center for Drug Evaluation and Research (CDER) “has seen a significant increase in the number of drug application submissions using AI components over the past few years.”

In the past two years, global life sciences companies Amgen and Moderna have announced how they will be moving forward with AI integration. Moderna partnered with OpenAI to create its own version of ChatGPT, named mChat, and later adopted ChatGPT Enterprise. Since mChat's 2023 launch, the company has seen over 80% internal adoption, introducing capabilities like advanced analytics, image generation and GPTs (Generative Pre-trained Transformers), which are now embedded throughout Moderna's business functions. The AI tools act as assistants to the company's researchers as they address complex challenges in developing mRNA medicines for patients.

In 2024, Moderna reported deploying over 750 GPTs across its operations to drive automation and productivity. For example, the Dose ID GPT uses an advanced data analytics feature from ChatGPT Enterprise to help clinical study teams calculate a product's optimal vaccine dose by applying standard dose selection criteria and principles. The AI tool provides researchers with a rationale and source references and generates informative charts to illustrate its key findings. With AI's input, Moderna's team can thus conduct a detailed review of a vaccine dose profile before a product moves into late-stage clinical trials.

Because protein drug development is a long, challenging and costly process, Amgen is pursuing generative biology, which combines AI and machine learning (ML) with laboratory innovations in biology to make medicines more quickly and effectively. Amgen uses data collected on a protein's sequence, structure and function to train ML algorithms that design drug candidates; automated, high-throughput lab platforms then evaluate those candidates, generating more data to refine the ML models. This approach has allowed computer models to identify complex patterns in protein sequences, generate new protein designs and predict how a protein or antibody drug will behave in the body much earlier in the drug development process. To make its ML models more effective, Amgen is exploring federated learning, a data-sharing model that pools global protein research data while helping protect each company's proprietary information.

“AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.”

NVIDIA, announcing new life sciences and health care partnerships to accelerate drug discovery, enhance genomic research and pioneer advanced health care services, January 13, 2025

Global law firm Arnold & Porter conducted a survey of 100 senior executives and department heads from biopharmaceutical, digital health, diagnostics and medical device companies for its “The Convergence of Life Sciences and Artificial Intelligence: Seizing Opportunities While Managing Risk” report. The report notes that AI adoption is reaching new heights in areas such as product discovery and development, while detailing risks with regard to data privacy, cybersecurity and intellectual property. Arnold & Porter’s Global Life Sciences Industry Partner and Chair Dan Kracov shares insight with Site Selection about the report’s key findings.

Site Selection: Of the companies surveyed, was it at all surprising to see that the majority are just now beginning to make plans to integrate AI into their operations?

Kracov: Not really, I think for two reasons. There are so many new AI tools being shopped or marketed to them that they really need time to figure out "How do we integrate these types of tools into our systems? How do we govern them within the company?" It's kind of natural that they would start on early-stage discovery and early-stage development as primary areas rather than commercialization. Commercialization is the most regulated part of the drug development process, so it's faster to integrate AI into the discovery and early preclinical stages.

R&D is a top choice for companies looking to explore AI applications. With so much gray area, how would you weigh the ethics of its use in this regard?

Kracov: There are certainly going to be ethical issues. A lot of them pertain to issues such as privacy and influencing clinical decision-making, particularly in the use of digital tools in the marketing of drugs and so forth. Companies in this industry deal with life-or-death and ethical issues all the time. They operate under good clinical practices in the highly regulated realm, particularly when it comes to clinical development. I think a lot of the controls are in place to engage with AI ethically, but there has to be governance to understand the AI.

Patients will likely receive diagnoses through AI imaging and digital tools rather than traditional practices. This is not without risk. From a legal standpoint, what will be the biggest challenge for companies associated with this level of innovation?

Kracov: One of the things that's important for manufacturers is maintaining the learned intermediary between them and the patient. While they want to be able to interact with patients, ultimately it's important that the health care practitioner [HCP] is in the middle. The line that the industry needs to walk is ensuring they're not accused of unduly influencing the clinical decision because of the AI tools they may be using. Anytime you're dealing with HCPs and trying to give them information or help them make clinical decisions, you need to be extremely careful to make sure the physician understands the technology and that you're not taking the learned intermediary out of the loop. Because from a liability perspective that could be very problematic for the industry.

The report reveals a lack of consensus among surveyed companies on how to comply with diverse regulatory requirements. Is it important for the industry to find a one-size-fits-all strategy in developing and monitoring new AI policies?

Kracov: It tends to evolve. Over time, companies adopt different approaches, and then, as there is enforcement and a better understanding of the technology, the types of policies companies adopt tend to converge kind of naturally. There are lessons learned for the industry from what the government says, from enforcement activity or from their experience with the technology. It will take time to develop a good governance framework and to adapt companies' compliance programs to ensure that the special risks of AI are considered.

On a global scale, what do you see as being the greatest advantage, and on the other hand greatest risk, to implementing AI within life sciences in comparison to other industries Arnold & Porter examines?

“AI holds a lot of promise to be able to discover, develop and bring new treatments to patients faster.”

Dan Kracov, Global Life Sciences Industry Partner and Chair, Arnold & Porter

Kracov: I think the greatest potential advantage is speeding up the development of life-saving therapies. AI holds a lot of promise to be able to discover, develop and bring new treatments to patients faster. Even though AI is expensive, a central focus for all of these companies is how they can bring costs down while getting through the development process much quicker. The risks are really around controlling the AI, so that you understand what it is doing and that it doesn't affect the development of the product in a way that could create legal liability as well as ethical problems. When it comes to digital tools, companies and the FDA are focused on understanding the limits of AI use as it's currently configured, and on making sure, every time you make changes, that you have analyzed and controlled them appropriately to incorporate new uses or approaches.