Guest Column | May 30, 2024

Sorting Through Winners And Losers Of New Technology In Pharma

By Charlie Wakeham, WakeUp to Quality


Innovation and improvement are often perceived as synonymous, but sometimes innovation can be more of a hindrance than a help. A new technology may be the wrong solution for a problem or the right solution at the wrong time; for example, a Sinclair C5 [1] could look quite appealing among the modern lineup of electric cars, scooters, and bicycles but it certainly did not meet any customer need back in the 1980s when it launched.

This article looks at innovation through the use of computerized systems and new technologies in pharmaceutical development and manufacturing. It provides a pragmatic view on the winners and losers with insights about determining whether you need these solutions or should skip the hassle.

Let’s Start At The Beginning

In the beginning, industry and the regulators started out with a bias against computerized systems, with the 1991 version of EU GMP Annex 11 [2] on Computerized Systems stating “Where a computerised system replaces a manual operation, there should be no resultant decrease in product quality or quality assurance. Consideration should be given to the risk of losing aspects of the previous system which could result from reducing the involvement of operators.”

The implication from this Annex 11 extract is that patient safety was well protected by the independent activities of operators and that replacing that human touch with a cold logic machine could be detrimental. In simple terms in the 1990s: humans good, computers potentially bad.

Over time, that paradigm has reversed. Computerized systems are now seen as more reliable and trustworthy, with humans viewed as a source of variability and uncertainty in what should be a stable process. Computerized systems are now inherent in every aspect of bioprocessing, from controlling the seed bioreactors and production perfusion bioreactors through the downstream diafiltration and concentration, sterile filtration and packaging, and, of course, throughout QC testing.

There were a few bumps in the road for computerized systems with the advent of 21 CFR Part 11 [3], Electronic Records; Electronic Signatures, in 1997, coincidentally the same year that IBM’s Deep Blue chess-playing computer defeated champion Garry Kasparov, thus demonstrating just how far computers had evolved from automated calculators. Once industry detangled itself from the Part 11 regulation, helped by FDA’s retraction of its convoluted and confusing draft Guidance for Industry series on the topic and its replacement with the much more usable Guidance for Industry Part 11, Electronic Records; Electronic Signatures — Scope and Application [4], it resumed the implementation and adoption of innovative computerized solutions.

In time, the irrefutable benefits of knowledgably implemented and competently managed computerized systems — speed, reliability, consistency, availability, and integrity of data, to name but a few — cemented computerized systems’ place in our industry. But computerized systems encompass everything from an intelligent instrument to a global enterprise resource planning system, and not all the innovations along the way have delivered on their promises. Let’s take a look at some of the winners and losers in advanced pharmaceutical manufacturing technology. We’ll also explore some emerging technology that hasn’t proven its value yet.

RFID’s Supply Chain Sizzle, Pharma Fizzle

In the early 2000s, I was involved in a research project to evaluate the feasibility of using RFID tags to track and manage sterilizing-grade filters used in aseptic processing. In other industries, namely consumer packaged goods, the technology for supply chain management was gaining popularity, and it also led to some cool products. Hasbro used RFID technology to make plastic Star Wars figurines speak when placed on different bases (I’m an engineer and proud to be a geek).

But RFID signals were too easily blocked by packaging, etc., for the idea to succeed for our intended pharmaceutical use cases. Of course, now RFID is ubiquitous and used extensively in the payments industry.

I still count RFID as a failure for life sciences applications.

It’s Better In The Cloud

When cloud computing first became a “thing,” it impinged on public awareness chiefly as an insecure storage location, after nude photographs of celebrities were hacked and leaked to the press.

Not the most auspicious start for an innovation, but over the years, the technology has matured dramatically. With the sophisticated controls available through providers such as AWS and Azure, and the use of infrastructure as code (IaC) for fast and consistent provisioning, cloud-hosted and cloud-native applications now offer significant benefits over on-premises software in terms of availability and disaster recovery.
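To make the IaC idea concrete, here is a minimal sketch in Python (the bucket and resource names are hypothetical illustrations, not from any real deployment): the environment definition is generated by code, so it can be version-controlled, reviewed, and re-applied identically across environments rather than hand-configured through a console.

```python
import json


def make_s3_bucket_template(bucket_name: str, versioning: bool = True) -> str:
    """Build a minimal CloudFormation-style template as a JSON string.

    Because the infrastructure definition is code, every environment is
    provisioned from the same reviewable, repeatable source.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            # Hypothetical resource name for illustration only
            "ValidationDataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {
                        "Status": "Enabled" if versioning else "Suspended"
                    },
                },
            }
        },
    }
    return json.dumps(template, indent=2)


print(make_s3_bucket_template("gxp-validation-records"))
```

In practice a dedicated IaC tool (CloudFormation, Terraform, or similar) would consume and apply such a definition; the point here is only the principle of declarative, consistent provisioning.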

It’s a win for cloud.

Blockchain’s Slow Burn

Blockchain feels like a solution looking for a problem. The use of distributed ledgers sounds like a panacea for resolving the problem of multiple disparate databases across a complex supply chain.

Indeed, organizations such as the PharmaLedger Association [5] have worked enthusiastically for years to convince life sciences companies that blockchain provides superior supply chain traceability for raw materials and for distribution of finished drug products and, most recently, a platform for decentralized clinical trials.

Blockchain may still succeed, but it’s a slow burn.

Automate Testing And Buy Back The Hours

Automated test tools were the ultimate dream for those of us tasked with validating computerized systems. With the click of a mouse, hours-long test cases would run with no operator intervention. At first, we forgot to account for the time and resource investment needed to:

  • select and implement the automated test tool,
  • document and prove its adequacy, and
  • create those hours-long test cases, which take significantly more time to build than manual test cases take to write and execute.

Despite the steep learning curve, automated test tools are a resounding win. Why? Because they give us the ability to consume frequent software updates without a crippling validation burden. The key is to identify which test cases to automate, namely those that will verify commonly used functionality supporting the system’s routine operation for its intended use. These cases can then be run as regression testing as often as needed, typically after a substantial infrastructure change or an application update.

This gives us an approach for continual, repeated verification for little effort after the initial creation. Without automated test tools, we’d be stuck in the dark days of trying to “freeze” a computerized system after validation because the pain of updates was simply too great.
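The selection logic described above — automate the routine, high-impact cases and rerun them as regression tests — can be sketched as follows (the case names, risk labels, and threshold are hypothetical illustrations, not drawn from any guidance):

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    risk: str               # "high", "medium", "low": impact on intended use
    runs_per_release: int   # how often this functionality is re-verified


def select_for_automation(cases, min_runs=3):
    """Pick the cases worth automating: high-risk functionality that is
    re-verified often enough for automation to pay back its build cost."""
    return [c for c in cases if c.risk == "high" and c.runs_per_release >= min_runs]


suite = [
    TestCase("login_and_audit_trail", "high", 12),
    TestCase("batch_record_approval", "high", 6),
    TestCase("one_off_migration_check", "low", 1),
]

print([c.name for c in select_for_automation(suite)])
# → ['login_and_audit_trail', 'batch_record_approval']
```

The one-off, low-risk check stays manual: the point is that automation effort goes where repeated execution earns it back.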

Automated test tools also have become essential to computer software assurance (CSA) approaches, based on the FDA’s eponymous draft guidance [6]. CSA is not a technological innovation but rather a reassessment of validation methods with a renewed emphasis on effective risk management throughout a computerized system’s life cycle — from planning through implementation, operation, and ultimately retirement. The patient-centric focus suggested within the CSA guidance for managing computerized systems is expanded and fully explained as a flexible and adaptive framework in the Second Edition of the ISPE GAMP 5 guide, A Risk-Based Approach to Compliant GxP Computerized Systems [7], which actively supports and encourages the adoption of innovative technologies and critical thinking.

CSA gives us the courage to embrace change, offering a big win to our traditionally very conservative industry.

Digital Validation’s Moot Benefit

Beyond automated testing, there is some hype and debate around digital validation tools. Such tools provide good electronic information management and electronic approval capabilities, but they do not fundamentally change the approach to validation. The same mechanisms may be achieved with commonly used, shared repositories and signature technologies. Digital validation tools do not warrant validation themselves, but they do require basic controls and assessment for adequacy.

Some vendor claims seem somewhat overstated or at least unproven. Enjoy the efficiency gains compared to paper-focused validation deliverables but beware that these tools do not change the validation paradigm in any way.

AI’s Promise And Risk

Artificial intelligence and its subset machine learning (collectively AI/ML) are in every conference, article, and discussion now, and rightly so. Their potential seems limitless, and there are so many areas — drug development, process control, infrastructure management, data analytics, etc. — that could benefit.

The ISPE GAMP community and in particular the GAMP AI Special Interest Group have done sterling work establishing a practical approach to validating and managing AI/ML and there is a dedicated ISPE AI community forming to address the wider use and implications of AI, all of which are helping to move this innovation into the mainstream.

I’m no technophobe, but I worry deeply when I encounter industry professionals promoting the use of generative AI in validation activities. I sat through a particularly painful presentation last year expounding generative AI as the solution to writing test scripts. The AI tool was trained using published guidance (some in breach of copyright), previously created test cases, and the system’s operating manual. Based on the training data used, the tool is highly likely to produce an excellent set of test scripts that will test the accuracy of the operating manual.

  • Will it challenge the intended use of the system based on risk?
  • Will it apply a validation strategy to leverage vendor testing and unscripted testing as well as scripted test cases?
  • Will it use critical thinking?

No to all the above.

I see AI/ML as a win when used intelligently and a failure when used as a cheap substitute for knowledge and experience.

Enabling Innovation

Innovation is not a guarantee of success, but continuing to repeat the same activities and approaches will not bring about continual improvement in our organizations and our industry. Fear of being an early adopter, of taking that leap into the unknown, should not be a barrier to innovation. GAMP 5 Second Edition provides a generic framework for the controlled implementation of any new computerized system, and the regulators have repeatedly indicated their encouragement and support for innovation.

Whatever your innovation journey, support is available from like-minded professionals within industry associations such as ISPE, from the vendors and providers themselves, and from subject matter experts, many of whom offer specialist consultancy services in this area. Only you can make the decision to embark on the journey, but you do not need to travel alone.

Conclusion

While in most areas computerized systems have helped industry move forward, sometimes what has been billed as the next miracle innovation has fallen short on its promises. Too often, the discussion is “what can the new technology be used for?”, when in reality the discussion should start with “what is the problem we are trying to solve?”. Better by far to define the problem, evaluate current and innovative technologies, and then use critical thinking to select the solution most likely to resolve the problem.

The modern viewpoint for life sciences: humans good, and with the right controls, computers boost their capabilities in most applications.

References:

  1. Sinclair C5: https://en.wikipedia.org/wiki/Sinclair_C5
  2. EudraLex Volume 4, Annex 11, Computerised Systems (2011 version): https://health.ec.europa.eu/system/files/2016-11/annex11_01-2011_en_0.pdf
  3. 21 CFR Part 11, Electronic Records; Electronic Signatures: https://www.ecfr.gov/current/title-21/chapter-I/subchapter-A/part-11
  4. FDA Guidance for Industry, Part 11, Electronic Records; Electronic Signatures — Scope and Application: https://www.fda.gov/media/75414/download
  5. PharmaLedger Association: https://pharmaledger.org/
  6. FDA draft guidance, Computer Software Assurance for Production and Quality System Software: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/computer-software-assurance-production-and-quality-system-software
  7. ISPE GAMP 5 Guide, Second Edition, A Risk-Based Approach to Compliant GxP Computerized Systems: https://ispe.org/publications/guidance-documents/gamp-5-guide-2nd-edition

About The Author:

Charlie Wakeham offers consultancy services and training in computerized systems quality and data integrity through her company WakeUp to Quality. She is the current chair of the GAMP Global Steering Committee, leading the 5,000-plus members of the GAMP community of practice and serving on numerous other GAMP and ISPE committees. Her career over 25 years has focused on GxP computerized systems and quality. She received the ISPE Max Seales Yonker Member of the Year Award in 2019 for her GAMP volunteer work and training of regulatory agencies. View her LinkedIn profile.