Guest Column | November 18, 2025

AI Is Not The CMC Revolution You're Looking For, And That's OK

By Arnaud Deladeriere, Ph.D., Cell&Gene Consulting Inc.


The current cell and gene therapy landscape is very different from the exuberant years of 2020 and 2021. Back then, companies could routinely raise hundreds of millions on preclinical data. The contraction that followed has introduced discipline and maturity.

About the Cell&Gene Foundry

These ideas are shared in collaboration with the Cell&Gene Foundry, an industry group assembled to discuss important topics in cell and gene therapy development, led by Arnaud Deladeriere. This conversation included insights from: Irene Rombel, cofounder and CEO of BioCurie, and Ohad Karnieli, founder and CEO of Adva Biotechnology.

To learn more about the Foundry, visit www.cellgeneconsulting.com.

The developers that remain are learning to operate with fewer resources but sharper intent. That scarcity has become a forcing function for efficiency: teams must prioritize, make every experiment count, and ensure that each dollar invested translates into knowledge, candidate advancement, and commercial viability of the resulting product.

This shift has also reframed what “innovation” means. It no longer refers to so-called groundbreaking discovery programs but to how intelligently data and processes are being managed. AI, viewed through this pragmatic lens, becomes less a revolution and more an efficiency and value-creation engine (less sexy, sorry!). It’s a tool that helps organizations make fewer wrong turns, detect inefficiencies early, and learn faster from their own operations. The industry’s earlier abundance of capital encouraged excess (oversized facilities, redundant processes, vast data lakes with little insight). That “race to the finish at any cost” era is over. The leaders shaping CGT2.0 are defined less by how much they raise than by how well they use what they have. Strategic frugality is becoming a competitive advantage.

AI plays directly into this transition. By capturing every datapoint once and learning from it forever, intelligent systems create long-term leverage. They reduce the marginal cost of learning, turning product development into a compounding asset rather than a repeated expense. The companies that thrive will not necessarily be those that scale fastest but those that scale intelligently: grounded in data, guided by expertise, and disciplined in execution.

In its third edition, the Cell&Gene Foundry focused on a narrower, more consequential question: how can AI practically support product development and CMC in cell and gene therapy over the next five years?

From Product Understanding To Process Optimization

Where AI’s influence is most profound is not in replacing scientists but in helping them converge more quickly on the right process and product.

By combining mechanistic understanding (rooted in biology, chemistry, and physics) with computational models that can work with large experimental datasets, developers can explore the process design space with far greater efficiency and depth.

“We didn’t start with a manual process and add AI on top of it. We built the process around data — around sensors, feedback loops, and control. Once you do that, learning becomes part of manufacturing.” – Ohad Karnieli

This capability brings new discipline to defining QTPPs, CQAs, CPPs, and CMAs, linking them more directly to product function and clinical evidence. It allows developers to transition from empirical iteration to evidence-driven optimization, arriving at optimal processes sooner and with fewer costly iterations and changes.

This approach also supports a broader transition in mindset. Instead of adding capacity to manage uncertainty, developers can use data to remove uncertainty itself, making each bioreactor, each batch, and each dollar more productive.

The Central Role Of Biological Expertise

Despite its computational power, AI is only as strong as the expert guiding it. The discussion underscored an emerging truth: data science without biology is as problematic as biology without data. Let’s keep the subject matter expert in the loop!

In practice, the most effective teams are multidisciplinary: biologists who understand what questions matter, paired with engineers and data scientists who know how to structure and synthesize data to answer them. Rather than chasing volume, the focus is shifting toward smarter data generation. Elegant, well-controlled experiments with rich metadata are more valuable than hundreds of uncontextualized runs. The quality of data (not its quantity) determines how well AI models learn and how reliably they predict.

“If you let data scientists build models in isolation, they’ll get beautiful correlations that make no biological sense. You need biologists in the room to define what’s plausible.” – Irene Rombel

Equally important is learning from what fails. In traditional research, failed experiments are often discarded; in an AI-driven framework, they become gold. Negative data help define boundaries, clarify causal factors, and prevent overfitting. By integrating both successful and unsuccessful runs, models grow more robust and biologically grounded.
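To make the point concrete, here is a minimal, purely illustrative sketch of why failed runs matter to a model. The feature names, thresholds, and data are all hypothetical, and the toy logistic regression stands in for whatever model a team actually uses; the takeaway is simply that without the failed runs, there is no boundary to learn.

```python
import numpy as np

# Hypothetical per-run features: [glucose depletion rate, pH drift].
# Labels: 1 = batch met spec, 0 = failed run. Including the failed runs
# is what lets the model learn where the acceptable operating region ends.
X = np.array([
    [0.8, 0.02], [0.9, 0.03], [1.0, 0.01], [0.7, 0.04],   # successful runs
    [2.1, 0.30], [1.9, 0.25], [2.4, 0.35], [2.0, 0.28],   # failed runs
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression (illustrative only)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, x):
    """Probability that a new run with features x falls in the 'pass' region."""
    return 1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))

w = fit_logistic(X, y)
print(predict(w, [0.85, 0.02]) > 0.5)  # inside the learned design space
print(predict(w, [2.20, 0.30]) > 0.5)  # outside it
```

Trained only on successes, the same model would have no information about where the process breaks; the negative runs are what anchor the boundary.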

This is the new discipline of AI-assisted development: thoughtful experimental design, high-fidelity data capture, and interpretation anchored in scientific understanding.

Building On A Foundation Of Data Integrity

No AI system can function reliably without trustworthy data. The conversation highlighted that data integrity is inseparable from product quality.

Integrity begins in the pre-GMP environment, where model training and simulation occur. Here, organizations must establish rigorous internal governance for data acquisition, curation, and traceability (built upon a robust quality management system). As those models transition toward regulated use, validation must extend to the analytical pathways themselves: demonstrating not only that data are accurate but also that algorithms analyze them correctly and predictably.

This “analytical integrity” is a regulatory expectation. The FDA’s January 2025 draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, explicitly ties algorithm validation to the same principles as process validation: transparency, traceability, and control. Models must be explainable; black boxes are frowned upon.

Another recurring issue when implementing AI in a CMC strategy is the persistence of operator-dependent steps (think manual thawing, pipetting, or counting) that can introduce 10%–20% variance between runs. In a field where such deviations can render a batch unusable, replacing these fragile operations with closed, automated, sensor-based systems is both a compliance and a business necessity.

Ultimately, maintaining data integrity is about building and sustaining confidence. The ability to trust what the data says, without hidden bias, drift, or error, is what allows organizations to act on AI insights with regulatory and scientific credibility.

AI As An Engine For Decision-Making

In most CGT processes today, countless critical decisions (when to harvest, how to feed, whether to continue a run) still depend on human judgment. This dependency introduces inconsistency and cost.

AI offers a route toward greater process reliability by transforming subjective decisions into data-driven ones. Modern culture systems already generate continuous streams of metabolic and process data: oxygen consumption, glucose depletion, pH drift, and temperature variation, among many others. Interpreting these multidimensional signals in real time exceeds what any human operator can do manually. Machine learning models, by contrast, can correlate such data with cell growth dynamics or early signs of exhaustion, detecting subtle deviations long before they appear in a flow cytometry readout.

“Innovation in CMC has always been about control — controlling variability, risk, and cost. AI simply gives us better instruments to do that, provided we keep the guardrails in place.” – Arnaud Deladeriere

The result is faster, more confident decision-making. By shortening feedback loops and enabling predictive adjustments, AI directly compresses process development timelines, an impact that translates directly into financial performance. Speed improves net present value; predictability reduces batch failures. When each run becomes a true learning cycle, efficiency compounds.

The same logic applies at scale. For CDMOs, where scheduling and resource utilization define profitability, data-driven predictions can refine batch sequencing, just-in-time supply, and material ordering. AI in this sense is an unsung enabler of operational precision.

Conclusion

AI in cell and gene therapy may one day help reinvent science, but its first and most immediate contribution is to stabilize and accelerate it. The technology’s true impact lies in the tangible, cumulative improvements it brings to how decisions are made: what data are trusted, how experiments are designed, and when action is taken.

As implementation becomes more widespread, however, guardrails are essential. AI in CMC cannot operate in a vacuum; it must evolve within a framework of scientific, regulatory, and ethical discipline. The very strength of these systems (their ability to learn and adapt) can also amplify errors or biases if not properly governed. Establishing boundaries for acceptable use is therefore as important as developing the algorithms themselves.

The first guardrail is regulatory transparency. Models must be traceable, with clear documentation of how data were collected, curated, and transformed into decisions. This ensures that AI remains auditable and compatible with the principles of GMP and data integrity, an expectation already echoed by regulators in both the U.S. and Europe.

The second is human oversight. Even as automation and intelligence converge, expert supervision must remain embedded in the process. AI can guide decisions but should never be fully autonomous in determining product quality or patient-critical outcomes. Keeping the scientist in the loop preserves accountability and scientific sanity checks.

Third, organizations must commit to continuous validation. Models trained on one data set will inevitably drift as inputs or process conditions change. Treating AI as a living system, one that requires periodic requalification and recalibration, prevents gradual erosion of reliability over time.
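A drift check of the kind described above can be very simple in principle. The sketch below assumes a deployed model whose per-batch prediction error was characterized at qualification; the baseline error, trigger ratio, and window size are all hypothetical placeholders that a real quality system would set and justify.

```python
from statistics import mean

# Hypothetical qualification baseline: mean absolute error accepted at validation,
# and the inflation ratio that triggers requalification.
QUALIFICATION_MAE = 0.8
REQUALIFY_RATIO = 1.5

def needs_requalification(recent_errors, window=10):
    """True when the rolling error over the last `window` batches has drifted
    beyond the agreed multiple of the qualification baseline."""
    recent = recent_errors[-window:]
    return mean(recent) > REQUALIFY_RATIO * QUALIFICATION_MAE

# Per-batch prediction errors (invented numbers, arbitrary units):
stable_run = [0.7, 0.9, 0.8, 0.75, 0.85, 0.8, 0.9, 0.7, 0.8, 0.85]
drifting_run = stable_run + [1.4, 1.6, 1.5, 1.7, 1.8, 1.6, 1.9, 1.7, 1.5, 1.8]

print(needs_requalification(stable_run))    # stable process: no action
print(needs_requalification(drifting_run))  # inflated error: requalify the model
```

The mechanism matters more than the numbers: without an explicit trigger like this, drift accumulates silently until the model's advice is quietly wrong.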

Finally, the industry must agree on data ethics and sharing standards. AI systems thrive on diverse data sets, but privacy, ownership, and confidentiality must be respected. Secure, federated approaches can allow learning across sites and companies without compromising proprietary or patient information.
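The federated idea can be sketched in a few lines. In the toy below, each site fits a model on its own runs and shares only the fitted coefficients and a sample count; the aggregator averages them, weighted by data volume, without ever seeing a raw batch record. The linear model, site sizes, and "true" relationship are all invented for illustration, and production federated learning involves far more (secure aggregation, iteration, privacy accounting).

```python
import numpy as np

# Shared underlying relationship the sites are collectively trying to learn
# (hypothetical: two process parameters driving one quality attribute).
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_fit(n_runs):
    """One site's least-squares fit on data that never leaves the site."""
    X = rng.normal(size=(n_runs, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_runs)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_runs

# Three sites with different amounts of local data.
site_updates = [local_fit(n) for n in (30, 50, 80)]

# The aggregator sees only coefficients and sample counts, never raw records.
total = sum(n for _, n in site_updates)
global_w = sum(w * (n / total) for w, n in site_updates)
print(global_w)  # close to the shared underlying relationship
```

The design choice is the point: what crosses the site boundary is a small parameter vector, not proprietary process data or patient-linked records.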

The Foundry’s discussion points to a future where AI is not a separate domain but a shared discipline: a language that unites process engineers, biologists, and data scientists around common goals — reproducibility, efficiency, and patient access.

The CGT industry does not need more promises of revolution. What it needs, now more than ever, is clarity, consistency, and confidence in the manufacturing of highly complex medicines, as well as robust guardrails to ensure that as we accelerate, we do not lose control of the steering wheel. That, AI can deliver today.

Key Takeaways

1. AI thrives where variability lives.

The highest return on AI comes from the messiest parts of the process — steps with biological noise, operator dependency, or hidden correlations. Using AI where human intuition struggles will bring clarity to the most unpredictable stages of manufacturing.

2. Data integrity is now a CMC asset.

Reliable, well-structured data are as critical as clean reagents. Building validated, traceable data sets enables trustworthy models and accelerates regulatory acceptance of AI-driven decisions.

3. Biology must shape the algorithm.

Models built in isolation from biological understanding risk false precision. Integrating subject matter expertise ensures AI outputs are biologically coherent and clinically relevant.

4. Automation is evolving toward intelligence.

When sensors, feedback loops, and analytics converge, manufacturing systems begin to learn. These adaptive platforms reduce operator dependency and turn every batch into a source of continuous improvement.

5. Efficiency is becoming system-level intelligence.

The next competitive advantage will come from connecting processes, data, and decisions into a learning ecosystem. AI turns efficiency from a cost-cutting measure into a continuous optimization loop across the entire product life cycle.

About The Author:

Arnaud Deladeriere, Ph.D., is principal consultant at Cell&Gene Consulting Inc. Previously, he was head of MSAT and Manufacturing at Triumvira Immunologics, and before that, manufacturing manager at C3i. He received his Ph.D. in biochemistry from the University of Cambridge.