Inside LillyPod: Pharma’s Most Powerful AI Supercomputer
Eli Lilly just flipped the switch on a machine that can simulate billions of molecular hypotheses before a single test tube gets touched. LillyPod – inaugurated at a ribbon-cutting ceremony in Indianapolis in February 2026 – is now the most powerful AI supercomputer wholly owned and operated by a pharmaceutical company. Built on NVIDIA’s DGX SuperPOD architecture with 1,016 Blackwell Ultra GPUs, it delivers more than 9,000 petaflops of AI performance and was assembled in roughly four months.
What makes this more than a headline-grabbing hardware flex is the intent behind it. LillyPod isn’t a general-purpose research cluster. It’s a purpose-built AI factory designed to train foundation models for proteins, small molecules, and genomics – and to push AI into clinical trial automation, manufacturing optimization, and beyond. Lilly’s leadership is calling it a milestone in the company’s 150-year history, one that merges its deep proprietary data reserves with computational muscle that was simply unimaginable a generation ago.
But the executives steering this effort are also doing something unusual in the AI hype cycle: they’re setting boundaries on expectations. No one at Lilly is claiming this machine will compress a decade of drug development into three months. The realistic target is cutting the typical 10-year timeline to five years – ambitious, but grounded.
The Hardware Behind LillyPod
LillyPod is the world’s first NVIDIA DGX SuperPOD built with DGX B300 systems. At its core sit 1,016 NVIDIA Blackwell Ultra GPUs, each of which is approximately 7 million times more powerful than the Cray-2 supercomputer Lilly purchased back in 1989. Combined, the full system is 7 billion times more powerful than that historical benchmark.
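The two scaling claims are consistent with each other: at roughly 7 million times the Cray-2’s throughput per GPU, 1,016 GPUs multiply out to about 7 billion. A quick sanity check (using the article’s own round figures):

```python
# Sanity check on the scaling claims: 1,016 GPUs, each ~7 million times
# the Cray-2's throughput, should combine to roughly 7 billion times.
gpus = 1016
per_gpu_factor = 7_000_000           # ~7 million x Cray-2 per Blackwell Ultra GPU
combined = gpus * per_gpu_factor     # 7,112,000,000 -> "7 billion times more powerful"
print(f"{combined:,}")               # 7,112,000,000
```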
The numbers are staggering across the board:
| Specification | Detail |
|---|---|
| GPUs | 1,016 NVIDIA Blackwell Ultra (B300) |
| AI Performance | Over 9,000 petaflops |
| GPU Memory | Over 290 terabytes (high-bandwidth) |
| Genomics Data Access | Approximately 700 terabytes |
| Internal Connections | Nearly 5,000 |
| Fiber Cabling | Over 1,000 pounds |
| Physical Footprint | 3,750 sq ft within a 30,000 sq ft facility |
| Assembly Time | Approximately 4 months (announced Oct 2025) |
The networking layer uses NVIDIA Spectrum-X Ethernet, and the entire software stack includes NVIDIA Mission Control for workload orchestration, performance monitoring, and secure automation of AI operations. The system is liquid-cooled, and Lilly has committed to running it on 100% renewable electricity by 2030 as part of the company’s broader carbon-neutrality goals.
Why Lilly Built Its Own AI Factory
There’s a strategic logic to owning the hardware outright rather than renting cloud compute. Lilly sits on decades of proprietary research data – including, critically, data from experiments that failed. This is a distinction that Chief AI Officer Thomas Fuchs has emphasized repeatedly: AI models trained solely on published scientific literature only learn from successes, because failures rarely make it into journals. LillyPod gives Lilly the infrastructure to train models on its full research history, creating what Fuchs describes as a true “co-scientist” AI.
That proprietary data asset is valued at over $1 billion, and it forms the backbone of both internal model training and Lilly’s external-facing TuneLab platform. Owning the compute also means Lilly controls security, compliance, and scheduling for the highly regulated workflows that pharmaceutical AI demands – no third-party cloud provider sitting between the company and its most sensitive data.
Drug Discovery at Computational Scale
Traditionally, even the most productive drug discovery teams can analyze roughly 2,000 molecular ideas per target per year. Each hypothesis requires physical synthesis and testing in a wet lab, creating a hard ceiling on throughput. LillyPod obliterates that ceiling.
The system functions as a massive computational “dry lab” where scientists simulate and evaluate billions of molecular hypotheses in parallel before committing to physical experiments. As Yue Wang Webster, vice president of research and development informatics at Lilly, put it: the supercomputer “breaks the physical limit of the wet lab” so that “billions of molecule ideas” can be tested “at your fingertips.”
Specifically, LillyPod supports the large-scale training of three categories of foundation models:
- Protein diffusion models – for understanding and generating protein structures
- Small-molecule graph neural network models – for exploring chemical possibilities and optimizing lead compounds
- Genomics foundation models – for analyzing cellular and molecular biology at unprecedented scale using 700 terabytes of genomics data
These models don’t replace the wet lab. They dramatically narrow the funnel of candidates that need physical validation, accelerating the earliest and most speculative phases of discovery.
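To make the second category concrete, here is a minimal, stdlib-only sketch of the message-passing idea behind small-molecule graph neural networks. This is a hypothetical toy example (Lilly’s actual models are not public): atoms are nodes, bonds are edges, and each round every atom aggregates its neighbours’ feature vectors before a graph-level readout feeds a property predictor.

```python
# Toy molecular graph (ethanol-like skeleton): node features are
# illustrative [is_carbon, is_oxygen, scaled_H_count] vectors.
atoms = {
    0: [1.0, 0.0, 0.3],   # CH3 carbon
    1: [1.0, 0.0, 0.2],   # CH2 carbon
    2: [0.0, 1.0, 0.1],   # OH oxygen
}
bonds = [(0, 1), (1, 2)]  # undirected edges

def message_pass(features, edges):
    """One aggregation round: new feature = own feature + mean of neighbours'."""
    neighbours = {n: [] for n in features}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    updated = {}
    for node, feat in features.items():
        msgs = [features[m] for m in neighbours[node]]
        agg = ([sum(v) / len(msgs) for v in zip(*msgs)]
               if msgs else [0.0] * len(feat))
        updated[node] = [f + a for f, a in zip(feat, agg)]
    return updated

h1 = message_pass(atoms, bonds)
# Graph-level "readout" (sum over atoms) -- this vector would feed
# a downstream property or activity predictor.
readout = [sum(vals) for vals in zip(*h1.values())]
```

Real models stack many such rounds with learned weights; the point is only that molecular structure enters the model through neighbour aggregation rather than a flat feature list.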
Beyond Discovery: Clinical Trials and Manufacturing
LillyPod’s ambitions extend well past the research bench. Lilly is deploying AI across clinical development and manufacturing operations, targeting specific bottlenecks where automation can shave years off timelines.
In clinical trials, tasks like patient enrollment – historically a slow, manual process that can delay studies by months – are being automated. On the manufacturing side, the company is building digital twins to test and fine-tune supply chains, deploying robotics for production tasks, running real-time quality monitoring, and using AI-driven forecasting to balance supply and demand.
Diogo Rau, Lilly’s executive vice president and chief information and digital officer, has been explicit about the expected impact. Automation across trials and manufacturing could realistically halve the typical drug development timeline from 10 years to five. But he’s equally explicit about the limits: “There’s a tendency to think that we’re now going to be able to discover new medicines in three months,” Rau said. “That’s one that’s particularly damaging.”
The point is worth underscoring. LillyPod accelerates specific steps in the pipeline. It doesn’t magically compress the biology itself. Molecular interactions still need to be validated. Clinical safety still needs to be proven in humans over time. The gains are real but incremental – and that’s exactly how Lilly’s leadership wants them framed.
TuneLab: Opening the Door to External Partners
Not all of LillyPod’s output stays behind Lilly’s walls. Select models trained on the supercomputer are made available through TuneLab, a federated AI and machine learning platform designed for early-stage drug discovery. TuneLab is built on over $1 billion worth of proprietary Lilly research data, making it one of the most data-rich drug discovery platforms available to external biotech companies.
The platform uses NVIDIA FLARE, a federated learning framework that allows participating companies to train models collaboratively while keeping their underlying data private and isolated from other users. As more companies participate, the shared models improve – creating a network effect that benefits the broader biopharma ecosystem without requiring anyone to hand over raw data.
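The core idea can be sketched with federated averaging, the canonical federated-learning algorithm: each partner takes local training steps on private data and shares only model weights, which a server averages. This is an illustrative sketch of the general technique, not NVIDIA FLARE’s actual API.

```python
# Federated averaging on a 1-D linear model y = w*x.
# Each "site" holds private (x, y) pairs that never leave the site;
# only the updated weight w is shared with the coordinating server.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on squared error over a private dataset."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, partner_datasets):
    """Each partner trains locally; the server averages the weights."""
    local_ws = [local_update(global_w, d) for d in partner_datasets]
    return sum(local_ws) / len(local_ws)

# Two partners whose private data follow the same underlying y = 2x trend
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [site_a, site_b])
# w converges toward 2.0 without either site ever seeing the other's data
```

The network effect described above falls out of the same mechanics: every additional participant contributes gradient signal to the shared model while its raw data stays isolated.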
TuneLab also plans to incorporate NVIDIA BioNeMo open foundation models for healthcare and life sciences, making it the first drug discovery platform to offer both proprietary Lilly models and open-source NVIDIA models in one place. Earlier in 2026, Lilly extended TuneLab access to external partners through collaborations with Benchling and Revvity.
The Billion-Dollar Co-Innovation Lab
LillyPod is just one piece of a larger strategic puzzle. Following the supercomputer’s initial announcement in October 2025, Lilly and NVIDIA expanded their collaboration at the January 2026 J.P. Morgan Healthcare Conference with a $1 billion commitment for a new AI co-innovation lab in South San Francisco. The lab is designed to link wet-lab experimentation with large-scale computational modeling, generating high-quality data to train next-generation biology and chemistry foundation models.
Fuchs described the partnership as “a beautiful combination of very orthogonal capabilities and interests,” noting that “NVIDIA is not going to be a medicines company and Lilly will not start producing our own GPUs.” The complementary nature of the relationship – Lilly’s scientific data and domain expertise paired with NVIDIA’s hardware and AI model-building infrastructure – is what makes the collaboration more than a standard vendor arrangement.
Lilly also operates Gateway Labs incubators in San Francisco, Boston, San Diego, Philadelphia, Beijing, and Shanghai, and maintains research collaborations with Indiana University, Purdue University, MIT, and Caltech.
How LillyPod Compares to Alternative Approaches
LillyPod’s wholly owned model stands in contrast to several other approaches gaining traction across the pharmaceutical industry:
| Approach | Key Features | Strengths | Limitations |
|---|---|---|---|
| LillyPod (Wholly Owned) | 1,016 Blackwell GPUs, 9,000+ petaflops, TuneLab access | Full control, $1B+ proprietary data, rapid 4-month build | High upfront investment, pharma-specific scale |
| Public Literature Training | AI models trained on published papers | Broad access, low cost | Misses failed experiments; incomplete picture of biology |
| Cloud/Partnered Compute | Shared infrastructure, open models like BioNeMo | Collaborative, flexible, no data sharing required | Less customization, lacks deep proprietary datasets |
| Legacy Supercomputing | Single-unit, general-purpose systems | Pioneering technology for its era | Roughly 1/7,000,000th the performance of a single modern GPU |
The critical differentiator for LillyPod isn’t raw compute alone – it’s the marriage of that compute with Lilly’s internal data, including decades of experimental failures that never appear in published literature. This is the dataset that competitors training on public sources simply cannot replicate.
Internal AI Tools and Workforce Impact
Beyond headline applications in drug discovery and clinical trials, LillyPod powers a growing ecosystem of internal AI tools. Lilly employees can use the platform to create chatbots, agentic research workflows, and research lab agents without standing up infrastructure from scratch. These tools are designed to embed AI into daily R&D workflows rather than treating it as a separate, specialized function.
The agentic workflow capability is particularly notable. Rather than requiring scientists to manually query databases or run analyses, AI agents can autonomously execute multi-step research tasks – pulling data, running simulations, and surfacing insights with minimal human intervention. It’s the kind of capability that transforms a supercomputer from a batch-processing machine into an active research collaborator.
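The shape of such a workflow can be sketched in a few lines. Everything here is hypothetical (the tool names, the fixed three-step plan, the target `"GLP-1R"`): the point is only that an agent threads results from one tool call into the next without a human driving each step.

```python
# Minimal sketch of an agentic research loop: fetch data, run a
# simulation, surface an insight. Each function is a stand-in for a
# real tool the agent would call; none of this reflects Lilly's
# actual internal tooling.

def fetch_assay_data(target):
    """Stand-in for a database query tool returning prior assay hits."""
    return {"target": target, "hits": [0.82, 0.41, 0.93]}

def run_simulation(hits):
    """Stand-in for a docking/property simulation over candidate hits."""
    return [h * 0.9 for h in hits]

def summarize(scores):
    """Stand-in for the insight-surfacing step."""
    return f"best simulated score: {max(scores):.2f}"

TOOLS = {"fetch": fetch_assay_data, "simulate": run_simulation, "report": summarize}

def run_agent(target):
    """Execute a fixed three-step plan, carrying each result forward."""
    record = TOOLS["fetch"](target)
    scores = TOOLS["simulate"](record["hits"])
    return TOOLS["report"](scores)

print(run_agent("GLP-1R"))
```

Production agents replace the fixed plan with a model that chooses which tool to call next, but the data flow – tool calls chained autonomously, results accumulated into an answer – is the same.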
What LillyPod Means for the Future of Pharma
LillyPod represents a broader industry shift toward vertical integration of AI infrastructure within large pharmaceutical companies. It’s not happening in isolation. In January 2026, Thermo Fisher announced plans to incorporate NVIDIA’s BioNeMo platform into its laboratory technologies. GSK and Noetik launched a licensing structure for virtual cell foundation models. Pfizer and Boltz began refining foundation models on Pfizer’s historical data with exclusive ownership of outputs. Schrödinger added Lilly’s own AI models into its LiveDesign platform.
The trend is clear: major pharma companies are moving from experimenting with AI to building permanent, large-scale AI infrastructure as core operational assets. LillyPod is the most visible example to date – the largest, most powerful, and most deliberately integrated into a single company’s end-to-end pharmaceutical workflow.
Whether it delivers on the promise of halving drug development timelines remains to be seen. But the infrastructure is live, the models are training, and the realistic expectations set by Lilly’s leadership suggest a company that understands both the power and the limits of what it has built. As Tim Coleman, Lilly’s CTO, put it: “We believe that computation is foundational to science and that Lilly patients deserve every advantage that we can give them.”