Overview
The Living Heart Project is a collaborative initiative between Stanford University, Dassault Systèmes, and UberCloud to create a biophysically detailed, functional model of the human heart using finite-element simulation. The goal: a validated digital twin of the heart that could accelerate cardiovascular research and enable personalized treatment planning.
HPCFLOW (via UberCloud Experiment) provided the HPC cloud infrastructure that made the large-scale simulations possible.
The Challenge
Finite-element models of the human heart are computationally intensive — the Living Heart model contains over 600,000 elements representing cardiac tissue, valves, blood flow, and electrophysiology. Running these simulations at meaningful resolution required:
- Bare metal compute: Virtualization overhead was unacceptable for tightly coupled MPI workloads
- High-bandwidth, low-latency interconnect: InfiniBand for MPI communication between nodes
- Large parallel jobs: Multi-node runs that required reliable fabric isolation
- On-demand access: Research teams needed to iterate quickly without managing infrastructure
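The interconnect requirement above follows from a surface-to-volume argument: in a 3D domain decomposition, each MPI rank's halo (the boundary data exchanged with neighbors every solver iteration) shrinks more slowly than its interior workload, so communication overhead grows with rank count. The sketch below illustrates this with a simple cubic-decomposition model; the numbers are illustrative only, not project measurements.

```python
def halo_fraction(n_elements: int, n_ranks: int) -> float:
    """Rough ratio of halo (communicated) elements to interior (computed)
    elements for an idealized cubic 3D domain decomposition.

    Each rank holds a sub-cube of n_elements / n_ranks elements with side
    s = (n/p)**(1/3); its halo is the one-element-deep layer on 6 faces.
    """
    per_rank = n_elements / n_ranks
    side = per_rank ** (1.0 / 3.0)
    surface = 6 * side ** 2          # one-element-deep halo on 6 faces
    return surface / per_rank        # simplifies to 6 / side

# Illustrative: splitting a ~600,000-element mesh (the figure cited for
# the Living Heart model) across more ranks raises the share of each
# iteration spent exchanging halo data, which is why low-latency
# InfiniBand matters at scale.
for ranks in (8, 64, 512):
    print(f"{ranks} ranks: halo fraction ~ {halo_fraction(600_000, ranks):.3f}")
```

The fraction roughly doubles for each 8x increase in rank count, which is the regime where fabric latency, not raw compute, limits strong scaling.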
What HPCFLOW Provided
- Bare metal HPC nodes with InfiniBand interconnect via the HPCFLOW platform
- Slurm-based workload management for job scheduling and resource allocation
- Multi-tenant network isolation ensuring research workloads ran in dedicated fabric partitions
- HPC expertise and hardware utilization guidance to optimize cluster configurations for the simulation workloads
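On a cluster like this, a multi-node Abaqus run would typically be submitted through Slurm. The batch script below is a minimal sketch: the partition name, node counts, module name, and job/input file names are all hypothetical, not taken from the project.

```shell
#!/bin/bash
# Illustrative Slurm batch script for a multi-node Abaqus/FEA run over MPI.
# Partition, node counts, module, and file names are hypothetical --
# adapt to the actual cluster and license configuration.
#SBATCH --job-name=heart-model
#SBATCH --partition=baremetal      # hypothetical bare metal partition
#SBATCH --nodes=4                  # multi-node MPI run over InfiniBand
#SBATCH --ntasks-per-node=32
#SBATCH --exclusive                # dedicated nodes for predictable performance
#SBATCH --time=24:00:00

module load abaqus                 # hypothetical environment module

# Abaqus distributes its solver across the allocation via MPI;
# cpus = total MPI ranks over all nodes.
abaqus job=heart_model input=heart_model.inp \
       cpus="$SLURM_NTASKS" mp_mode=mpi interactive
```

The `--exclusive` flag and a bare metal partition reflect the requirements listed under The Challenge: no virtualization or node sharing, and a dedicated fabric partition for the MPI traffic.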
Recognition
The Stanford/UberCloud Living Heart collaboration received awards from three organizations at SC17 (Supercomputing 2017):
- Intel HPC Innovation Excellence Award
- HPCwire Editors' Choice Award
- Hyperion Research Innovation Excellence Award
Related Work
During the same period, HPCFLOW infrastructure supported UberCloud Experiment #200 — Human Brain Project: HPC infrastructure for personalized non-invasive clinical treatment simulations of schizophrenia and Parkinson's disease, published in HPCwire (October 2018).
Both projects demonstrated HPCFLOW's capability to support rigorous academic research requiring reproducible, high-performance compute at scale — not just commercial workloads.
Technologies
- HPCFLOW bare metal provisioning platform
- InfiniBand high-performance interconnect
- Slurm workload manager
- MPI (Message Passing Interface) for distributed-memory parallel computing
- Finite-element simulation workloads (Abaqus/FEA)