Wider impact of this work
The NGI_IT infrastructure has been successfully used to run parallel CFD simulations of large arteries efficiently, but we are only at the beginning of our research roadmap towards patient-specific pilot clinical studies. The performance obtained in this first phase is also quite encouraging: compared to a single-CPU run, we have measured speed-ups of 3.0 and 3.9 for large CFD simulations on 4 and 8 nodes, respectively. Future improvements of our numerical simulations will include setting up fluid-structure interaction (FSI) for blood vessels, a new challenge for cardiovascular biomechanics research.
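As a sanity check on these figures, the corresponding parallel efficiency E = S/N follows directly from the reported speed-ups (a minimal sketch; it assumes one CPU per node, which the text does not state):

```python
# Parallel efficiency E = S / N from the measured speed-ups.
# Speed-up values (3.0 on 4 nodes, 3.9 on 8 nodes) are taken from the text;
# the one-CPU-per-node mapping is an assumption for illustration.
def efficiency(speedup, n_cpus):
    """Fraction of ideal linear scaling achieved on n_cpus processors."""
    return speedup / n_cpus

runs = {4: 3.0, 8: 3.9}                     # nodes -> measured speed-up
eff = {n: efficiency(s, n) for n, s in runs.items()}
# eff[4] -> 0.75, eff[8] -> 0.4875: efficiency drops as the fixed-size
# problem is spread over more nodes, as expected for strong scaling.
```

The falling efficiency at 8 nodes is consistent with growing communication overhead relative to per-node work for a fixed mesh size.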
Moreover, the extensive range of partial differential equation (PDE) solvers that OpenFOAM offers, spanning chemical reactions, turbulence, heat transfer, solid dynamics and electromagnetism, may attract other researchers to use OpenFOAM on the grid.
Link for further information
Haemodynamic forces play a fundamental role in regulating the vascular structure. When they fall outside the physiological range, namely in “disturbed flow” conditions, these factors are implicated in the aetiology of vascular wall disease. This evidence has led to the need for in vivo characterization of haemodynamics at the patient-specific level. However, the introduction of computational fluid dynamics (CFD) into clinical research is limited by the scarce availability of systems able to run transient simulations for large domains and for many subjects. Towards this end, we are setting up the OpenFOAM framework for large CFD cases to be run in parallel on the IGI grid infrastructure. OpenFOAM has efficient algorithms to solve the Navier-Stokes equations for incompressible and non-Newtonian fluids such as blood. Furthermore, it has the adaptive mesh refinement and parallelization features required to perform large patient-specific CFD simulations aimed at characterizing flow patterns in large arteries.
Description of the work
The NGI_IT computing infrastructure has been successfully configured to support researchers from the “Mario Negri” Institute in securely accessing grid resources and using OpenFOAM (v2.0.1, with Open MPI v1.5.3), the open-source CFD toolbox. As a preliminary step, the Italian infrastructure was validated with parallel jobs on very large cylindrical meshes (1,824,200 cells) simulating a transient case of 100 time steps. From a technical point of view, a parallel CFD run is performed in three steps. First, the master node prepares the case with OpenFOAM’s decomposePar domain-decomposition utility; this creates a processorN directory for each CPU, containing the decomposed mesh and fields together with the solution controls, model choice and discretisation parameters. The grid infrastructure then distributes the input data to each slave node. The CFD case is solved with the icoFoam solver on a per-processor basis, each CPU node using its local disk space to store the results in time directories. At the end of the run, the results are collected from the slave nodes to the master node, and the resulting tarball is saved on a grid Storage Element using user-defined post hooks. Finally, numerical and graphical post-processing is done locally, after the reconstructPar utility of OpenFOAM has been used to reassemble the decomposed case onto a single CPU.
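The decomposition step described above is driven by the case’s system/decomposeParDict dictionary. A minimal sketch for an 8-way run might look as follows (the choice of the scotch method is illustrative; the text does not specify which decomposition method was used):

```
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}

// One subdomain per CPU: decomposePar creates processor0 .. processor7
numberOfSubdomains  8;

// scotch needs no geometric input; simple/hierarchical are alternatives
method              scotch;
```

After decomposePar has run, each processorN directory holds its share of the mesh and fields, and reconstructPar merges the per-processor time directories back into a single-CPU case.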
In the present work, patient-specific, image-based transient CFD simulations of the carotid bifurcation, with physiological flow waveforms as boundary conditions, have been run in parallel using the icoFoam solver. The CFD results were then post-processed to calculate haemodynamic wall parameters that predict disturbed-flow, athero-prone sites on the vascular wall, such as the time-averaged wall shear stress (TAWSS), the oscillatory shear index (OSI) and the relative residence time (RRT). Disturbed flow is localized by the wall surface area exposed to low and oscillating wall shear stress, i.e. having RRT > 1.
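The three wall parameters follow from the standard definitions: TAWSS is the time average of the wall shear stress (WSS) magnitude, OSI = 0.5·(1 − |∫τ dt| / ∫|τ| dt), and RRT = 1 / [(1 − 2·OSI)·TAWSS]. A minimal sketch of this post-processing at a single wall point is shown below; the function name and the two-sample WSS series are hypothetical, and a rectangle-rule time integral is assumed for simplicity:

```python
import math

def haemodynamic_indices(wss_series, dt):
    """TAWSS, OSI and RRT at one wall point from WSS vectors sampled
    at a fixed time step dt over one cardiac cycle (rectangle rule)."""
    T = dt * len(wss_series)
    # Time integral of the WSS vector (component-wise)...
    int_vec = [dt * sum(v[i] for v in wss_series) for i in range(3)]
    # ...and of its magnitude.
    int_mag = dt * sum(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
                       for v in wss_series)
    tawss = int_mag / T
    mag_int_vec = math.sqrt(sum(c * c for c in int_vec))
    osi = 0.5 * (1.0 - mag_int_vec / int_mag)      # 0 (unidirectional) .. 0.5
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)        # = T / |integral of tau|
    return tawss, osi, rrt

# Hypothetical two-sample series with flow reversal (values in Pa):
series = [(1.0, 0.0, 0.0), (-0.5, 0.0, 0.0)]
tawss, osi, rrt = haemodynamic_indices(series, dt=0.5)
```

Note that RRT grows both when TAWSS is low and when OSI approaches 0.5, which is why a single RRT threshold (here RRT > 1) can flag regions of low and oscillating shear at once.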