Turbocharge Your CFD Optimization: Leveraging Cloud HPC for Turbomachinery Design

Published by Ruggero Poletto

Computational Fluid Dynamics (CFD) has become indispensable in designing turbomachinery, enabling engineers to analyze complex flow phenomena and optimize components like fans and impellers. Modern CFD algorithms make it possible to run advanced transient analyses with sophisticated turbulence modeling, such as hybrid RANS/LES, but these demand substantial computational resources.

Optimization studies, like the one presented here for a centrifugal fan impeller, require running numerous analyses simultaneously to explore the design space effectively. This necessity drives the demand for a complete High-Performance Computing (HPC) system. The challenge, however, lies in the continuous, rapid evolution of hardware (new CPUs, hybrid CPU-GPU solvers) and methodologies (like Lattice Boltzmann), which constantly modify HPC requirements.

We’re shifting the focus from the complex “democratization of CFD” (simplifying software, which still requires deep CFD know-how for meshing, settings, and result interpretation) to the democratization of HPC. This means providing engineers access to the latest, most powerful hardware and software without requiring internal IT skills to manage a cluster. This is where a Cloud HPC solution like cloudhpc.cloud proves invaluable.


Optimization Steps: A Design of Experiment (DOE) Approach

To optimize the centrifugal fan impeller, a structured methodology known as Design of Experiment (DOE) was employed. DOE is a systematic procedure to investigate a system’s behavior by varying input factors (design shapes, velocities) to determine their relationship with the desired output (performance).

The optimization workflow involved several technical steps:

1. Parametric CAD Modeling

  • A parametric CAD model of the impeller was created.
  • The geometry was fully defined by 12 custom-controlled parameters.
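With 12 free parameters, the DOE points have to be spread efficiently over the design space. As a minimal sketch of how such a sampling could be generated, the snippet below builds a simple Latin hypercube over 12 normalized parameters (the normalization to [0, 1] and the stdlib-only implementation are assumptions for illustration; the article does not state which sampling scheme was used):

```python
import random

# Illustrative: 12 normalized shape parameters, each mapped to [0, 1].
# Real bounds would come from the parametric CAD model.
N_PARAMS = 12
N_DESIGNS = 107  # one CAD model per DOE point

def latin_hypercube(n_designs, n_params, seed=42):
    """Simple Latin hypercube sample: each parameter's range is split
    into n_designs strata, and each stratum is used exactly once."""
    rng = random.Random(seed)
    # For each parameter, a shuffled list of stratum indices.
    columns = []
    for _ in range(n_params):
        strata = list(range(n_designs))
        rng.shuffle(strata)
        columns.append(strata)
    samples = []
    for i in range(n_designs):
        # Pick a random point inside the assigned stratum of each parameter.
        point = [(columns[p][i] + rng.random()) / n_designs
                 for p in range(n_params)]
        samples.append(point)
    return samples

designs = latin_hypercube(N_DESIGNS, N_PARAMS)
print(len(designs), len(designs[0]))  # 107 designs x 12 parameters
```

Each row of `designs` would then be denormalized to the real parameter bounds and fed to the parametric CAD model to generate one impeller geometry.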

2. Mesh and CFD Setup

The CFD analysis was set up using OpenFOAM v9.

  • Meshing: Automatic meshing was performed using tools like snappyHexMesh/cfMesh+.
    • The setup included a boundary layer with 8 layers.
    • Wall refinement was set to ensure y+ < 5.
  • CFD Solver Settings:
    • Turbulence Model: The k-Omega SST (Shear Stress Transport) model was selected, along with Low-Reynolds wall functions for nut/k/omega. The k-Omega SST model is a two-equation Eddy-Viscosity RANS model known for its robustness and good performance in separating flows and near-wall regions, making it suitable for turbomachinery applications.
    • Convective Schemes: Bounded upwind schemes were used.
    • Rotational Model: MRF (Multi Reference Frame) was utilized for the rotational part of the fan.
    • Flow Assumption: Incompressible flow was assumed.
    • Convergence Criteria: 10⁻³ for pressure and 10⁻⁴ for all other variables.
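When hundreds of runs are launched unattended, the convergence criteria above need to be checked programmatically rather than by eye. A minimal sketch, assuming the final residual of each field has already been parsed out of the OpenFOAM solver log (the field names and sample values below are hypothetical):

```python
# Convergence thresholds from the solver setup:
# 1e-3 for pressure, 1e-4 for all other fields.
THRESHOLDS = {"p": 1e-3}
DEFAULT_THRESHOLD = 1e-4

def is_converged(final_residuals):
    """Check each field's final residual against its threshold.
    `final_residuals` maps field name -> final residual, and would
    normally be extracted from the solver log."""
    return all(
        res <= THRESHOLDS.get(field, DEFAULT_THRESHOLD)
        for field, res in final_residuals.items()
    )

# Hypothetical final residuals for one run:
sample = {"p": 8.2e-4, "Ux": 3.1e-5, "Uy": 2.7e-5, "Uz": 4.0e-5,
          "k": 6.5e-5, "omega": 9.9e-5}
print(is_converged(sample))  # True for these sample values
```

A run that fails this check can be flagged and excluded (or re-run) before its results enter the DOE database.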

3. Optimization Parameters (POST Processing)

Two primary parameters were defined to be minimized to achieve the target results for head (pressure) at two specific operating points (Point 1 and Point 2):


  • Pressure Difference (ΔP): The absolute sum of the difference between the fan’s computed head and the requested head at both points: ΔP = |H₁,CFD − H₁,req| + |H₂,CFD − H₂,req|

  • Absorbed Power Difference (ΔW): The difference in absorbed power between the two operating points: ΔW = |W₁ − W₂|

The absorbed power (W) was calculated using the formula W = Tω, where T is the torque and ω is the angular velocity.
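The two objectives can be evaluated directly from the integral CFD outputs of each run. A minimal sketch of that post-processing step, with purely illustrative torque, speed, and head values (not the article's actual DOE data):

```python
import math

def absorbed_power(torque, rpm):
    """W = T * omega, with omega converted from rpm to rad/s."""
    omega = rpm * 2 * math.pi / 60
    return torque * omega

def delta_p(head_computed, head_requested):
    """Absolute sum of the head error at the two operating points."""
    return sum(abs(c - r) for c, r in zip(head_computed, head_requested))

def delta_w(power_point1, power_point2):
    """Difference in absorbed power between the two operating points."""
    return abs(power_point1 - power_point2)

# Illustrative values only:
w1 = absorbed_power(torque=39.0, rpm=2900)   # operating point 1
w2 = absorbed_power(torque=55.0, rpm=2900)   # operating point 2
dp = delta_p(head_computed=[980.0, 1450.0], head_requested=[1000.0, 1500.0])
dw = delta_w(w1, w2)
print(round(dp, 1), round(w1), round(w2))
```

For each of the 107 designs, these two scalars are what the optimizer sees; the full flow field is only needed to produce them.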


The Critical Role of Cloud HPC in DOE

The sheer number of simulations required for a proper DOE makes a Cloud HPC service essential.

Massive Parallelization

The full DOE required running 214 CFD analyses (107 models, each with 2 operating conditions).

  • A single analysis took about 2 hours to compute on 48 cores (AMD EPYC ROME 3.1 GHz).
  • In a serial scenario, the 214 analyses would take 214 × 2 = 428 hours, or nearly 18 days, on the same hardware.
  • By using the cloudHPC service, which allows running multiple analyses simultaneously (up to 40 jobs in this case), the total computational time was reduced to approximately 10 hours, a dramatic cut in the total design analysis time.

Dedicated Workflow and Accessibility

The process was streamlined using a dedicated workflow called turboApp.

  • Automation: turboApp generated an automatic workflow from the input CAD file (STEP/STL) to a CSV file containing the integral CFD results.
  • Accessibility: The cloudHPC service allows users to manage the cluster from their PC, eliminating the need to install anything locally. Users benefit from up-to-date hardware and software without the IT overhead.
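Once the workflow delivers a CSV of integral results, selecting a candidate design is a one-liner. A minimal sketch, with hypothetical column names and sample data (the actual turboApp CSV layout may differ), using a simple sum of the two objectives as an illustrative scalarization:

```python
import csv
import io

# Hypothetical CSV layout; real column names come from the workflow.
sample_csv = """design_id,delta_p,delta_w
001,120.5,3.2
002,45.1,1.8
003,80.0,0.9
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Rank designs by the sum of the two objectives to be minimized.
# (In practice the objectives would be normalized or weighted,
# since head error and power difference have different units.)
best = min(rows, key=lambda r: float(r["delta_p"]) + float(r["delta_w"]))
print(best["design_id"])  # design 002 for this sample data
```

In a real study the weighting between ΔP and ΔW is a design decision; the point here is only that the DOE reduces design selection to a query over one results table.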

Optimization Results and Cost Efficiency

The optimization proved highly successful:

  • Performance Gain: The DOE application led to an impeller design whose power consumption at Point 1 decreased from 13.8 kW to 11.8 kW, and at Point 2 from 21.5 kW to 16.8 kW. This substantial reduction allows the motor to be downsized by at least one frame size while keeping performance within a 10% pressure tolerance.
  • Cost: Despite the scale of 214 simulations, each using 48 cores for about 2 hours (roughly 20,500 vCPU hours in total), the computational cost remained modest. At a rate of 0.05 € per vCPU hour, with an automatic 15% high-use discount, the total cost was approximately 870 €.

Cloud HPC delivered the effective availability of hardware resources necessary to perform the DOE, making complex turbomachinery optimization studies practical and cost-efficient.


CloudHPC is an HPC provider for running engineering simulations in the cloud. CloudHPC provides from 1 to 224 vCPUs per process across several HPC infrastructure configurations, both multi-thread and multi-core. The current software range includes several CAE, CFD, FEA, and FEM packages, among them OpenFOAM, FDS, Blender, and several others.

New users benefit from a FREE trial of 300 vCPU/Hours to test the platform and all its features, and to verify whether it is suitable for their needs.


Categories: cloudHPC