Pushing the Boundaries: CloudHPC’s Journey at the OpenFOAM Workshop 2025 HPC Challenge in Vienna!

Published by rupole1185

The world of Computational Fluid Dynamics (CFD) is constantly pushing the limits of technology, and at its heart lies High-Performance Computing (HPC). To truly innovate and solve complex engineering problems, we need robust, efficient, and scalable computing infrastructure.

That’s why the HPC Challenge at the OpenFOAM Workshop 2025 in Vienna was such an exciting event, and why we at CloudHPC were thrilled to be a part of it!

The Challenge: OpenFOAM Unleashed on HPC

The HPC Challenge is designed to push the boundaries of OpenFOAM on cutting-edge HPC platforms. Participants compete across various tracks, showcasing innovations in software optimization, cloud deployment, and crucially, hardware performance.

This year’s challenge centered around a demanding test case: a large-scale DrivAer vehicle simulation. With a massive 180 million cell mesh and a transient DES (Detached Eddy Simulation) setup, this wasn’t just a benchmark – it was a real-world test for systems under extreme load. The primary metric? Wall-clock time per timestep/iteration per core, aiming for the lowest possible value to indicate the most efficient hardware.
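
As a rough illustration of how such a metric can be extracted, here is a minimal Python sketch (not the official challenge script) that reads the "ExecutionTime = … s" lines an OpenFOAM solver prints once per time step and averages the wall-clock time per step; the per-core normalization (the `cores` divisor) is our assumption here, and the exact definition follows the challenge's rules.

```python
# Minimal sketch: estimate wall-clock time per timestep from a standard
# OpenFOAM solver log, then normalize by a user-supplied core count.
# The "ExecutionTime = X s" lines are printed by OpenFOAM solvers once per
# time step; the per-core normalization is an assumption and should follow
# the challenge's exact metric definition.
import re
import sys

EXEC_RE = re.compile(r"ExecutionTime\s*=\s*([\d.eE+\-]+)\s*s")

def time_per_step_per_core(log_path: str, cores: int) -> float:
    with open(log_path) as log:
        times = [float(m.group(1)) for m in map(EXEC_RE.search, log) if m]
    if len(times) < 2:
        raise ValueError("need at least two time steps in the log")
    # Differences between consecutive cumulative ExecutionTime values give
    # the wall-clock seconds spent on each individual time step.
    deltas = [b - a for a, b in zip(times, times[1:])]
    return (sum(deltas) / len(deltas)) / cores

if __name__ == "__main__":
    # Usage: python benchmark_metric.py log.pimpleFoam 1024
    print(f"{time_per_step_per_core(sys.argv[1], int(sys.argv[2])):.4g} s/step/core")
```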

Here’s a textual representation of the data for clarity:

Wall-clock time per timestep/iteration per core [s] (Lower is Better):

  • AMD EPYC™ 7B13 3.5 GHz [Milan]: ~0.32 s
  • AMD EPYC™ 9B14 3.7 GHz [Genoa]: ~0.29 s
  • Intel Xeon Platinum 8581C Processor 4.0 GHz: ~0.165 s
  • AMD EPYC™ 9B45 4.1 GHz [Turin]: ~0.19 s
  • AMD EPYC™ 7B13 3.5 GHz [scotch] [Milan]: ~0.345 s (likely a different configuration of the same CPU, e.g. OpenFOAM’s Scotch domain decomposition, affecting performance)
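
To make the comparison concrete, the short snippet below recomputes each result relative to the fastest entry, using the approximate values read off the chart above (the inputs are themselves approximate, so treat the ratios as indicative only).

```python
# Relative comparison of the approximate per-core times listed above.
times = {
    "AMD EPYC 7B13 (Milan)":         0.32,
    "AMD EPYC 9B14 (Genoa)":         0.29,
    "Intel Xeon Platinum 8581C":     0.165,
    "AMD EPYC 9B45 (Turin)":         0.19,
    "AMD EPYC 7B13 (Milan, scotch)": 0.345,
}
fastest = min(times.values())
for cpu, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{cpu:<32} {t:.3f} s  ({t / fastest:.2f}x the fastest)")
```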

CloudHPC’s Hardware Track Deep Dive

At CloudHPC, our focus was squarely on the Hardware Track. We believe that maximizing OpenFOAM’s potential starts with optimizing the underlying infrastructure. We brought a range of top-tier CPU architectures to the test, pushing them to their limits to identify the most efficient compute resources for CFD workloads.

Our participation yielded fascinating insights into how different CPU generations and vendors perform on an OpenFOAM DrivAer simulation:

The Key Takeaways from Our Benchmarks:

  1. Intel Makes a Strong Showing: The Intel Xeon Platinum 8581C Processor 4.0 GHz emerged as the clear leader in this benchmark, achieving the lowest wall-clock time per timestep/iteration per core at approximately 0.165 seconds. This indicates exceptional per-core efficiency for OpenFOAM’s demanding computations.
  2. AMD EPYC “Turin” Impresses: Hot on Intel’s heels was the AMD EPYC™ 9B45 4.1 GHz [Turin], delivering a highly competitive performance of around 0.19 seconds. This demonstrates that AMD’s latest generation of EPYC processors is a formidable contender, highly optimized for complex CFD tasks.
  3. Generational Leaps Are Real: We observed significant performance improvements across AMD’s EPYC generations. While the older EPYC 7B13 [Milan] series showed times around 0.32-0.345 seconds, the newer EPYC 9B14 [Genoa] reduced this to ~0.29 seconds, and the cutting-edge EPYC 9B45 [Turin] cut the Milan generation’s time by roughly 40%, bringing it within striking distance of the top-performing Intel chip. This truly highlights the rapid pace of innovation in server CPUs.
  4. Configuration Matters (the “scotch” anomaly): The slightly higher time for the “AMD EPYC™ 7B13 3.5 GHz [scotch] [Milan]” compared to the regular Milan entry underscores that CPU architecture is only one piece of the puzzle. System configuration, memory speed, interconnects, and specific software optimizations can significantly impact real-world performance, even for the same CPU model.

What This Means for Your CFD Simulations

These results provide invaluable data for anyone running OpenFOAM simulations at scale:

  • Hardware Choice is Critical: Selecting the right CPU infrastructure can roughly halve your per-core time per timestep, and with it your simulation time and cost, especially for large, transient cases like the DrivAer.
  • Newer Generations Deliver: Investing in the latest CPU architectures from both Intel and AMD offers substantial performance gains, justifying the upgrade for heavy CFD users.
  • It’s Not Just About Clock Speed: While clock speed plays a role, underlying architectural improvements, cache design, and memory bandwidth are equally (if not more) important for OpenFOAM’s memory-intensive operations; the quick check after this list makes the point with the numbers above.
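
As a back-of-the-envelope check on the clock-speed point, the sketch below multiplies each approximate per-core time by the CPU’s nominal clock from the list above. If frequency alone explained the results, this “cycles per step” proxy would be roughly constant across CPUs; instead it varies by almost a factor of two, pointing to architecture, cache, and memory bandwidth as the bigger levers.

```python
# Per-core time multiplied by nominal clock: a rough "cycles per timestep"
# proxy. Values are the approximate figures quoted earlier in this post.
data = {
    "AMD EPYC 7B13 (Milan)":     (3.5, 0.32),
    "AMD EPYC 9B14 (Genoa)":     (3.7, 0.29),
    "Intel Xeon Platinum 8581C": (4.0, 0.165),
    "AMD EPYC 9B45 (Turin)":     (4.1, 0.19),
}
for cpu, (ghz, t) in sorted(data.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    # Lower product = fewer clock cycles needed per time step.
    print(f"{cpu:<28} {ghz * t:.2f}")
```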

We are incredibly proud of our performance in the HPC Challenge and the insights gained. At CloudHPC, we are committed to providing the most efficient and powerful cloud infrastructure for your CFD needs. Our participation in events like the OpenFOAM Workshop HPC Challenge allows us to continuously benchmark, optimize, and offer you the best possible computing environment for your simulations.

Did you participate in the challenge? What were your findings? We’d love to hear your thoughts in the comments below!

To learn more about CPU infrastructure capabilities and features, read: Beyond Raw Speed: Why Your CPU’s “Memory Neighborhood” (Especially LLC) is Crucial for OpenFOAM and FDS


CloudHPC is an HPC provider for running engineering simulations in the cloud. CloudHPC provides from 1 to 224 vCPUs per process across several HPC infrastructure configurations, both multi-thread and multi-core. The current software range includes several CAE, CFD, FEA, and FEM packages, among which OpenFOAM, FDS, Blender, and several others.

New users benefit from a FREE trial of 300 vCPU/Hours to test the platform, explore all its features, and verify whether it suits their needs.


Categories: OpenFOAM