Scalability performance: code_aster vs calculiX

Published by rupole1185

A new benchmark to evaluate the performance of two of the most common FEM/FEA solvers is proposed. On the cloudHPC platform it is possible to execute both calculiX and code_aster simulations with the most advanced MPI compilation. Neither of the two solvers is, in fact, distributed with multi-core (MPI) support by default, so the cloudHPC team compiled the solvers in order to make this capability available to all users of the platform.

Once the MPI versions are available, the next question is: is it worthwhile to use them? How much faster is the MPI version compared to the default multi-threaded one? To answer this question, a simulation using 3D elements in a static non-linear approach is reported here, so that users can understand what is going on.

The chosen case is a simple chassis with constraints and a load that is static in time. The mesh consists of approximately 773,000 nodes and 650,000 tetrahedral elements. First-order 3D elements are used, and the analysis is a static analysis with 10 time steps (the loads are applied progressively across the steps). For code_aster the MUMPS solver is used, while the calculiX runs rely on PARDISO.

The simulations were performed in different configurations: the instance is always a 32 vCPU machine with standard RAM, while the input files were configured differently in order to use either threads or Cores (MPI processes).
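As a rough illustration (the executable names, input deck name and launch commands below are assumptions, not the actual cloudHPC run scripts), the difference between the two set-ups boils down to how many processes are started and how many threads each process uses:

import os
import subprocess

# Thread-only set-up: a single process using all 32 vCPUs as threads
# (the PARDISO solver picks up the thread count from OMP_NUM_THREADS).
env = dict(os.environ, OMP_NUM_THREADS="32")
subprocess.run(["ccx", "-i", "chassis"], env=env, check=True)

# MPI set-up: 16 processes (Cores) with 2 threads each.
env = dict(os.environ, OMP_NUM_THREADS="2")
subprocess.run(["mpirun", "-np", "16", "ccx_mpi", "-i", "chassis"],
               env=env, check=True)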

SCRIPT                       vCPU   THREADS per Core   Cores   RAM [Gb/vCPU]   vCPU [h]
calculiX-2.19-PARDISO         32          32             1           4           34.17
calculiX-2.18-PARDISO-MPI     32           2            16           4           25.73
codeAster-16.4_mpi            32          32             1           4          444.15
codeAster-16.4_mpi            32           8             4           4          139.79

Scalability test of code_aster and calculiX performed with different numbers of Cores and Threads

What can be seen is that, when using only threads, the performance of calculiX is remarkable compared to that of code_aster: the total simulation time is much lower (roughly 90% shorter). On the other hand, code_aster showed better scalability: when the number of Cores increased from 1 to 4, the simulation time dropped by a factor of about 3.2, while in calculiX (going from 1 to 16 Cores) the speed-up was only about 1.3.
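These factors can be recomputed directly from the vCPU [h] column of the table above, for example with a quick Python check:

# Speed-up factors recomputed from the vCPU [h] column of the table above.
runs = {
    "calculiX threads (32x1)":   34.17,
    "calculiX MPI (2x16)":       25.73,
    "code_aster threads (32x1)": 444.15,
    "code_aster MPI (8x4)":      139.79,
}
print("code_aster, 1 -> 4 Cores :",
      round(runs["code_aster threads (32x1)"] / runs["code_aster MPI (8x4)"], 2))   # ~3.2
print("calculiX,   1 -> 16 Cores:",
      round(runs["calculiX threads (32x1)"] / runs["calculiX MPI (2x16)"], 2))      # ~1.3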

Overall, it seems that when running simulations without MPI scalability (the default solvers) calculiX is the best choice, while for FEA analyses run with MPI code_aster may become more competitive.

We also want to point out another feature of code_aster that a scalability test cannot capture: code_aster input files are Python scripts, which lets users write real loops, define variables, read files, etc. CalculiX input files, on the other hand, are plain text files with no direct programming capability, which makes workflow automation a bit more cumbersome.
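As a minimal sketch (the mesh group names 'support' and 'load_face' and the material values are hypothetical, and the keywords are indicative rather than a complete deck), a code_aster command file can sweep several load levels with an ordinary Python loop:

DEBUT()

mesh = LIRE_MAILLAGE(FORMAT='MED')

model = AFFE_MODELE(MAILLAGE=mesh,
                    AFFE=_F(TOUT='OUI',
                            PHENOMENE='MECANIQUE',
                            MODELISATION='3D'))

steel = DEFI_MATERIAU(ELAS=_F(E=210000.0, NU=0.3))
mater = AFFE_MATERIAU(MAILLAGE=mesh, AFFE=_F(TOUT='OUI', MATER=steel))

# Boundary conditions on a (hypothetical) node group called 'support'.
fix = AFFE_CHAR_MECA(MODELE=model,
                     DDL_IMPO=_F(GROUP_NO='support', DX=0.0, DY=0.0, DZ=0.0))

# Plain Python loop: run the same static analysis for several load levels
# without duplicating the command file.
for level in (1000.0, 2000.0, 4000.0):
    load = AFFE_CHAR_MECA(MODELE=model,
                          FORCE_NODALE=_F(GROUP_NO='load_face', FZ=-level))
    resu = MECA_STATIQUE(MODELE=model, CHAM_MATER=mater,
                         EXCIT=(_F(CHARGE=fix), _F(CHARGE=load)))

FIN()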

Further scalability tests would be required to verify the code_aster speed-up up to 16 Cores, matching the number of Cores used by calculiX, and to try different algorithms such as the newly introduced PETSc solver available in code_aster 17.
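For reference, the linear solver is selected through the SOLVEUR keyword of the analysis command. Continuing the sketch above (the variables and the list/ramp definitions are illustrative, and the available options depend on the version), a non-linear static run with 10 progressive load steps switching between MUMPS and PETSc would look roughly like:

# Load ramped progressively over 10 steps, as in the benchmark.
times = DEFI_LIST_REEL(DEBUT=0.0, INTERVALLE=_F(JUSQU_A=1.0, NOMBRE=10))
ramp = DEFI_FONCTION(NOM_PARA='INST', VALE=(0.0, 0.0, 1.0, 1.0))

resu = STAT_NON_LINE(MODELE=model, CHAM_MATER=mater,
                     EXCIT=(_F(CHARGE=fix), _F(CHARGE=load, FONC_MULT=ramp)),
                     INCREMENT=_F(LIST_INST=times),
                     SOLVEUR=_F(METHODE='MUMPS'))  # or METHODE='PETSC'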


CloudHPC is an HPC provider for running engineering simulations on the cloud. CloudHPC provides from 1 to 224 vCPUs per process in several configurations of HPC infrastructure, both multi-thread and multi-core. The current software range includes several CAE, CFD, FEA and FEM packages, among which OpenFOAM, FDS, Blender and several others.

New users benefit from a FREE trial of 300 vCPU/Hours to be used on the platform, in order to test it, try all its features and verify whether it suits their needs.


