
FDS meshing using MPI_PROCESS: a little guide


One of the main challenges when running a complex FDS analysis is the simulation time. Long run times usually result from a large amount of computation combined with limited allocated computational capacity.

The size of the computational domain, and therefore the number of cells in the meshes, increases the simulation time. FDS allows you to speed up the calculation and distribute the work across CPUs through parallel processing strategies.

To learn more about running FDS in parallel using MPI (Message Passing Interface) and how to balance the workload, check out the dedicated post “How to reach good scalability in FDS”. This post, instead, explains how to allocate each FDS mesh to its MPI process.

Divide the Domain Evenly

In order to run FDS in parallel using MPI processes, the first step is to subdivide the computational domain into multiple meshes. We explored what multiple meshes are and how to align them in the dedicated post “FDS Mesh Resolution: How to calculate FDS mesh size”.

One way to optimize the simulation time is to allocate the cells, and the calculations they involve, evenly across the available CPUs, so that the workload is evenly distributed. Leaving some CPUs overworked while others have already completed their calculations would be a misuse of the available computational capacity.

To use MPI efficiently, divide the domain evenly into multiple meshes, balancing the number of cells in the FDS meshes allocated to each computer. In other words, assign each MPI process a comparable number of cells, so that each computer works for roughly the same amount of time and the workload is evenly distributed. Otherwise, most of the processes will sit idle waiting for the one with the largest number of cells to finish each time step.
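As a minimal sketch of a balanced split, assume a hypothetical 12 m × 4 m × 3 m domain meshed with 10 cm cells (the mesh IDs, extents and IJK values below are illustrative, not taken from the original post). Dividing it into two meshes of 72,000 cells each keeps the cell count per mesh the same:

&MESH ID='mesh1', IJK=60,40,30, XB=0.0,6.0,0.0,4.0,0.0,3.0 /
&MESH ID='mesh2', IJK=60,40,30, XB=6.0,12.0,0.0,4.0,0.0,3.0 /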

Assign FDS Meshes to each MPI Process

To use MPI processes in FDS, you need to add the MPI_PROCESS parameter to the MESH line. For example, if you have six meshes, each with a similar number of cells, you can assign each mesh to a different MPI process, and therefore to a different CPU, as follows:

&MESH ID='mesh1', IJK=..., XB=..., MPI_PROCESS=0 /
&MESH ID='mesh2', IJK=..., XB=..., MPI_PROCESS=1 /
&MESH ID='mesh3', IJK=..., XB=..., MPI_PROCESS=2 /
&MESH ID='mesh4', IJK=..., XB=..., MPI_PROCESS=3 /
&MESH ID='mesh5', IJK=..., XB=..., MPI_PROCESS=4 /
&MESH ID='mesh6', IJK=..., XB=..., MPI_PROCESS=5 /

Therefore:

Mesh 1 is assigned to MPI Process 0
Mesh 2 is assigned to MPI Process 1
Mesh 3 is assigned to MPI Process 2
Mesh 4 is assigned to MPI Process 3
Mesh 5 is assigned to MPI Process 4
Mesh 6 is assigned to MPI Process 5
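
With this setup, the job is launched with six MPI processes. As a usage sketch, assuming the input file is named job.fds and using the standard mpiexec launcher described in the FDS User’s Guide:

mpiexec -n 6 fds job.fds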

Assigning Multiple FDS Meshes to Each MPI Process

Usually, in an MPI calculation, each mesh is assigned to its own process, and each process to its own processor. However, you can assign multiple meshes to a single MPI process, and therefore to a single processor:

&MESH ID='mesh1', IJK=..., XB=..., MPI_PROCESS=0 /
&MESH ID='mesh2', IJK=..., XB=..., MPI_PROCESS=0 /
&MESH ID='mesh3', IJK=..., XB=..., MPI_PROCESS=1 /
&MESH ID='mesh4', IJK=..., XB=..., MPI_PROCESS=1 /
&MESH ID='mesh5', IJK=..., XB=..., MPI_PROCESS=2 /
&MESH ID='mesh6', IJK=..., XB=..., MPI_PROCESS=2 /

Therefore:

Meshes 1 and 2 are assigned to MPI Process 0
Meshes 3 and 4 are assigned to MPI Process 1
Meshes 5 and 6 are assigned to MPI Process 2
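
In this case the six meshes are handled by only three MPI processes, so, again assuming an input file named job.fds, the job would be launched with three processes rather than six:

mpiexec -n 3 fds job.fds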

Assigning multiple meshes to the same process can be useful to save simulation time: when two meshes are handled by the same process, there is no need to transfer information from one processor to the other.

In order to have enough computational capacity at your disposal to run FDS in parallel and allocate the workload to different processors, it is common to rely on a cloud computing service.

Thanks to CLOUD HPC you can use the power of a cluster to run your engineering analyses. The service allows you to select, directly from the web app, the amount of CPU and RAM needed to run and speed up your simulations. If interested, you can register here and get 300 free vCPU/hrs.

References

K. McGrattan et al., “Fire Dynamics Simulator User’s Guide,” Sixth Edition, NIST Special Publication 1019, 2013.



