Massively Parallel Particle Hydrodynamics for Engineering and Astrophysics

Lead Research Organisation: Durham University
Department Name: Physics


SPH (smoothed particle hydrodynamics), and Lagrangian approaches to hydrodynamics in general, offer a powerful way of tackling hydrodynamics problems. In this scheme, the fluid is represented by a large number of particles moving with the flow. Because the scheme does not require a predefined grid, it is very well suited to tracking flows with moving boundaries, particularly flows with free surfaces, and to problems that involve physically active elements or a large dynamic range. The range of applications of the method is growing rapidly, and it is being adopted by a growing number of commercial companies, including Airbus, Unilever, Shell, EDF, Michelin and Renault.
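To make the particle picture concrete, here is a minimal sketch (illustrative only, not taken from either code discussed below) of the core SPH operation: estimating the density at each particle by summing the masses of its neighbours, weighted by a smoothing kernel. The 1D cubic spline kernel and the uniform particle arrangement are standard textbook choices.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """1D cubic spline (M4) kernel with smoothing length h and support 2h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    r = np.abs(positions[:, None] - positions[None, :])  # pairwise separations
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)
```

On a uniform particle arrangement of spacing 1 and unit masses, the estimate recovers a density close to 1 away from the edges, which is the typical accuracy of the kernel sum.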

The widespread use of SPH, and its potential for adoption across a wide range of science domains, make it a priority use case for the Excalibur project. Massively parallel simulations with billions to hundreds of billions of particles have the potential to revolutionise our understanding of the Universe and will empower engineering applications of unprecedented scale, ranging from the end-to-end simulation of transients (such as a bird strike) in jet engines to the simulation of tsunami waves over-running a series of defensive walls.

The working group will identify a path to meeting the exascale computing challenge. The group has expertise across both Engineering and Astrophysics, allowing us to develop an approach that satisfies the needs of a wide community. The group will start from two recent codes that already highlight the key issues:

- SWIFT (SPH with Interdependent Fine-grained Tasking) implements a cutting-edge approach to task-based parallelism. Breaking the problem into a series of inter-dependent tasks allows great flexibility in scheduling and allows communication to be entirely overlapped with computation. The code uses a timestep hierarchy to focus computational effort where it is most needed.

- DualSPHysics draws its speed from the effective use of GPU accelerators to execute SPH operations on large groups of identical particles, allowing the code to benefit from exceptionally wide parallel execution. The challenge is to connect multiple GPUs efficiently across large numbers of inter-connected computing nodes.
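The task-based style used by SWIFT can be illustrated with a toy dependency graph. The task names and the scheduler below are hypothetical stand-ins, not SWIFT's actual implementation; the point is that once dependencies are explicit, communication tasks (the send/recv pair) can be interleaved freely with local computation.

```python
from collections import deque

# Hypothetical task graph for one step: density must precede the force tasks,
# which precede the kick; the halo send/recv pair models communication.
deps = {
    "density": [],
    "send_halo": ["density"],
    "recv_halo": [],
    "local_force": ["density"],
    "halo_force": ["recv_halo", "density"],
    "kick": ["local_force", "halo_force"],
}

def schedule(deps):
    """Topologically order tasks so each runs only after its dependencies."""
    done, order = set(), []
    ready = deque(t for t, d in deps.items() if not d)
    while ready:
        t = ready.popleft()
        done.add(t)
        order.append(t)
        for u, d in deps.items():
            if u not in done and u not in ready and all(x in done for x in d):
                ready.append(u)
    return order
```

A real task scheduler would dispatch ready tasks to many threads at once rather than produce a single serial order, but the dependency bookkeeping is the same.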
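The data-parallel style that DualSPHysics exploits on GPUs can be sketched with NumPy standing in for a GPU kernel: each line below is one operation applied to every particle at once, just as a single kernel launch updates thousands of particles in lock-step. The leapfrog integrator shown is a common SPH choice, not code from DualSPHysics itself.

```python
import numpy as np

def kick_drift_kick(x, v, accel, dt):
    """One leapfrog step for all particles at once; each line is a single
    data-parallel array operation, mirroring one GPU kernel launch per phase."""
    v_half = v + 0.5 * dt * accel(x)          # kick
    x_new = x + dt * v_half                   # drift
    v_new = v_half + 0.5 * dt * accel(x_new)  # kick
    return x_new, v_new
```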

The working group will build on these codes to identify the optimal approach to massively parallel execution on exascale systems. The project will benefit from close connections to the Excalibur Hardware Pilot working group in Durham, driving the co-design of code and hardware. The particular challenges that we will address are:

- Optimal algorithms for exascale performance. In particular, we will address the best approaches to adaptive time-stepping with out-of-order integration, and to adaptive domain decomposition. The first allows different spatial regions to be integrated forward in time optimally; the second allows those regions to be optimally distributed over the hardware.

- Modularisation and Separation of Concerns. Future codes need to be flexible and modularised, so that a separation can be achieved between integration routines, task scheduling and physics modules. This will make the code future-proof and easy to adapt to new science domain requirements and computing hardware.

- CPU/GPU performance optimisation. Next-generation hardware will require specific (and possibly novel) techniques to be developed to optimally advance particles in the SPH scheme. We will build on the programming expertise gained with DualSPHysics to allow efficient GPU use across multiple nodes.

- Communication performance optimisation. Separated computational regions need to exchange information at their boundaries. This can be done asynchronously, so that the latency of communication does not slow computation. While this has been demonstrated on current systems, the scale of Excalibur will overload current communication subsystems, and a new solution is needed.
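Adaptive time-stepping is often organised as a block (power-of-two) timestep hierarchy; the binning rule below is a common illustrative scheme, not the specific algorithm of either code. Each particle's desired timestep is rounded down to dt_max / 2**k, and on a given integer step only the bins whose period divides the step number are advanced, so computational effort concentrates on the particles that need short timesteps.

```python
import math

def timestep_bin(dt, dt_max):
    """Round a desired timestep down to dt_max / 2**k; return the bin index k."""
    return max(0, math.ceil(math.log2(dt_max / dt)))

def active_bins(step, max_bin):
    """Bins advanced on integer step number `step`: bin k fires every
    2**(max_bin - k) steps, so the deepest bin is active on every step."""
    return [k for k in range(max_bin + 1) if step % 2 ** (max_bin - k) == 0]
```

For example, a particle wanting dt = 0.3 with dt_max = 1.0 lands in bin 2 and is advanced with dt = 0.25, while particles in bin 0 are only touched once every 2**max_bin steps.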
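The asynchronous boundary exchange can be sketched with a background thread standing in for a non-blocking send/receive pair (the function names here are hypothetical): the exchange is posted first, interior particles are updated while the message is in flight, and only the boundary update waits for the data to arrive.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def exchange_halo(neighbour_halo):
    """Stand-in for a non-blocking exchange of boundary ("halo") particles."""
    time.sleep(0.01)  # simulated network latency
    return neighbour_halo

def step(interior, neighbour_halo):
    """Overlap the halo exchange with the interior update."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        request = pool.submit(exchange_halo, neighbour_halo)  # post exchange
        interior = [x + 1 for x in interior]  # update interior while in flight
        halo = request.result()               # wait only when the halo is needed
        boundary = [x + 1 for x in halo]      # now update the boundary region
    return interior, boundary
```

In a production code the same pattern is expressed with non-blocking MPI requests rather than threads; the ordering (post, compute, wait) is what hides the communication cost.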

Planned Impact

The impact of this work will be the ability to model complex problems that involve transient, multi-physics flows. The code will solve problems more efficiently, benefiting all HPC systems as well as enabling cutting-edge research at the exascale. Some examples of the commercial problems that could be tackled with the scale and efficiency of computing made possible by this project:
- Interaction of fluid flow with flexible bodies, such as the dispersal of water under a rotating tyre.
- The effects of landslides, or under-sea earthquakes, and their interaction with water bodies.
- Modelling of sea-wave and flood defence systems.
- Atomization of fuel and the end-to-end modelling of engines.
- The behaviour and effect of solid bodies floating in water.

Currently, the realism that can be achieved is limited by the efficiency of runs and the number of particles that can be handled. The project outcome will be a proof-of-concept path to exascale simulation. Our aim is to enable simulations with a factor of 1000 more particles (the step from petascale to exascale machines). This will revolutionise the utility of such simulations and allow a substantial increase in complexity. For example, it will become possible to simulate complete coastline protection systems, including the interaction between defences in adjacent areas. The commercial and quality-of-life benefits of such simulations cannot be overestimated: the mitigation of events such as tsunami waves cannot be experimentally tested at full scale or complexity.

Dissemination of the results will take place through our existing networks, including the SPHERIC simulation community (via Rogers' lead role). DualSPHysics already has a large community of users (>4000) and is widely used in engineering. We will leverage this network to provide input into the working group. Dissemination outside academia will be greatly assisted by the high-profile results obtained in astrophysics. High-impact simulations in cosmology and planetary science will be used to draw attention to the possibilities opened up by the simulation code. SWIFT has a wide-ranging academic network and is already prominent at a wide range of research meetings.

Frenk will lead dissemination activities, leveraging his leadership role in the Royal Society. Near the start of the programme, we will organise a cross-community "new ideas" workshop (promoted through the SPH special interest group; Bower and Rogers lead) to ensure the work packages fit the community's needs. At the end of Phase I, we will organise a "showcase" workshop (with the support of DiRAC and promoted through the Royal Society) to demonstrate the capabilities of the emergent code. As well as these very tangible outputs, the aim of the working group is to build a community of domain scientists and software engineers with a deep knowledge of the best approaches to exascale computing.

We are working with the following hardware vendors as project partners: Arm, IBM, Intel and Nvidia. Their involvement in the project will allow the working group early access to novel hardware intended for use in exascale machines. The vendors have agreed to provide engineering support to assist us in using the hardware optimally.


Bower R (2022) Massively Parallel Particle Hydrodynamics at Exascale in Computing in Science & Engineering

Description This award has identified key bottlenecks in the inter-node communication layer of the code. Our work has subsequently allowed us to open up greater communication bandwidth, and thus to speed up the code and improve its scaling on exascale computing systems.
Exploitation Route The methods used by our code can be generally applied to other software that uses the task-based approach to parallelism. We are in the process of applying for funding to do this.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software)

Description Exascale Ambition 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact A workshop series looking at the types of simulations that could be undertaken using Exascale computing systems.
Year(s) Of Engagement Activity 2021