SPH With Inter-dependent Fine-grained Tasking

Public Information

What is SWIFT?

SWIFT is a hydrodynamics and gravity code for astrophysics and cosmology. What does that even mean? It is a computer program, designed to run on supercomputers, that simulates the forces acting on matter due to two main effects: gravity and hydrodynamics (the forces that arise in fluids, such as pressure and viscosity). The creation and evolution of stars and black holes is also modelled, together with the effects they have on their surroundings. This turns out to be quite a complicated problem, as we can't build computers large enough to simulate everything down to the level of individual atoms. This means we need to re-think the equations that describe the matter components and how they interact with each other. In practice, we must solve these equations numerically, which requires a lot of computing power and fast computer code.

We use SWIFT to run simulations of astrophysical objects, such as planets, galaxies, or even the whole universe. We do this to test theories about what the universe is made of and how it evolved from the Big Bang up to the present day!

Dwarf galaxies orbiting a galaxy similar to our own Milky Way.

A giant impact onto the young planet Uranus.

Why create SWIFT?

We created SWIFT for a number of reasons. The primary one is that we want to be able to simulate a whole universe! This has been done successfully before (see the EAGLE Project for more details), but that simulation used software that is not tailored to the newest supercomputers, and it took almost 50 days on a very large computer to complete. SWIFT aims to remedy that by parallelising the problem in a different way, by using better algorithms, and by having a more modular structure than other codes, making it easier for users to pick and choose which physical models they want to include in their simulations. This also lets us study very different topics, such as the giant impacts of planets colliding in the early solar system.

The way that supercomputers are built is not by having one huge super-fast 'computer', but rather by having lots of regular computers (only a tiny bit better than what is available at home!) that are connected together by high-speed networks. Therefore, the way to speed up your code might not necessarily be to make it 'run faster' on a single machine, but rather to enable those machines to talk to each other in a more efficient way. This is how SWIFT is different from other codes that are used in astrophysics for a similar purpose: the focus is on distributing the work to be done (the equations to be solved) in the best possible way across all the small computers that make up a supercomputer.

Traditionally, each 'node' (computer) in the 'cluster' (supercomputer) runs the exact same code at the exact same time, and at the end of each part of the problem they all talk to each other and exchange information. SWIFT does this a little differently: each node works on different tasks from the others, as and when those tasks need to be completed. SWIFT also makes the nodes communicate with each other all the time, not only at fixed points, allowing for much more flexibility. This cuts down on the time a node spends sitting and waiting for work, which is just wasted time, electricity, and ultimately money!

One other hardware development of the last decade is the appearance of so-called vector instructions. These allow a given compute core to process not just one number at a time (as in the past) but up to 16 (or even more on some machines!) in parallel. This means that a compute core can solve the equations for, say, 16 stars at a time rather than just one. However, exploiting this capability is hard and requires writing very detailed code. This is rarely done in other codes, but our extra effort pays off: SWIFT can solve the same equations as other software in significantly less time!
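SWIFT's actual vector code is written with hand-tuned C SIMD intrinsics, but the underlying idea of data-parallelism can be loosely illustrated in NumPy, which also processes whole arrays with vectorised machine code. The function names and numbers here are purely illustrative, not taken from SWIFT:

```python
import numpy as np

# One-at-a-time processing: the core handles a single star per step.
def kinetic_energy_scalar(masses, velocities):
    total = 0.0
    for m, v in zip(masses, velocities):
        total += 0.5 * m * v * v
    return total

# Data-parallel processing: the same arithmetic applied to whole arrays
# at once, analogous to a vector instruction acting on many numbers.
def kinetic_energy_vector(masses, velocities):
    return float(np.sum(0.5 * masses * velocities**2))

m = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 2.0, 2.0])
assert kinetic_energy_scalar(m, v) == kinetic_energy_vector(m, v)
```

Both versions compute the same answer; the vectorised one simply expresses the work in a form the hardware can execute many elements at a time.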

What is SPH?

Smoothed Particle Hydrodynamics (SPH) is a numerical method for approximating the forces between fluid elements (gases or liquids). Let's say that we want to simulate some water and a wave within it. Even a single litre of water contains roughly 3 x 10^25 molecules. To store that much data we would need a computer with about 100 trillion times as much storage space as all of the data on the internet. It's clear that we need a more efficient way of simulating this water if we are to have any hope!

It turns out that we can represent the water by many fewer particles if we can smooth over the gaps between them efficiently. Smoothed Particle Hydrodynamics is the technique that we use to do that.
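As a rough sketch of the idea (not SWIFT's actual implementation), here is a minimal 1D SPH density estimate using the standard cubic-spline kernel. The particle positions, masses, and smoothing length are made up for illustration:

```python
import numpy as np

def w_cubic_spline_1d(r, h):
    """Standard 1D cubic-spline SPH kernel (normalisation 2/(3h))."""
    q = r / h
    if q < 1.0:
        return (2.0 / (3.0 * h)) * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return (2.0 / (3.0 * h)) * 0.25 * (2.0 - q) ** 3
    return 0.0  # the kernel has compact support: no contribution beyond 2h

def sph_density(positions, masses, h):
    """Smoothed density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h).
    (O(N^2) for clarity; real codes only visit nearby neighbours.)"""
    rho = np.zeros(len(positions))
    for i, xi in enumerate(positions):
        for xj, mj in zip(positions, masses):
            rho[i] += mj * w_cubic_spline_1d(abs(xi - xj), h)
    return rho

# Eleven unit-mass particles spaced 1 apart: the smoothed density in the
# interior comes out at 1 mass per unit length, as it should.
x = np.linspace(0.0, 10.0, 11)
rho = sph_density(x, np.ones(11), h=1.0)
```

The kernel 'smooths over the gaps' between particles: each particle contributes mass to the density everywhere within two smoothing lengths of itself.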

SPH was originally developed to solve problems in astrophysics but is now a popular tool in industry, with applications that affect our everyday lives. Wind turbines are modelled with this technique to understand how to harvest as much energy as possible from the wind. The method is also used to understand how waves and tsunamis affect shorelines, allowing scientists to design effective defences for the population.

Astronomer

Want to get started using SWIFT? Check out the on-boarding guide available here. SWIFT can be used as a drop-in replacement for Gadget-2: initial conditions in the HDF5 format written for Gadget can be read directly by SWIFT. The only difference is the parameter file, which will need to be adapted for SWIFT.

SWIFT combines multiple numerical methods, briefly outlined here. The whole art is to couple them efficiently so as to exploit modern computer architectures.

Gravity

SWIFT uses the Fast Multipole Method (FMM) to calculate gravitational forces between nearby particles. These forces are combined with long-range forces provided by a mesh that captures both the periodic nature of the calculation and the expansion of the simulated universe. SWIFT currently uses a single softening length shared by all particles; it is not adaptive per particle, but it can vary with time.

As well as this self-gravity mode, we also make many useful external potentials available, such as galaxy haloes or stratified boxes that are used in idealised problems.

Gravitational accuracy can be tuned through the choice of opening angle and of multipole order for the short-range gravity calculation. The mesh forces are controlled by the cell size and the frequency of updates.
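To illustrate what an opening angle does, here is the classic Barnes-Hut style acceptance test (a simplification; SWIFT's actual multipole acceptance criterion is more sophisticated, and the value theta = 0.5 is purely illustrative):

```python
def use_multipole(cell_size, distance, theta=0.5):
    """Classic opening-angle test: a distant cell of particles may be
    treated as a single multipole if it subtends a small enough angle,
    i.e. if cell_size / distance < theta. Smaller theta means higher
    accuracy but more cells to open and more work."""
    return cell_size < theta * distance

# A small, far-away cell can be approximated by its multipole...
assert use_multipole(1.0, 10.0)
# ...but a large, nearby cell must be 'opened' and its children inspected.
assert not use_multipole(4.0, 5.0)
```

In a tree walk, this test is applied recursively: cells that fail it are split into their children until every interaction is either approximated or computed particle by particle.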

Cosmology

SWIFT implements a standard LCDM cosmological background expansion and solves the equations in a comoving frame. We allow for dark-energy equations of state that evolve with the scale-factor. The structure of the code easily allows for modified-gravity solvers or self-interacting dark-matter schemes to be implemented; these will be part of future releases of the code.

Unlike other cosmological codes, SWIFT does not express quantities in units of the reduced Hubble parameter. This avoids the confusion that convention often creates when using the data products, but it requires users to convert their initial conditions (using a specific mode of operation of SWIFT!) when taking them from a different code.
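The arithmetic behind removing the h factors is simple. This hypothetical helper (not SWIFT's actual conversion code, and h = 0.7 is an illustrative value) shows the idea:

```python
def remove_h_factors(value, h, power):
    """A quantity quoted as V * h^power equals value * h**power once the
    reduced Hubble parameter h is fixed. For example, lengths in Mpc/h
    and masses in Msun/h both carry power = -1."""
    return value * h ** power

h = 0.7  # illustrative reduced Hubble parameter
box_size_mpc = remove_h_factors(100.0, h, -1)  # 100 Mpc/h in plain Mpc
```

Getting these exponents wrong is a classic source of factor-of-h bugs when moving data between codes, which is precisely why SWIFT drops the convention internally.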

Hydrodynamics Schemes

There are many hydrodynamics schemes implemented in SWIFT, and SWIFT is designed such that it should be simple for users to add their own.

All the schemes can be combined with a time-step limiter inspired by the method of Durier & Dalla Vecchia 2012, which is necessary to ensure energy conservation in simulations involving sudden injections of energy, such as feedback events.

The four main modes are as follows:

Minimal SPH

In this mode, SWIFT uses the simplest energy-conserving SPH scheme that can be written, with no viscosity switches or thermal diffusion terms. It follows exactly the description in the review of the topic by Price 2012 and is not optimised. This mode is used for educational purposes and can serve as a basis to help developers create other hydrodynamics schemes.

GADGET-2 SPH

SWIFT contains a 'backwards-compatible' GADGET-2 SPH mode, which uses a standard Monaghan 1977 artificial viscosity scheme with a Balsara switch. This scheme is implemented to match the public release of GADGET-2, so that users can employ SWIFT as a drop-in replacement for GADGET-2.
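The Balsara switch mentioned above damps the artificial viscosity in shearing flows. A sketch of the commonly used Balsara (1995) form (the exact expression and constants in SWIFT may differ):

```python
def balsara_factor(div_v, curl_v, sound_speed, h, eps=1e-4):
    """Balsara (1995) shear switch: close to 1 in compressive flows
    (shocks), where artificial viscosity is needed, and close to 0 in
    pure shear, where it would spuriously damp the motion. The small
    eps term prevents division by zero."""
    return abs(div_v) / (abs(div_v) + abs(curl_v) + eps * sound_speed / h)

# Pure compression: viscosity fully active.
assert balsara_factor(1.0, 0.0, sound_speed=1.0, h=1.0) > 0.99
# Pure shear (e.g. a rotating disc): viscosity switched off.
assert balsara_factor(0.0, 5.0, sound_speed=1.0, h=1.0) == 0.0
```

The factor multiplies the pairwise viscosity term, so shocks are still captured while differentially rotating flows are left largely untouched.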

Pressure-Entropy SPH

In SWIFT, the Pressure-Entropy and (in the future) Pressure-Energy schemes from Hopkins 2013 are available for use. These schemes use a weighting factor of either entropy or energy in the calculation of the density, which avoids the spurious surface tension at contact discontinuities present in a traditional "Density-Entropy" scheme (such as the GADGET-2 one presented above) and allows for better mixing between fluid phases. This leads to much better behaviour in cases such as the Kelvin-Helmholtz instability or the infamous 'blob' test.

GIZMO (MFM)

SWIFT can also use the GIZMO scheme (Hopkins 2015), also known as 'SPH-ALE' outside of astrophysics. This scheme is a hybrid between a particle method and a finite-volume method: whilst particles are used to represent the fluid, fluxes between them are computed and exchanged using Riemann solvers and proper gradient reconstruction. This allows for a much more accurate representation of the physics, without any ad-hoc switches for viscosity or thermal diffusion, but it also comes at a higher computational cost.

Subgrid models for galaxy formation

SWIFT implements two main models to study galaxy formation. These are available in the public repository and different components (star formation, cooling, feedback, etc.) can be mixed and matched for comparison purposes.

EAGLE model

The EAGLE model of galaxy formation is available in SWIFT. This combines the cooling of gas due to interaction with the UV and X-ray background radiation of Wiersma 2009, the star-formation method of Schaye 2008, the stellar evolution and gas enrichment model of Wiersma 2009, feedback from stars following Dalla Vecchia 2012, super-massive black-hole accretion following Rosas-Guevara 2015 and black-hole feedback following Booth 2009. All these modules have been ported from the Gadget-3 code to SWIFT and will hence behave slightly differently.

GEAR model

The GEAR model is available in SWIFT. This model uses the GRACKLE library for cooling and is one of the many models that are part of the AGORA comparison project.

Structure finder

SWIFT can be linked to the VELOCIraptor phase-space structure finder to return haloes and sub-haloes while the simulation is running. This on-the-fly processing allows for a much faster time-to-science than in the classic method of post-processing simulations after they are run.

Documentation and tests

There is a large amount of background reading material available in the theory directory provided with SWIFT. You will need pdflatex to build this documentation.

SWIFT also provides a large library of hydrodynamical test cases for you to use, the results of which are available on our developer Wiki here.

Computer Scientist

Parallelisation strategy

SWIFT uses a hybrid MPI + threads parallelisation scheme with a modified version of the publicly available lightweight tasking library QuickSched as its backbone. Communications between compute nodes are scheduled by the library itself and use asynchronous calls to MPI to maximise the overlap between communication and computation. The domain decomposition itself is performed by splitting the graph of all the compute tasks, using the METIS library, to minimise the number of required MPI communications. The core calculations in SWIFT use hand-written SIMD intrinsics to process multiple particles in parallel and achieve maximal performance.
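The essence of task-based parallelism is that work is described as a dependency graph and executed as soon as its prerequisites are done. A toy sketch of this idea (the task names are hypothetical and this is not the QuickSched API):

```python
from concurrent.futures import ThreadPoolExecutor

# A made-up miniature task graph: each task lists its dependencies.
# (In SWIFT the tasks are things like density, force, and kick
# operations on small cells of particles.)
GRAPH = {
    "sort":    [],
    "density": ["sort"],
    "force":   ["density"],
    "kick":    ["force"],
}

def run_tasks(graph, work, n_threads=4):
    """Run each task as soon as all of its dependencies have completed.
    Tasks that become ready at the same time run concurrently."""
    done, order = set(), []
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        while len(done) < len(graph):
            ready = [t for t in graph
                     if t not in done and all(d in done for d in graph[t])]
            list(pool.map(work, ready))  # independent tasks in parallel
            done.update(ready)
            order.extend(ready)
    return order
```

A real tasking library avoids the repeated readiness scan by having each finishing task unlock its dependants directly, and it interleaves communication tasks with compute tasks so that no thread ever idles waiting for the network.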

Strong and weak scaling

Cosmological simulations are typically very hard to scale to large numbers of cores, because information is needed from every node to perform a given time-step. SWIFT uses smart domain decomposition, vectorisation, and asynchronous communication to provide a 36.7x speedup over the de-facto standard (the publicly available GADGET-2 code) and near-perfect weak scaling, even on problems larger than those presented in the published astrophysics literature.

[Figure: SWIFT scaling plot.] The left panel ("Weak Scaling") shows how the run-time of a problem changes when the number of threads is increased in proportion to the number of particles in the system (i.e. a fixed 'load per thread'). The right panel ("Strong Scaling") shows how the run-time changes for a fixed load as it is spread over more threads, and demonstrates the 36.7x speedup that SWIFT offers over GADGET-2. This uses a representative problem: a snapshot of the EAGLE simulation at late times, where the hierarchy of time-steps is very deep and where most other codes struggle to harvest any scaling or performance.
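The quantities shown in such plots follow the standard definitions, sketched here with made-up run-times (the numbers are illustrative, not SWIFT measurements):

```python
def speedup(t_reference, t_parallel):
    """How many times faster a run is than the reference run."""
    return t_reference / t_parallel

def parallel_efficiency(t_one_thread, t_n_threads, n_threads):
    """Fraction of ideal linear scaling achieved on n threads:
    1.0 means perfect scaling, lower values mean growing overheads."""
    return t_one_thread / (n_threads * t_n_threads)

# Hypothetical example: a step taking 100 s on 1 thread, 2 s on 64.
assert speedup(100.0, 2.0) == 50.0
assert parallel_efficiency(100.0, 2.0, 64) == 0.78125
```

Weak scaling instead grows the problem with the thread count and asks whether the run-time stays constant, which is why it is the relevant metric for ever-larger simulations.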

I/O performance

SWIFT uses the parallel HDF5 library to read and write snapshots efficiently on distributed file systems. Through careful tuning of Lustre parameters, SWIFT can write snapshots at the maximal disk-writing speed of a given system.