.. What about MPI? Running SWIFT on more than one node
   Josh Borrow, 5th April 2018

What about MPI? Running SWIFT on more than one node
===================================================

After compilation, you will be left with two binaries. One is called
``swift``, and the other ``swift_mpi``. Current wisdom is to run ``swift`` if
you are only using one node (i.e. without any interconnect), and to run
``swift_mpi`` with one MPI rank per NUMA region for anything larger.

You will need some initial conditions in the GADGET-2 HDF5 format (see
:ref:`Initial Conditions`) to run SWIFT, as well as a compatible
:ref:`yaml parameter file`.

SLURM Submission Script Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Below is an example submission script for the SLURM batch system. This runs
SWIFT with thread pinning, SPH, and self-gravity. The values in angle
brackets are placeholders to be replaced with settings for your system.

.. code-block:: bash

    #!/bin/bash -l

    #SBATCH --partition=<queue>
    #SBATCH --account=<groupName>
    #SBATCH --job-name=<jobName>
    #SBATCH --nodes=<nNodes>
    #SBATCH --ntasks-per-node=<nMPIRanksPerNode>
    #SBATCH --cpus-per-task=<nThreadsPerRank>
    #SBATCH --output=outFile.out
    #SBATCH --error=errFile.err

    ## expected runtime
    #SBATCH --time=<hh>:<mm>:<ss>

    # One MPI rank per NUMA region; on some systems "srun" replaces "mpirun".
    mpirun -np $SLURM_NTASKS \
        ./swift_mpi --pin --threads=$SLURM_CPUS_PER_TASK \
        --hydro --self-gravity \
        parameter_file.yml
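Before submitting, it is worth checking that your initial conditions really
follow the GADGET-2 HDF5 layout. One way to do this is with the ``h5ls`` and
``h5dump`` tools that ship with HDF5; the file name ``ics.hdf5`` below is just
a placeholder for your own ICs.

.. code-block:: bash

    # List all groups in the file. A GADGET-2 HDF5 file contains a /Header
    # group plus /PartType* groups (e.g. /PartType0 for gas, /PartType1 for
    # dark matter).
    h5ls -r ics.hdf5

    # Inspect a header attribute, here the per-type particle counts.
    h5dump -a /Header/NumPart_ThisFile ics.hdf5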
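Once the script is saved (as, say, ``submit.slurm``; the name is arbitrary),
it is submitted and monitored with the standard SLURM commands:

.. code-block:: bash

    # Hand the script to the batch system; prints the assigned job ID.
    sbatch submit.slurm

    # Check the job's state in the queue (PD = pending, R = running).
    squeue -u $USER

    # Cancel the job if something went wrong.
    scancel <jobID>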
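Finally, recall that on a single node the advice above is to skip MPI entirely
and call the ``swift`` binary directly. A minimal sketch, with the thread
count left as a placeholder:

.. code-block:: bash

    # Single-node run: no MPI launcher, one process, many threads.
    # <nThreads> would typically be the number of cores on the node.
    ./swift --pin --threads=<nThreads> \
        --hydro --self-gravity \
        parameter_file.yml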