Commit: examples/clusters/adastra

paugier committed Sep 4, 2024
1 parent f1b66c6 commit 2172090
Showing 4 changed files with 144 additions and 6 deletions.
doc/examples/clusters/adastra/README.md (new file, 102 additions)
# Using Fluidsim on Adastra (CINES)

We show in this directory
(<https://foss.heptapod.net/fluiddyn/fluidsim/-/tree/branch/default/doc/examples/clusters/adastra>)
how to use Fluidsim on Adastra. The main documentation for this HPC platform is
[here](https://dci.dci-gitlab.cines.fr/webextranet/index.html). We use modules produced
by [Spack](https://spack.io/).

## Get a login and setup ssh

Get an account on <https://www.edari.fr/>.

Set the alias

```sh
alias sshadastra='ssh -X augier@adastra.cines.fr'
```
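Alternatively, the same can be expressed as a `~/.ssh/config` entry (a sketch; `augier` is the example login from the alias above, so replace it with your own username):

```
Host adastra
    HostName adastra.cines.fr
    User augier
    ForwardX11 yes
```

With this entry, `ssh adastra` is equivalent to the alias.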

## Setup Mercurial and clone fluidsim

Ask authorization to be able to clone the Fluidsim repository from
<https://foss.heptapod.net> as explained
[here](https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html#authorizing-an-outbound-connection).

Install and setup Mercurial as explained
[here](https://fluidhowto.readthedocs.io/en/latest/mercurial/install-setup.html). Clone
the Fluidsim repository in `$HOME/dev`.

```{warning}
The file `.bashrc` is not sourced at login, so the user has to source it manually
to be able to use pipx-installed applications.
```
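The point of the warning is that a non-login shell does not read `~/.bashrc`, so anything defined there (such as the `PATH` entries added by pipx) is missing until the file is sourced. A minimal illustration, using a stand-in file (`/tmp/demo_bashrc` and `DEMO_VAR` are hypothetical, for illustration only):

```shell
# stand-in for ~/.bashrc (hypothetical path and variable)
echo 'export DEMO_VAR=from_bashrc' > /tmp/demo_bashrc

# before sourcing, DEMO_VAR is undefined; after sourcing it is available
. /tmp/demo_bashrc
echo "$DEMO_VAR"   # prints "from_bashrc"
```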

```sh
mkdir ~/dev
cd ~/dev
. ~/.bashrc
hg clone https://foss.heptapod.net/fluiddyn/fluidsim
cd ~/dev/fluidsim/doc/examples/clusters/adastra

```

## Setup a virtual environment

Execute the script `setup_venv.sh`.

```sh
./setup_venv.sh
```

```{literalinclude} ./setup_venv.sh
```

Due to a bug in Meson (the build system used by a few fluidfft plugins, see
<https://github.com/mesonbuild/meson/pull/13619>), we need to complete the installation:

```sh
module purge
module load cpe/23.12
module load craype-x86-genoa
module load PrgEnv-gnu
module load gcc/13.2.0
module load cray-hdf5-parallel cray-fftw
module load cray-python

export LIBRARY_PATH=/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib
export CFLAGS="-I/opt/cray/pe/fftw/3.3.10.6/x86_genoa/include"

. ~/venv-fluidsim/bin/activate

# because of the Meson bug, we build with --no-build-isolation,
# so the build dependencies have to be installed manually
pip install meson-python ninja fluidfft-builder cython

# install a patched Meson containing the MPI-detection fix
cd ~/dev
hg clone https://github.com/paugier/meson.git
cd ~/dev/meson
hg up mpi-detection
pip install -e .
cd

pip install fluidfft-fftwmpi --no-binary fluidfft-fftwmpi --no-build-isolation --force-reinstall --no-cache-dir --no-deps -v
```

## Install Fluidsim from source

```sh
module purge
module load cpe/23.12
module load craype-x86-genoa
module load PrgEnv-gnu
module load gcc/13.2.0
module load cray-hdf5-parallel cray-fftw
module load cray-python

. ~/venv-fluidsim/bin/activate

cd ~/dev/fluidsim
# update to the wanted commit
pip install . -v -C setup-args=-Dnative=true
```
doc/examples/clusters/adastra/setup_venv.sh (new file, 33 additions)
#!/usr/bin/env bash
set -e

module purge
module load cpe/23.12
module load craype-x86-genoa
module load PrgEnv-gnu
module load gcc/13.2.0
module load cray-hdf5-parallel cray-fftw
module load cray-python

cd $HOME
python -m venv venv-fluidsim
. ~/venv-fluidsim/bin/activate
pip install --upgrade pip

# install fluidsim and all dependencies from wheels!
pip install "fluidsim[fft,test]"

# fix/improve a few packages (force recompilation)
pip install fluidfft --no-binary fluidfft -C setup-args=-Dnative=true --force-reinstall --no-cache-dir --no-deps -v

CC=mpicc pip install mpi4py --no-binary mpi4py --force-reinstall --no-cache-dir --no-deps -v
CC="mpicc" HDF5_MPI="ON" pip install h5py --no-binary=h5py --force-reinstall --no-cache-dir --no-deps -v

export LIBRARY_PATH=/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib
export CFLAGS="-I/opt/cray/pe/fftw/3.3.10.6/x86_genoa/include"
export PYFFTW_LIB_DIR="/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib"

pip install pyfftw --no-binary pyfftw --force-reinstall --no-cache-dir --no-deps -v

# install fluidfft plugins
pip install fluidfft-fftw --no-binary fluidfft-fftw --force-reinstall --no-cache-dir --no-deps -v
doc/install-clusters.md (6 additions, 4 deletions)

… order to run very large simulations is particular since
- Parallelism is done through MPI with advanced hardware, so it is important to use the
  right MPI implementation compiled with the right options.

- The software environment is usually quite different from that on more standard
  machines, with quite old operating systems and particular systems to provide other
  software (modules, Guix, Spack, ...).

- Computations are launched through a scheduler (like Slurm, OAR, ...) with a launching
  script. In the Fluiddyn project, we tend to avoid writing launching scripts manually
  (which is error prone and slow) and prefer to use the `fluiddyn.clusters` API, which
  allows users to launch simulations with simple Python scripts.
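For reference, a hand-written Slurm job script typically looks like the sketch below; the `fluiddyn.clusters` API generates and submits this kind of file for you. All values (job name, node counts, walltime, script name) are illustrative, not Adastra-specific:

```
#!/bin/bash
#SBATCH --job-name=my_simul        # illustrative values only
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=12:00:00

# load the same environment as at install time, then run the solver under MPI
. ~/venv-fluidsim/bin/activate
srun python simul.py
```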

We present here a few examples of installation methods and launching scripts on
different kinds of clusters:

```{toctree}
---
caption: Examples
maxdepth: 1
---
./examples/clusters/adastra/README.md
./examples/clusters/gricad/README.md
```
doc/install.md (3 additions, 2 deletions)

… If the system has multiple MPI libraries, it is advised to explicitly set the MPI
compiler wrapper. For instance:
```sh
CC=mpicc pip install mpi4py --no-binary mpi4py --force-reinstall --no-cache-dir --no-deps -v
```
````
… time, this is what you want. However, you can install h5py from source and link it
to an HDF5 library built with MPI support, as follows:
```bash
CC="mpicc" HDF5_MPI="ON" HDF5_DIR=/path/to/parallel-hdf5 \
pip install h5py --no-binary=h5py --force-reinstall --no-cache-dir --no-deps -v
python -c 'import h5py; h5py.run_tests()'
```
