diff --git a/doc/examples/clusters/adastra/README.md b/doc/examples/clusters/adastra/README.md
new file mode 100644
index 00000000..f77429c7
--- /dev/null
+++ b/doc/examples/clusters/adastra/README.md
@@ -0,0 +1,102 @@
+# Using Fluidsim on Adastra (CINES)
+
+We show in this directory how to use Fluidsim on Adastra. The main documentation
+for this HPC platform is
+[here](https://dci.dci-gitlab.cines.fr/webextranet/index.html). We use modules
+produced by [Spack](https://spack.io/).
+
+## Get a login and set up ssh
+
+Get an account on .
+
+Set the alias
+
+```sh
+alias sshadastra='ssh -X augier@adastra.cines.fr'
+```
+
+## Set up Mercurial and clone fluidsim
+
+Request authorization to clone the Fluidsim repository from
+ as explained
+[here](https://dci.dci-gitlab.cines.fr/webextranet/data_storage_and_transfers/index.html#authorizing-an-outbound-connection).
+
+Install and set up Mercurial as explained
+[here](https://fluidhowto.readthedocs.io/en/latest/mercurial/install-setup.html). Clone
+the Fluidsim repository in `$HOME/dev`.
+
+```{warning}
+The file `.bashrc` is not sourced at login, so the user should source it
+manually to be able to use pipx-installed applications.
+```
+
+```sh
+mkdir ~/dev
+cd ~/dev
+. ~/.bashrc
+hg clone https://foss.heptapod.net/fluiddyn/fluidsim
+cd ~/dev/fluidsim/doc/examples/clusters/adastra
+```
+
+## Set up a virtual environment
+
+Execute the script `setup_venv.sh`.
+
+```sh
+./setup_venv.sh
+```
+
+```{literalinclude} ./setup_venv.sh
+```
+
+Due to a bug in Meson (the build system used by a few fluidfft plugins, see
+https://github.com/mesonbuild/meson/pull/13619), we need to complete the
+installation:
+
+```sh
+module purge
+module load cpe/23.12
+module load craype-x86-genoa
+module load PrgEnv-gnu
+module load gcc/13.2.0
+module load cray-hdf5-parallel cray-fftw
+module load cray-python
+
+export LIBRARY_PATH=/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib
+export CFLAGS="-I/opt/cray/pe/fftw/3.3.10.6/x86_genoa/include"
+
+. ~/venv-fluidsim/bin/activate
+
+# the build dependencies have to be installed manually because
+# --no-build-isolation (needed to work around the Meson bug) is used below
+pip install meson-python ninja fluidfft-builder cython
+
+# install a patched Meson containing the fix
+cd ~/dev
+hg clone https://github.com/paugier/meson.git
+cd ~/dev/meson
+hg up mpi-detection
+pip install -e .
+cd
+
+pip install fluidfft-fftwmpi --no-binary fluidfft-fftwmpi --no-build-isolation --force-reinstall --no-cache-dir --no-deps -v
+```
+
+## Install Fluidsim from source
+
+```sh
+module purge
+module load cpe/23.12
+module load craype-x86-genoa
+module load PrgEnv-gnu
+module load gcc/13.2.0
+module load cray-hdf5-parallel cray-fftw
+module load cray-python
+
+. ~/venv-fluidsim/bin/activate
+
+cd ~/dev/fluidsim
+# update to the wanted commit
+pip install . -v -C setup-args=-Dnative=true
+```
diff --git a/doc/examples/clusters/adastra/setup_venv.sh b/doc/examples/clusters/adastra/setup_venv.sh
new file mode 100755
index 00000000..5fd9039b
--- /dev/null
+++ b/doc/examples/clusters/adastra/setup_venv.sh
@@ -0,0 +1,33 @@
+#!/usr/bin/env bash
+set -e
+
+module purge
+module load cpe/23.12
+module load craype-x86-genoa
+module load PrgEnv-gnu
+module load gcc/13.2.0
+module load cray-hdf5-parallel cray-fftw
+module load cray-python
+
+cd $HOME
+python -m venv venv-fluidsim
+. ~/venv-fluidsim/bin/activate
+pip install --upgrade pip
+
+# install fluidsim and all dependencies from wheels!
+pip install "fluidsim[fft,test]"
+
+# fix/improve a few packages (force recompilation)
+pip install fluidfft --no-binary fluidfft -C setup-args=-Dnative=true --force-reinstall --no-cache-dir --no-deps -v
+
+CC=mpicc pip install mpi4py --no-binary mpi4py --force-reinstall --no-cache-dir --no-deps -v
+CC="mpicc" HDF5_MPI="ON" pip install h5py --no-binary=h5py --force-reinstall --no-cache-dir --no-deps -v
+
+export LIBRARY_PATH=/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib
+export CFLAGS="-I/opt/cray/pe/fftw/3.3.10.6/x86_genoa/include"
+export PYFFTW_LIB_DIR="/opt/cray/pe/fftw/3.3.10.6/x86_genoa/lib"
+
+pip install pyfftw --no-binary pyfftw --force-reinstall --no-cache-dir --no-deps -v
+
+# install the fluidfft plugins
+pip install fluidfft-fftw --no-binary fluidfft-fftw --force-reinstall --no-cache-dir --no-deps -v
diff --git a/doc/install-clusters.md b/doc/install-clusters.md
index 35746b8d..08097fd4 100644
--- a/doc/install-clusters.md
+++ b/doc/install-clusters.md
@@ -16,21 +16,23 @@ order to run very large simulations is particular since
 
 - Parallelism is done through MPI with advanced hardware so it's important to use the
   right MPI implementation compiled with the right options.
-- The software environment is usually quite different than on more standard smaller
-  machines, with quite old operative systems and particular systems to use other software
-  (modules, Guix, Spack, ...).
+- The software environment is usually quite different from that of more standard
+  machines, with quite old operating systems and particular systems to use other
+  software (modules, Guix, Spack, ...).
 - Computations are launched through a scheduler (like Slurm, OAR, ...) with a launching
   script.
 
 In the Fluiddyn project, we tend to avoid writing the launching scripts manually (which
 is IMHO error prone and slow) and prefer to use the `fluiddyn.clusters` API, which
 allows users to launch simulations with simple Python scripts.
-We present here few examples of installations on different kinds of clusters:
+We present here a few examples of installation methods and launching scripts on
+different kinds of clusters:
 
 ```{toctree}
 ---
 caption: Examples
 maxdepth: 1
 ---
+./examples/clusters/adastra/README.md
 ./examples/clusters/gricad/README.md
 ```
diff --git a/doc/install.md b/doc/install.md
index c25fcf1e..2bbae2d2 100644
--- a/doc/install.md
+++ b/doc/install.md
@@ -160,7 +160,7 @@ If the system has multiple MPI libraries, it is advised to explicitly mention the
 MPI command. For instance to use Intel MPI:
 
 ```sh
-CC=mpiicc pip install mpi4py --no-binary mpi4py
+CC=mpicc pip install mpi4py --no-binary mpi4py --force-reinstall --no-cache-dir --no-deps -v
 ```
 
 ````
@@ -176,7 +176,8 @@ time, this is what you want. However, you can install h5py
 from source and link to a hdf5 built with MPI support, as follows:
 
 ```bash
-CC="mpicc" HDF5_MPI="ON" HDF5_DIR=/path/to/parallel-hdf5 pip install --no-deps --no-binary=h5py h5py
+CC="mpicc" HDF5_MPI="ON" HDF5_DIR=/path/to/parallel-hdf5 \
+  pip install h5py --no-binary=h5py --force-reinstall --no-cache-dir --no-deps -v
 python -c 'import h5py; h5py.run_tests()'
 ```
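
The `install-clusters.md` hunk above argues for launching simulations through the `fluiddyn.clusters` API rather than hand-written scheduler scripts. As a rough illustration of that pattern, here is a minimal, hypothetical sketch in which a few Python parameters are rendered into a Slurm batch script; the function and parameter names are invented for this example and are not the actual `fluiddyn.clusters` API:

```python
# Hypothetical sketch of a fluiddyn.clusters-like helper: a few Python
# parameters are turned into the text of a Slurm launching script
# (all names here are invented for illustration).

def make_slurm_script(command, name_run, nb_nodes=1, nb_mpi_processes=1,
                      walltime="00:30:00"):
    """Return the text of a Slurm batch script running `command`."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={name_run}",
        f"#SBATCH --nodes={nb_nodes}",
        f"#SBATCH --time={walltime}",
        f"srun -n {nb_mpi_processes} {command}",
    ]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # a user-side "launching script" then reduces to a few lines
    print(make_slurm_script(
        "python simul_ns3d.py", name_run="my_simul",
        nb_nodes=2, nb_mpi_processes=16, walltime="12:00:00"))
```

On the cluster, the generated text would be written to a file and passed to `sbatch`; the actual API handles many more details (modules, environment activation, machine-specific resource options), which is precisely what makes writing such scripts by hand error prone.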