Update gpu-nodes.md
The requirement to request the gpu partition in order to use those nodes was so far missing from the docs.
Nicolai-vKuegelgen authored Nov 19, 2024
1 parent f42cb85 commit ee23518
Showing 1 changed file with 1 addition and 1 deletion: `bih-cluster/docs/how-to/connect/gpu-nodes.md`
@@ -3,7 +3,7 @@
The cluster has seven nodes with four Tesla V100 GPUs each: `hpc-gpu-{1..7}` and one node with 10 A40 GPUs: `hpc-gpu-8`.

Connecting to a node with GPUs is easy.
- You request one or more GPU cores by adding a generic resources flag to your Slurm job submission via `srun` or `sbatch`.
+ You request one or more GPU cores by adding a generic resources flag to your Slurm job submission via `srun` or `sbatch` in addition to the `--partition gpu` flag:

- `--gres=gpu:tesla:COUNT` will request NVIDIA V100 cores.
- `--gres=gpu:a40:COUNT` will request NVIDIA A40 cores.
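
Putting the changed line into practice, a submission combining both flags might look like the following sketch. The partition name `gpu` and the GRES names `tesla`/`a40` come from the diff above; the batch script name is purely illustrative:

```bash
# Interactive shell with one V100 GPU on the gpu partition
srun --partition gpu --gres=gpu:tesla:1 --pty bash -i

# Batch submission requesting one A40 GPU
# (my_gpu_job.sh is a placeholder for your own job script)
sbatch --partition gpu --gres=gpu:a40:1 my_gpu_job.sh
```

Without `--partition gpu`, the job is scheduled to the default partition, which does not include the `hpc-gpu-{1..8}` nodes, so the `--gres` request cannot be satisfied.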
