Merge pull request #726 from yandthj/add_hbw_partition
Add hbw partition info
yandthj authored Jan 14, 2025
2 parents 141b35e + 4862d56 commit 8b61ba0
Showing 1 changed file with 16 additions and 1 deletion.
17 changes: 16 additions & 1 deletion docs/Documentation/Systems/Kestrel/Running/index.md
@@ -11,7 +11,7 @@ There are two general types of compute nodes on Kestrel: CPU nodes and GPU nodes


### CPU Nodes
Standard CPU-based compute nodes on Kestrel have 104 cores and 240 GB of usable RAM. 256 of those nodes have a 1.7 TB NVMe local disk. There are also 10 bigmem nodes with 2 TB of RAM and a 5.6 TB NVMe local disk. Two racks of the CPU compute nodes have dual network interface cards (NICs), which may increase performance for certain types of multi-node jobs.
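
For illustration, here is a minimal batch script sketch for targeting a standard CPU node with the NVMe local disk; the account name and application command are placeholders, and the `--mem`/`--tmp` values are simply chosen to stay within the limits described above.

```bash
#!/bin/bash
#SBATCH --account=<your_allocation>   # placeholder: replace with your project handle
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=104         # all 104 cores on a standard CPU node
#SBATCH --time=01:00:00
#SBATCH --mem=240000                  # in MB; within the ~240 GB of usable RAM on standard nodes
#SBATCH --tmp=1600000                 # in MB; requests a node with the 1.7 TB NVMe local disk

srun ./my_application                 # placeholder: your application's commands here
```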


### GPU Nodes
@@ -45,6 +45,7 @@ The following table summarizes the partitions on Kestrel:
| ```long``` | Nodes that prefer jobs with walltimes > 2 days.<br>*Maximum walltime of any job is 10 days*| 525 nodes total.<br> 262 nodes per user.| ```--time <= 10-00```<br>```--mem <= 246064```<br>```--tmp <= 1700000 (256 nodes)```|
|```bigmem``` | Nodes that have 2 TB of RAM and 5.6 TB NVMe local disk. | 8 nodes total.<br> 4 nodes per user. | ```--mem > 246064```<br> ```--time <= 2-00```<br>```--tmp > 1700000 ``` |
|```bigmeml``` | Bigmem nodes that prefer jobs with walltimes > 2 days.<br>*Maximum walltime of any job is 10 days.* | 4 nodes total.<br> 3 nodes per user. | ```--mem > 246064```<br>```--time > 2-00```<br>```--tmp > 1700000 ``` |
|```hbw``` | CPU compute nodes with dual network interface cards. | 512 nodes total.<br> 256 nodes per user. <br> Minimum 2 nodes per job. | ```-p hbw``` <br>```--time <= 10-00``` <br> ```--nodes >= 2```|
| ```shared```| Nodes that can be shared by multiple users and jobs. | 64 nodes total. <br> Half of partition per user. <br> 2 days max walltime. | ```-p shared``` <br> or<br> ```--partition=shared```|
| ```sharedl```| Nodes that can be shared by multiple users and prefer jobs with walltimes > 2 days. | 16 nodes total. <br> 8 nodes per user. | ```-p sharedl``` <br> or<br> <nobr>```--partition=sharedl```</nobr>|
| ```gpu-h100```| Shareable GPU nodes with 4 NVIDIA H100 SXM 80GB Computational Accelerators. | 130 nodes total. <br> 65 nodes per user. | ```1 <= --gpus <= 4``` <br> ```--time <= 2-00```|
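
As a concrete illustration of how these placement constraints translate into Slurm directives, the following is a minimal sketch of a bigmem request; the account name and application command are placeholders, and the resource values are simply chosen to exceed the standard-node thresholds in the table (you can also name the partition explicitly with `-p bigmem`).

```bash
#!/bin/bash
#SBATCH --account=<your_allocation>   # placeholder: replace with your project handle
#SBATCH --nodes=1
#SBATCH --time=1-00:00:00             # <= 2 days, so the job is eligible for bigmem rather than bigmeml
#SBATCH --mem=500000                  # in MB; > 246064 MB exceeds what the standard nodes offer
#SBATCH --tmp=2000000                 # in MB; more local disk than the 1.7 TB available on standard nodes

srun ./my_application                 # placeholder: your application's commands here
```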
@@ -81,6 +82,20 @@ Currently, there are 64 standard compute nodes available in the shared partition
srun ./my_program # Use your application's commands here
```

### High Bandwidth Partition

In December 2024, two racks of Kestrel's CPU nodes were reconfigured with an extra network interface card (NIC), which can greatly benefit communication-bound HPC software.
A NIC is the hardware component that handles inter-node (i.e., *network*) communication while multi-node jobs run.
Most CPU nodes on Kestrel have a single NIC. One NIC per node is sufficient for the majority of workflows run on Kestrel, but it can lead to communication congestion
in multi-node applications that send significant amounts of data over Kestrel's network. In those cases, increasing the number of available NICs
can alleviate the congestion at runtime. Common examples of communication-bound HPC software are AMRWind and LAMMPS.

To request nodes with two NICs, specify `--partition=hbw` in your job submissions. Because the purpose of the high bandwidth nodes is to optimize communication in multi-node jobs, single-node jobs are not permitted in the `hbw` partition.
If you would like assistance determining whether your workflow could benefit from running in the `hbw` partition, please reach out to [HPC-Help@nrel.gov](mailto:HPC-Help@nrel.gov).
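
A minimal sketch of an `hbw` job script is shown below; the account name, walltime, and application command are placeholders.

```bash
#!/bin/bash
#SBATCH --account=<your_allocation>   # placeholder: replace with your project handle
#SBATCH --partition=hbw               # request the dual-NIC high bandwidth nodes
#SBATCH --nodes=2                     # the hbw partition requires at least 2 nodes per job
#SBATCH --ntasks-per-node=104
#SBATCH --time=04:00:00

srun ./my_application                 # placeholder: your application's commands here
```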

!!! info
    We will continue to update this documentation with use cases and recommendations for the dual-NIC nodes, including specific examples on the LAMMPS and AMRWind pages.


### GPU Jobs

