Merge pull request #662 from yandthj/gh-pages
kestrel release note updates
yandthj authored Aug 14, 2024
2 parents 865b1a7 + 83de225 commit 97d5166
Showing 2 changed files with 28 additions and 4 deletions.
30 changes: 27 additions & 3 deletions docs/Documentation/Systems/Kestrel/kestrel_release_notes.md
@@ -2,7 +2,31 @@

*We will update this page with Kestrel release notes after major Kestrel upgrades.*

## July 29 - July 30
## August 14, 2024

Jobs running on `debug` GPU nodes are now limited to half a GPU node's resources, spread across one or two nodes. On a single node, this is equivalent to 64 CPUs, 2 GPUs, and 180 GB of RAM. `--exclusive` can no longer be used for GPU debug jobs.
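
For reference, a minimal single-node debug GPU job that fits within the new limits might look like the following sketch (the application name is a placeholder; the 64 CPUs correspond to half of a GPU node's 128 CPUs):

```
#!/bin/bash
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64   # half of a GPU node's CPUs
#SBATCH --gpus=2               # per-user GPU limit on debug
#SBATCH --mem=180G             # half of a GPU node's RAM
#SBATCH --time=01:00:00        # debug max walltime

srun ./my_gpu_app              # placeholder application
```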

## August 9, 2024

As of 08/09/2024, we have released new VASP modules for Kestrel CPU nodes:

```
------------ /nopt/nrel/apps/cpu_stack/modules/default/application -------------
#new modules:
vasp/5.4.4+tpc vasp/6.3.2_openMP+tpc vasp/6.4.2_openMP+tpc
vasp/5.4.4_base vasp/6.3.2_openMP vasp/6.4.2_openMP
#legacy modules will be removed during next system time:
vasp/5.4.4 vasp/6.3.2 vasp/6.4.2 (D)
```

What’s new:

* New modules have been rebuilt with the latest Cray Programming Environment (cpe23), updated compilers, and math libraries.
* OpenMP capability has been added to VASP 6 builds.
* Modules that include third-party codes (e.g., libXC, libBEEF, VTST tools, and VASPsol) are now denoted with `+tpc`. Use `module show vasp/<version>` to see the details of a specific version, as shown below.
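
For example, to inspect and load one of the new `+tpc` builds (standard environment-module commands; the version shown is taken from the listing above):

```
module avail vasp                   # list the available VASP modules
module show vasp/6.4.2_openMP+tpc   # show compilers, libraries, and included third-party codes
module load vasp/6.4.2_openMP+tpc   # load the build for use in a job script
```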

## July 29 - July 30, 2024

1. Two [GPU login nodes](../Kestrel/index.md) were added. Use the GPU login nodes for compiling software to run on GPU nodes and for submitting GPU jobs.
1. GPU compute nodes were made available for general use and additional GPU partitions were added. See [Running on Kestrel](../Kestrel/running.md) for additional information and recommendations.
@@ -51,7 +75,7 @@ Intel-oneapi-compilers.
* The 2024 version has now been added.


## April 12 - April 17
## April 12 - April 17, 2024

1. The size of the [shared node partition](./running.md#shared-node-partition) was doubled from 32 nodes to 64 nodes.

@@ -62,7 +86,7 @@
4. `/kfs2/pdatasets` was renamed to `/kfs2/datasets` and a symlink `/datasets` was added.
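
A quick way to confirm the new layout (both paths now resolve to the same location):

```
ls -ld /datasets      # symlink pointing to /kfs2/datasets
ls /kfs2/datasets     # renamed from /kfs2/pdatasets
```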


## Jan. 29 - Feb. 14 Upgrades
## Jan. 29 - Feb. 14, 2024 Upgrades

1. We have found that most previously built software (including NREL-provided modules) runs without modification and performs at the same level.

2 changes: 1 addition & 1 deletion docs/Documentation/Systems/Kestrel/running.md
@@ -39,7 +39,7 @@ The following table summarizes the partitions on Kestrel:

| Partition Name | Description | Limits | Placement Condition |
| -------------- | ------------- | ------ | ------------------- |
| ```debug``` | Nodes dedicated to developing and <br> troubleshooting jobs. Debug nodes with each of the non-standard <br> hardware configurations are available. <br> The node-type distribution is: <br> - 2 bigmem nodes <br> - 2 nodes with 1.7 TB NVMe <br> - 4 standard nodes <br> - 2 GPU nodes (shared) <br> **10 total nodes** | 1 job with a max of 2 nodes per user. <br> 2 GPUs per user. <br> 01:00:00 max walltime. | ```-p debug``` <br> or<br> ```--partition=debug``` |
| ```debug``` | Nodes dedicated to developing and <br> troubleshooting jobs. Debug nodes with each of the non-standard <br> hardware configurations are available. <br> The node-type distribution is: <br> - 2 bigmem nodes <br> - 2 nodes with 1.7 TB NVMe <br> - 4 standard nodes <br> - 2 GPU nodes (shared) <br> **10 total nodes** | - 1 job with a max of 2 nodes per user. <br> - 2 GPUs per user.<br> - 1/2 of a GPU node's resources per user (across 1-2 nodes). <br> - 01:00:00 max walltime. | ```-p debug``` <br> or<br> ```--partition=debug``` |
|```short``` | Nodes that prefer jobs with walltimes <br> <= 4 hours. | 2016 nodes total. <br> No limit per user. | ```--time <= 4:00:00```<br>```--mem <= 246064```<br> ```--tmp <= 1700000 (256 nodes)```|
| ```standard``` | Nodes that prefer jobs with walltimes <br> <= 2 days. | 2106 nodes total. <br> 1050 nodes per user. | ```--mem <= 246064```<br> ```--tmp <= 1700000```|
| ```long``` | Nodes that prefer jobs with walltimes > 2 days.<br>*Maximum walltime of any job is 10 days*| 525 nodes total.<br> 262 nodes per user.| ```--time <= 10-00```<br>```--mem <= 246064```<br>```--tmp <= 1700000 (256 nodes)```|
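
For instance, per the placement conditions above, a job requesting a walltime of at most four hours and at most 246064 MB of memory can land on `short` nodes; a minimal sketch, with a placeholder script name:

```
sbatch --partition=short --time=4:00:00 --mem=246064 my_job.sh
```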
