Add Mesh class #48
Conversation
I still need to add documentation, but have opened a PR since the code portion is nearly complete.
Array2DI4 CellsOnCell;       ///< Indx of cells that neighbor each cell
ArrayHost2DI4 CellsOnCellH;  ///< Indx of cells that neighbor each cell
This pattern of declaring two arrays (host and device) per mesh variable will double the amount of memory needed to store the mesh when the device is a CPU. Additionally, redundant copies between the two arrays will be performed. It is probably fine for now, but long-term we might need a different solution.
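One way to avoid the duplicate storage on CPU-only builds is to make the host array alias the device array. A minimal sketch of that idea, assuming a hypothetical `SKETCH_DEVICE_IS_CPU` build flag and a `CellsOnCellPair` stand-in (neither is Omega's actual API):

```cpp
#include <cassert>
#include <vector>

// Hedged sketch, not Omega's real array types: when the device is the
// CPU, the "host" array can be a reference bound to the device array,
// so only one copy is stored and host/device copies become no-ops.
#define SKETCH_DEVICE_IS_CPU 1

struct CellsOnCellPair {
   std::vector<int> Device; // device-resident array
#if SKETCH_DEVICE_IS_CPU
   std::vector<int> &Host = Device; // alias: no duplicate storage on CPU
#else
   std::vector<int> Host;           // separate host mirror on GPU builds
#endif
};
```

With the alias in place, a host-side write is immediately visible through the device handle, so the redundant deep copies disappear on CPU builds while GPU builds keep the mirrored pair.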
f0503a5 to 5ed7ba3
Still reviewing the code, but with the needed change to the parmetis location variable in the CMake configuration, I built and successfully ran the test on Chrysalis and Frontier.
Just noting that for implementing differential operators it would be good for the
Thanks @mwarusz, yes I realized that and am adding computations.
This looks mostly good. A few questions/comments and the specific change requested in a separate comment.
Any reason not to read all the quantities at once with a single read routine? That way we don't need to retain the IO decompositions and other IO stuff in the class.
It would be good to flesh out a little more detail in the DevGuide how you would actually use this class throughout the code.
Given that the instance will be created in an Omega init routine, how would you access this by the run method? We won't be able to pass the instance up through the coupler. This is one reason the other classes have been storing all instances for later retrieval. Another option is to have some big omega header with extern static instance for stuff that needs to be saved from one phase to another.
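The store-for-later-retrieval pattern mentioned above could look roughly like the following sketch. The names (`Mesh::create`, `Mesh::get`, `AllMeshes`) are illustrative assumptions, not Omega's exact API: instances created during init are kept in a static map so the run phase can look them up by name without passing them through the coupler.

```cpp
#include <map>
#include <memory>
#include <string>

// Hedged sketch of a static instance registry (names are hypothetical).
class Mesh {
 public:
   // Create (or return an existing) named instance during init.
   static Mesh &create(const std::string &Name) {
      auto &Slot = AllMeshes[Name];
      if (!Slot)
         Slot = std::unique_ptr<Mesh>(new Mesh());
      return *Slot;
   }
   // Retrieve a previously created instance during the run phase.
   static Mesh &get(const std::string &Name) {
      return *AllMeshes.at(Name); // throws if init never created it
   }
   int NCellsAll = 0; // example member standing in for mesh data

 private:
   Mesh() = default; // force creation through the registry
   static std::map<std::string, std::unique_ptr<Mesh>> AllMeshes;
};
std::map<std::string, std::unique_ptr<Mesh>> Mesh::AllMeshes;
```

The private constructor forces all creation through `create`, which is what makes later `get` calls safe: every live instance is guaranteed to be in the registry.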
@philipwjones, thanks for the comments. I'll make edits to have a single read routine and instance storage/retrieval.
aa61e7f to 7a010ff
- Change to All vs. Owned totals
- Move IO init to method
- Add comments
- Fix some long lines
- Add lon/lat test for edge/vertex coordinates
@sbrus89 Thanks for the explanation - I guess we can leave the multiple reads for now and consolidate later once we've determined what we're reading vs computing.
Thanks again for your comments @philipwjones, @brian-oneill, and @mwarusz. I think I have addressed the changes suggested so far. Please let me know if anything else comes to mind.
I noticed some build issues, so I'm sorting those out.
@sbrus89 Just a couple (hopefully final) things. First, Omega is following the MPAS convention of adding boundary conditions in the NCellsAll+1 location, so all arrays should be dimensioned NCellsSize (and NEdgesSize, etc.) to accommodate the extra entry, since the connectivity arrays will point to that location for boundary Cells/Edges/Vertices. Second, can you verify that the halo cells are filled correctly? Since the CellID, etc. arrays do have the IDs in the halo regions, the parallel IO may be filling them correctly, but I have not checked that - you could do a simple before/after Halo update comparison to see. I built the docs and they look fine, and verified the unit tests pass, but will recheck after the above changes.
@philipwjones, thanks for noticing that. I fixed the array sizes and added a test that verifies the read-in values for the halos based on CellID etc. are correct by checking them against a full halo exchange.
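The before/after halo comparison discussed in this exchange could be sketched as follows. Here `haloExchange` is just a stand-in for Omega's actual halo update routine (not the real API): if the parallel IO already filled the halo entries correctly, an authoritative halo exchange must leave the array unchanged.

```cpp
#include <vector>

// Hedged stand-in: a real exchange overwrites halo entries with owned
// values from neighboring ranks; for correctly pre-filled halos it is
// the identity, which is all this sketch models.
std::vector<int> haloExchange(const std::vector<int> &CellID) {
   return CellID;
}

// The test: compare the values as read by parallel IO against the
// values after a full halo exchange; equality means IO filled the
// halo regions correctly.
bool halosFilledByIO(const std::vector<int> &CellID) {
   std::vector<int> Before = CellID;
   std::vector<int> After  = haloExchange(CellID);
   return Before == After;
}
```

The same before/after structure works for CellID, EdgeID, and VertexID arrays alike, since each has halo entries that the exchange would correct if IO had left them wrong.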
Thanks for the changes @sbrus89. This looks fine to me now and still passes unit tests.
I rebased locally and ran the test instructions, but got errors:
error:
error:
Looks good to me; it builds, and the unit tests pass successfully on Chrysalis and Frontier.
@mark-petersen The error you're getting on Frontier is because IOTest is trying to write a test output file and doesn't have permission. You'll have to run this from the /lustre/orion/scratch spaces on Frontier (e.g., ${MEMBERWORK}/cli115). The compute nodes don't have write access to home directory spaces. The memory high-water mark is just a diagnostic and wasn't the actual error.
Thanks @philipwjones and @grnydawn for the help. I made two mistakes. On Perlmutter and Frontier, I have to run tests on the scratch file system for PIO. On Perlmutter, I also had to adjust the instructions (change to your paths).
Approving based on testing and review by Phil and Bryan. Thanks!
@mwarusz, let me know if you have any other comments. Otherwise I'll plan on merging this soon.
I have one small comment regarding the dev docs, but this should be an easy change, and everything else looks good, so I am approving it.
This PR introduces the Mesh class for the local mesh variables on an MPI rank. It includes unit tests that check mesh variables against various metrics to ensure the values have been read correctly.