List of useful features, from most to least urgent.
Proper handling of multiple plasma species. So far, every species deposits chi as if it were an electron, so the results with two or more species are wrong, which is a strong limitation.
Accelerate H2D and D2H copies when `laser.3d_on_host = 1`.
Improve parallelization when `laser.3d_on_host = 1`. Currently, in `Wait` and `Notify`, I suspect that the `ParallelFor` copies the data to device memory, although the copy should go from/to a host array (the MPI buffer) to/from a host array (the 3D laser envelope): done in Avoid spurious copies to/from GPU in laser MPI communication pattern #807.
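The host-to-host copy the item above describes can be sketched with plain CPU loops: when the 3D envelope lives in host memory, the MPI buffer should be packed without any device kernel, so no H2D/D2H round trip is triggered. The `Host3D` layout and `pack_slab` helper are illustrative assumptions, not the actual communication code.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: pack z-planes of a host-resident 3D array into a
// flat host MPI buffer with ordinary loops, instead of a device
// ParallelFor that would force the data through GPU memory.
struct Host3D {
    int nx, ny, nz;
    std::vector<double> data; // x fastest, then y, then z
    double& at(int i, int j, int k) { return data[(std::size_t(k) * ny + j) * nx + i]; }
};

// Copy nz_slab z-planes starting at plane k0 into buf (host-to-host only).
void pack_slab(Host3D& a, int k0, int nz_slab, std::vector<double>& buf)
{
    buf.resize(std::size_t(a.nx) * a.ny * nz_slab);
    std::size_t m = 0;
    for (int k = k0; k < k0 + nz_slab; ++k)
        for (int j = 0; j < a.ny; ++j)
            for (int i = 0; i < a.nx; ++i)
                buf[m++] = a.at(i, j, k);
}
```

The unpack on the receiving side is the mirror image; the point is that both directions stay entirely in host memory.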
The line `cudaGraphExecDestroy(m_cuda_graph_exe_vcycle[i]);` in `MultiGrid::~MultiGrid` causes an error when running the MG solver in Debug mode on NVIDIA GPUs. This should be fixed, although the simulation runs fine until finalise: done in MultiGrid Solver: More generic type of coefficients #808.
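One common cause of this kind of destructor error is destroying a graph handle that was never instantiated (or destroying it twice). A minimal sketch of a guarded-destruction pattern, with a `std::function` standing in for `cudaGraphExecDestroy` so the logic is testable without a GPU; all names here are hypothetical:

```cpp
#include <functional>

// Hypothetical sketch: only destroy a handle that was actually created,
// and null it afterwards, so a destructor can never pass a stale or
// uninitialized handle to the CUDA runtime.
struct GraphExecGuard {
    void* handle = nullptr;               // stands in for cudaGraphExec_t
    std::function<int(void*)> destroy_fn; // stands in for cudaGraphExecDestroy

    ~GraphExecGuard() {
        if (handle != nullptr && destroy_fn) { // guard: skip uninstantiated graphs
            destroy_fn(handle);
            handle = nullptr;                  // prevent a second destroy
        }
    }
};
```

In the real destructor the same shape applies: check that each `m_cuda_graph_exe_vcycle[i]` was instantiated before destroying it, and reset it afterwards.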
Option to not evolve the laser phase --> Maxence.
Properly handle Mult and Chi.
Implement AMD GPU support.
In case this happens: make sure we don't exceed MAX_INT.
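The overflow check above can be a one-liner performed in 64-bit arithmetic before any 32-bit indexing is used. A minimal sketch, with a hypothetical helper name:

```cpp
#include <climits>
#include <cstdint>

// Hypothetical sketch: verify the flattened cell count of a 3D array
// fits in a 32-bit int before using int indexing on it.
bool fits_in_int(std::int64_t nx, std::int64_t ny, std::int64_t nz)
{
    const std::int64_t ncells = nx * ny * nz; // computed in 64-bit
    return ncells <= static_cast<std::int64_t>(INT_MAX);
}
```

For example, a 1024³ grid (2³⁰ cells) fits, while 2048 × 2048 × 1024 (2³² cells) does not.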
Initialization of a laser profile via the parser (non-Gaussian).
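A sketch of what parser-driven initialization could look like: the envelope is filled from an arbitrary user profile f(x, y, z), with a lambda standing in for the parsed expression. The `Profile` alias and `init_envelope` helper are illustrative assumptions, not the actual API.

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch: fill the envelope from an arbitrary profile
// f(x, y, z) supplied by the user, instead of a hard-coded Gaussian.
// A lambda stands in for the expression produced by the input parser.
using Profile = std::function<double(double, double, double)>;

std::vector<double> init_envelope(const Profile& f, int nx, double dx)
{
    std::vector<double> a(nx);
    for (int i = 0; i < nx; ++i)
        a[i] = f(i * dx, 0.0, 0.0); // 1D lineout for brevity
    return a;
}
```

A non-Gaussian profile is then just a different expression, e.g. `[](double x, double, double) { return x * x; }`.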
The possibility to load and restart the laser.
The possibility to propagate the laser backwards.
Communicate guard cells? This may account for the small differences (~1.e-9) observed between serial and parallel runs.
Fix the Array3/Array4 issue. More details needed.
Separate resolution and bounds for the laser array, as well as multiple lasers --> Maxence.
Compatibility with the adaptive time step.