The current implementation is unnecessarily memory-hungry, especially the grid mode. There are two main ways I've thought of to fix this.

Notation: $\Delta_t$ = sim_dt, $\Delta_o$ = output_dt, $\tau$ = simulation duration.
Option 1: Instead of allocating an array of size $N_{grid} \cdot \frac{\tau}{\Delta_t}$ and downsampling the timeseries at the end, a smaller array of size $N_{grid} \cdot \frac{\Delta_o}{\Delta_t}$ could be allocated. During the simulation, the results in this transient array would be averaged into the final output array, and the transient array could then be reused for the next averaging window. This would require some work to figure out how to handle puffs that overlap two or more averaging windows.
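A minimal sketch of the windowed-averaging idea, assuming a NumPy-style grid simulation; the sizes and the `concentration_at` helper are hypothetical stand-ins for the real per-timestep computation, and puffs that straddle a window boundary are not handled here:

```python
import numpy as np

# Hypothetical sizes -- not taken from the actual simulation.
n_grid = 1000          # number of grid cells (N_grid)
sim_dt = 1.0           # simulation timestep (Delta_t), seconds
output_dt = 60.0       # output averaging window (Delta_o), seconds
duration = 3600.0      # total simulation duration (tau), seconds

steps_per_window = int(round(output_dt / sim_dt))   # Delta_o / Delta_t
n_windows = int(round(duration / output_dt))        # tau / Delta_o

# Final output: one time-averaged value per grid cell per output window.
output = np.zeros((n_windows, n_grid))

# Transient buffer covering a single averaging window; it replaces the
# full (tau / Delta_t)-step timeseries array and is reused every window.
window_buf = np.zeros((steps_per_window, n_grid))

def concentration_at(step):
    """Hypothetical stand-in for the per-timestep concentration field."""
    return np.zeros(n_grid)

for w in range(n_windows):
    for i in range(steps_per_window):
        window_buf[i] = concentration_at(w * steps_per_window + i)
    # Average the window into the final output, then reuse the buffer.
    output[w] = window_buf.mean(axis=0)
    window_buf[:] = 0.0
```

Peak memory would drop from roughly $N_{grid} \cdot \frac{\tau}{\Delta_t}$ values to $N_{grid} \cdot \frac{\Delta_o}{\Delta_t}$ plus the output array itself; the open question is how a puff whose lifetime spans a window boundary contributes to both windows.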
Option 2: Since many timesteps are zero, the results could be built using sparse arrays. The threshold for this to save memory is that at least 66% of the array is zero, which I think is the case but should be verified. This is more straightforward than option 1, but seemingly with smaller memory savings.
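A rough check of the ~66% figure, assuming a COO-style sparse layout that stores one value plus a row and a column index (grid cell, timestep) per nonzero entry, all 8 bytes wide; the array dimensions below are made up for illustration:

```python
# Back-of-envelope memory comparison for a dense vs. COO-style sparse
# (grid cell x timestep) array; every stored value and index assumed 8 bytes.
n_grid, n_steps = 1000, 3600        # hypothetical dimensions
n_total = n_grid * n_steps
dense_bytes = 8 * n_total

for zero_fraction in (0.50, 2 / 3, 0.90):
    nnz = round(n_total * (1 - zero_fraction))
    # One value plus two indices per stored nonzero entry.
    sparse_bytes = nnz * (8 + 8 + 8)
    print(f"{zero_fraction:.0%} zeros: dense {dense_bytes:,} B, "
          f"sparse {sparse_bytes:,} B")
```

With three 8-byte entries per nonzero, the sparse layout breaks even when about one third of the entries are nonzero, i.e. roughly two thirds zero, which is where the 66% threshold comes from; the break-even point shifts if narrower index types or a CSR-style layout are used.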