diff --git a/docs/data-generation.md b/docs/data-generation.md
index c93e2ef..63a9fd8 100644
--- a/docs/data-generation.md
+++ b/docs/data-generation.md
@@ -48,6 +48,6 @@ Following the structure given in the [general data generation](ML_training.md) c
 With the previous step having extracted fine-grained data for each time step (and each trajectory for which it was repeated), we now need to run a single-timestep coarse-grained simulation. To do this, see [files/coarse_simulations](../files/coarse_simulations/). Submitting [run_coarse_sims.sh](../files/coarse_simulations/run_coarse_sims.sh) will run a single-step simulation for each coarsened timestep created in the previous step.
 
-5. Calculate the correction.
+The subsequent steps (calculating the error, reformatting the data for ingestion into TensorFlow, and training the model) are covered in [ML model training implementation](training_implementation.md).
+
-
diff --git a/docs/workflow.md b/docs/workflow.md
index 3c2ec21..a4fd4be 100644
--- a/docs/workflow.md
+++ b/docs/workflow.md
@@ -5,8 +5,7 @@ The system needs to have all the tools and packages (in suitable versions) insta
 The example workflow described here does not require a pre-trained ML model; we are using a placeholder model that always returns zeroes to showcase the framework, and the script is provided here. Obviously, any other model can be exported in the desired format and used in the workflow.
 
 [< Back](./)
-
-## Export the ML model
+
 ## Compile Hasegawa Wakatani with SmartRedis
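
For context on the workflow.md change above: the placeholder ML model it refers to simply returns zeroes for every input. A minimal TensorFlow sketch of such a model is given below, purely as an illustration; the class name, input signature, and export path are assumptions, and the actual script shipped with the repository may differ.

```python
# Hypothetical sketch of a placeholder model that always returns zeroes,
# in the spirit of the one referenced in docs/workflow.md. Names, shapes,
# and the export path are illustrative assumptions, not the project's
# real interface.
import tensorflow as tf


class ZeroModel(tf.Module):
    """Returns a tensor of zeros with the same shape as its input."""

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, None], dtype=tf.float32)])
    def __call__(self, x):
        # The "correction" predicted by this placeholder is identically zero,
        # so the coupled simulation behaves as if no ML model were present.
        return tf.zeros_like(x)


if __name__ == "__main__":
    model = ZeroModel()
    # Export in SavedModel format so an inference runtime can load it.
    tf.saved_model.save(model, "zero_model")
```

Because the model contributes a zero correction, running the coupled simulation with it should reproduce the uncorrected coarse-grained dynamics, which makes it a convenient end-to-end test of the SmartRedis plumbing before a trained model is substituted.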