Commit

Added Keras citation
andreped authored Jan 21, 2024
1 parent 6cf593b commit c1f134d
Showing 2 changed files with 9 additions and 1 deletion.
paper/paper.bib (8 additions, 0 deletions)
@@ -9,6 +9,14 @@ @misc{tensorflow2015abadi
year={2015},
}

@online{chollet2015keras,
title={Keras},
author={Chollet, Francois and others},
year={2015},
publisher={GitHub},
url={https://github.com/fchollet/keras},
}

@software{falcon2023lightning,
author = {Falcon, William and others},
title = {{PyTorch Lightning}},
paper/paper.md (1 addition, 1 deletion)
@@ -47,7 +47,7 @@ GradientAccumulator has already been used in several research studies [@pedersen

# Implementation

`GradientAccumulator` implements two main approaches for adding gradient accumulation support to an existing TensorFlow model: GA support can be added either through model wrapping or through optimizer wrapping. By wrapping the model, the `train_step` of a given Keras model is updated such that the accumulated gradients are applied only after a user-defined number of backward steps. Wrapping the optimizer works similarly, but the update control is handled directly in the optimizer itself; this is done in such a way that _any_ optimizer can be used with this approach.
`GradientAccumulator` implements two main approaches for adding gradient accumulation support to an existing TensorFlow model: GA support can be added either through model wrapping or through optimizer wrapping. By wrapping the model, the `train_step` of a given Keras [@chollet2015keras] model is updated such that the accumulated gradients are applied only after a user-defined number of backward steps. Wrapping the optimizer works similarly, but the update control is handled directly in the optimizer itself; this is done in such a way that _any_ optimizer can be used with this approach.


More details and tutorials on getting started with the `GradientAccumulator` package can be found in the `GradientAccumulator` \href{https://gradientaccumulator.readthedocs.io/}{documentation}.
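For context, a minimal sketch of the two wrapping approaches described in the changed paragraph above is given below. The wrapper names `GradientAccumulateModel` and `GradientAccumulateOptimizer` and the `accum_steps` argument are taken from the package's documentation; treat the exact signatures as illustrative assumptions rather than an authoritative API reference.

```python
import tensorflow as tf
# Wrapper classes from the GradientAccumulator package
# (names and arguments assumed from the project's docs).
from gradient_accumulator import GradientAccumulateModel, GradientAccumulateOptimizer

# A plain Keras model to be trained with gradient accumulation.
inputs = tf.keras.Input(shape=(32,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Approach 1: model wrapping. The wrapper overrides train_step so that
# gradients are accumulated and only applied every `accum_steps` batches.
ga_model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output)
ga_model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# Approach 2: optimizer wrapping. The accumulation logic lives inside the
# optimizer, so any Keras optimizer can be wrapped.
ga_opt = GradientAccumulateOptimizer(optimizer=tf.keras.optimizers.SGD(1e-2), accum_steps=4)
model.compile(optimizer=ga_opt, loss="mse")
```

With either variant, calling `fit` on batches of size b approximates an effective batch size of b * accum_steps, since weight updates happen only after the accumulated backward steps.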
