# Optax 0.0.5

## Changelog
Note: this is the first GitHub release of Optax. It includes all changes since the repository was created.
**Implemented enhancements:**
- Implement lookahead optimiser #17
- Implement support for Yogi optimiser #9
- Implement rectified Adam #8
- Implement gradient centralisation #7
- Implement scaling by AdaBelief #6
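
These optimisers are all exposed through optax's standard `GradientTransformation` interface. As a rough illustration, here is a minimal sketch assuming the optax API around this release; the toy loss, parameters and learning rate are placeholders, not values from any of the PRs above:

```python
import jax
import jax.numpy as jnp
import optax

# AdaBelief-style scaling (#6) chained with a plain learning-rate scale;
# optax.yogi (#9), optax.radam (#8) and the optax.lookahead wrapper (#17)
# plug into the same init/update loop.
optimizer = optax.chain(optax.scale_by_belief(), optax.scale(-1e-3))

params = {'w': jnp.zeros(3)}
opt_state = optimizer.init(params)

grads = jax.grad(lambda p: jnp.sum(p['w'] ** 2))(params)  # toy loss
updates, opt_state = optimizer.update(grads, opt_state)
params = optax.apply_updates(params, updates)
```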
**Closed issues:**
- Multiple optimizers using optax #59
- Change masked wrapper to use mask_fn instead of mask #57
- Prevent creating unnecessary momentum variables #52
- Implement Differentially Private Stochastic Gradient Descent #50
- RMSProp does not match original Tensorflow impl #49
- JITted Adam results in NaN when setting decay to integer 0 #46
- Option to not decay bias with additive_weight_decay #25
- Support specifying end_value for exponential_decay #21
- Schedules for Non-Learning Rate Hyper-parameters #20
- Implement OneCycle Learning Rate Schedule #19
- adam does not learn? #18
- Which JAX-based libraries is optax compatible with? #14
- Manually setting the learning_rate? #4
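
Several of the closed issues concern schedules (#19, #20, #21). The sketch below shows how they can be combined, assuming the current optax schedule API; the step counts and rates are illustrative only:

```python
import optax

# Exponential decay with the end_value floor requested in #21.
lr_schedule = optax.exponential_decay(
    init_value=1e-3,         # initial learning rate
    transition_steps=1_000,  # steps per decay period
    decay_rate=0.5,          # multiplicative decay factor
    end_value=1e-5,          # lower bound on the decayed value
)

# One-cycle schedule requested in #19 (cosine variant).
onecycle = optax.cosine_onecycle_schedule(
    transition_steps=10_000, peak_value=1e-2)

# Schedules are plain callables from step count to value, so they can also
# drive hyper-parameters other than the learning rate (#20).
optimizer = optax.adam(learning_rate=lr_schedule)
print(lr_schedule(0), onecycle(5_000))
```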
**Merged pull requests:**
- Fix pylint errors. #73 (copybara-service[bot])
- Add PyPI release workflow and increment the version. #70 (copybara-service[bot])
- Add flax to requirements for tests. #69 (copybara-service[bot])
- Add first flax equivalence test. #68 (copybara-service[bot])
- Targets optional in l2loss and huberloss. #67 (copybara-service[bot])
- Add .pylintrc and run pylint checks in CI workflow. #66 (copybara-service[bot])
- Increase optax version #63 (copybara-service[bot])
- Add utilities for eigenvector and matrix inverse pth root computation. #62 (copybara-service[bot])
- Add Callable option to optax.masked. #60 (n2cholas)
- Increase optax version for PyPI release. #58 (copybara-service[bot])
- Add momentum and initial_scale to RMSProp #55 (rwightman)
- Prevent creating unnecessary momentum variables. #54 (n2cholas)
- Implement DPSGD #53 (n2cholas)
- Add inject_hyperparams wrapper #48 (n2cholas)
- Format tests and parallelize pytest runs. #47 (copybara-service[bot])
- Provide a canonical implementation of canonical losses used in gradient based optimisation. #45 (copybara-service[bot])
- Expose optax transform's init and update function signatures to facilitate type annotation in user code. #44 (copybara-service[bot])
- Add a transformation and a transformation wrapper. #43 (copybara-service[bot])
- Update reference arxiv link. #41 (copybara-service[bot])
- Move equivalence tests to a separate file, as we will be adding more. #40 (copybara-service[bot])
- Optax: Add MNIST example with Adam optimizer and lookahead wrapper. #39 (copybara-service[bot])
- Optax: gradient transformation for non-negative parameters. #38 (copybara-service[bot])
- Aliases support LR schedules in addition to constant scalar LRs. #37 (copybara-service[bot])
- Optax: add datasets module for image classifier example. #36 (copybara-service[bot])
- Ensure the number of update functions and states is the same in chain. #34 (copybara-service[bot])
- Rename `additive_weight_decay` to `add_decayed_weights`. #33 (copybara-service[bot])
- Remove `scale_by_fromage`. #32 (copybara-service[bot])
- Add AGC to optax `__init__` and add a comment regarding 1D conv weights. #30 (copybara-service[bot])
- Clean nits to make loss and hk.transform() slightly more clear. #29 (copybara-service[bot])
- Disable macos-latest tests (to speed up CI) and add CI status badge. #28 (copybara-service[bot])
- Add a mask wrapper. #27 (n2cholas)
- Support end_value for exponential_decay #26 (n2cholas)
- Add piecewise_interpolate_schedule, linear_onecycle, and cos_onecycle. #22 (n2cholas)
- Yogi #16 (joaogui1)
- Radam #15 (joaogui1)
- gradient centralization #13 (joaogui1)
- Fix haiku_example.py #5 (asmith26)
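
To give a flavour of two of the merged features, the callable option for optax.masked (#60) and the inject_hyperparams wrapper (#48), here is a minimal sketch assuming the current optax API; the mask predicate and hyper-parameter values are purely illustrative:

```python
import jax.numpy as jnp
import optax

params = {'w': jnp.zeros((3, 3)), 'b': jnp.zeros(3)}

def weight_mask(p):
  # Callable mask (#60): apply the wrapped transformation to 'w' only.
  return {name: name != 'b' for name in p}

optimizer = optax.chain(
    # Weight decay on non-bias parameters only (cf. #25 / #33).
    optax.masked(optax.add_decayed_weights(1e-4), weight_mask),
    # inject_hyperparams (#48) stores the learning rate in the optimiser state.
    optax.inject_hyperparams(optax.sgd)(learning_rate=1e-2),
)

opt_state = optimizer.init(params)
grads = {'w': jnp.ones((3, 3)), 'b': jnp.ones(3)}  # placeholder gradients
updates, opt_state = optimizer.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)

# The injected hyper-parameter can be read back (or overwritten) between steps.
print(opt_state[1].hyperparams['learning_rate'])
```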
* This Changelog was automatically generated by github_changelog_generator