updates changelog v0.5.3 (#2271)
* release note v0.5.3

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update changelog

Signed-off-by: Wenqi Li <wenqil@nvidia.com>
wyli authored Jun 1, 2021
1 parent 385ce70 commit d78c669
Showing 3 changed files with 41 additions and 5 deletions.
38 changes: 37 additions & 1 deletion CHANGELOG.md
@@ -5,7 +5,42 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

## [Unreleased]
## [0.5.0] - 2020-04-09
## [0.5.3] - 2021-05-28
### Changed
* Project default branch renamed to `dev` from `master`
* Base Docker image upgraded to `nvcr.io/nvidia/pytorch:21.04-py3` from `nvcr.io/nvidia/pytorch:21.02-py3`
* Enhanced type checks for the `iteration_metric` handler
* Enhanced `PersistentDataset` to use `tempfile` during caching computation
* Enhanced various info/error messages
* Enhanced performance of `RandAffine`
* Enhanced performance of `SmartCacheDataset`
* Optionally requires `cucim` when the platform is `Linux`
* Default `device` of `TestTimeAugmentation` changed to `cpu`

### Fixed
* Download utilities now provide better default parameters
* Duplicated `key_transforms` in the patch-based transforms
* A multi-GPU issue in `ClassificationSaver`
* A default `meta_data` issue in `SpacingD`
* Dataset caching issue with the persistent data loader workers
* A memory issue in `permutohedral_cuda`
* Dictionary key issue in `CopyItemsd`
* `box_start` and `box_end` parameters for deepgrow `SpatialCropForegroundd`
* Tissue mask array transpose issue in `MaskedInferenceWSIDataset`
* Various type hint errors
* Various docstring typos

### Added
* Support of `to_tensor` and `device` arguments for `TransformInverter`
* Slicing options with `SpatialCrop`
* Class name alias for the networks for backward compatibility
* `k_divisible` option for `CropForeground` (see the sketch after this list)
* `map_items` option for `Compose` (also shown in the sketch below)
* Warnings of `inf` and `nan` for surface distance computation
* A `print_log` flag to the image savers
* Basic testing pipelines for Python 3.9
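
For illustration, a minimal sketch (not part of this commit) of how the new `k_divisible` and `map_items` options might be used; the array shape and divisibility factor are hypothetical:

```python
import numpy as np
from monai.transforms import Compose, CropForeground

# hypothetical channel-first volume with a 20x20x20 non-zero foreground region
img = np.zeros((1, 30, 30, 30), dtype=np.float32)
img[:, 5:25, 5:25, 5:25] = 1.0

# crop to the foreground bounding box, expanded so each cropped spatial size is divisible by 4
cropper = CropForeground(select_fn=lambda x: x > 0, margin=0, k_divisible=4)

# map_items=False applies the transforms to the input as a whole
# instead of mapping them over the items of a list input
pipeline = Compose([cropper], map_items=False)
out = pipeline(img)
print(out.shape)  # spatial sizes are multiples of 4, e.g. (1, 20, 20, 20)
```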

## [0.5.0] - 2021-04-09
### Added
* Overview document for [feature highlights in v0.5.0](https://github.com/Project-MONAI/MONAI/blob/master/docs/source/highlights.md)
* Invertible spatial transforms
@@ -255,6 +290,7 @@ the postprocessing steps should be used before calling the metrics methods
[highlights]: https://github.com/Project-MONAI/MONAI/blob/master/docs/source/highlights.md

[Unreleased]: https://github.com/Project-MONAI/MONAI/compare/0.5.0...HEAD
[0.5.3]: https://github.com/Project-MONAI/MONAI/compare/0.5.0...0.5.3
[0.5.0]: https://github.com/Project-MONAI/MONAI/compare/0.4.0...0.5.0
[0.4.0]: https://github.com/Project-MONAI/MONAI/compare/0.3.0...0.4.0
[0.3.0]: https://github.com/Project-MONAI/MONAI/compare/0.2.0...0.3.0
2 changes: 1 addition & 1 deletion docs/source/highlights.md
@@ -137,7 +137,7 @@ To convert images into files or debug the transform chain, MONAI provides `SaveI
Medical images have different shape formats. They can be `channel-last`, `channel-first` or even `no-channel`. We may, for example, want to load several `no-channel` images and stack them as `channel-first` data. To improve the user experience, MONAI provides an `EnsureChannelFirst` transform to automatically detect the data shape according to the meta information and consistently convert it to the `channel-first` format.
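As a rough sketch of this behaviour (the file name is a placeholder, and `EnsureChannelFirstd` is assumed to be the dictionary-based wrapper of the transform described above):

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Spacingd

transform = Compose([
    LoadImaged(keys="image"),   # also loads the image meta data
    # uses the `original_channel_dim` meta information to move or create the
    # channel dimension so the array becomes channel-first
    EnsureChannelFirstd(keys="image"),
    Spacingd(keys="image", pixdim=(1.0, 1.0, 1.0)),
])
data = transform({"image": "example_volume.nii.gz"})  # placeholder file
print(data["image"].shape)  # (channels, spatial dims...)
```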

### 13. Invert spatial transforms and test-time augmentations
It is often desirable to invert the previously applied spatial transforms (resize, flip, rotate, zoom, crop, pad, etc.) in deep learning workflows, for example, to return to the original imaging space after processing the image data in a normalized data space. We enhance almost all the spatial transforms with an `inverse` operation and release this experimental feature in v0.5.0. Users can easily invert all the spatial transforms for one transformed data item or a batch of data items. It can also be achieved within the workflows by using the `TransformInverter` handler.
It is often desirable to invert the previously applied spatial transforms (resize, flip, rotate, zoom, crop, pad, etc.) in deep learning workflows, for example, to return to the original imaging space after processing the image data in a normalized data space. We enhance almost all the spatial transforms with an `inverse` operation and release this experimental feature in v0.5. Users can easily invert all the spatial transforms for one transformed data item or a batch of data items. It can also be achieved within the workflows by using the `TransformInverter` handler.
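A minimal sketch of the inverse operation on a dictionary-based pipeline (the transforms, file name and inference step are illustrative only):

```python
from monai.transforms import Compose, EnsureChannelFirstd, Flipd, LoadImaged, Spacingd

transform = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Spacingd(keys="image", pixdim=(1.5, 1.5, 2.0)),
    Flipd(keys="image", spatial_axis=0),
])

item = transform({"image": "example_volume.nii.gz"})  # placeholder file
# ... run the model on item["image"] in the normalized space ...

# replay the tracked spatial operations in reverse to return to the original imaging space
restored = transform.inverse(item)
```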

If the pipeline includes random transformations, users may want to observe the effect that these transformations have on the output. The typical approach is to pass the same input through the transforms multiple times with different random realizations, then use the inverse transforms to move all the results to a common space and calculate the metrics. MONAI provides `TestTimeAugmentation` for this feature, which by default will calculate the `mode`, `mean`, `standard deviation` and `volume variation coefficient`.
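Roughly, and assuming `TestTimeAugmentation` is importable from `monai.data` with the constructor arguments described in the v0.5 docs (the network and data below are stand-ins):

```python
import numpy as np
import torch
from monai.data import TestTimeAugmentation
from monai.transforms import AddChanneld, Compose, RandAffined

# an invertible random transform whose effect we want to average out at test time
transform = Compose([
    AddChanneld(keys=["image", "label"]),
    RandAffined(keys=["image", "label"], prob=1.0, rotate_range=0.3, padding_mode="zeros"),
])

net = torch.nn.Conv3d(1, 1, 3, padding=1)  # stand-in for a trained segmentation model

tt_aug = TestTimeAugmentation(
    transform,
    batch_size=4,
    num_workers=0,
    inferrer_fn=lambda x: torch.sigmoid(net(x)),
    device="cpu",
)
data = {
    "image": np.random.rand(24, 24, 24).astype(np.float32),
    "label": np.random.rand(24, 24, 24).astype(np.float32),
}
mode, mean, std, vvc = tt_aug(data, num_examples=8)
```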

6 changes: 3 additions & 3 deletions docs/source/whatsnew.md
@@ -1,7 +1,7 @@
# What's new in 0.5.0 🎉
# What's new in 0.5 🎉

## Invert spatial transforms and test-time augmentations
It is often desirable to invert the previously applied spatial transforms (resize, flip, rotate, zoom, crop, pad, etc.) in deep learning workflows, for example, to return to the original imaging space after processing the image data in a normalized data space. We enhance almost all the spatial transforms with an `inverse` operation and release this experimental feature in v0.5.0. Users can easily invert all the spatial transforms for one transformed data item or a batch of data items. It can also be achieved within the workflows by using the `TransformInverter` handler.
It is often desirable to invert the previously applied spatial transforms (resize, flip, rotate, zoom, crop, pad, etc.) in deep learning workflows, for example, to return to the original imaging space after processing the image data in a normalized data space. We enhance almost all the spatial transforms with an `inverse` operation and release this experimental feature in v0.5. Users can easily invert all the spatial transforms for one transformed data item or a batch of data items. It can also be achieved within the workflows by using the `TransformInverter` handler.

If the pipeline includes random transformations, users may want to observe the effect that these transformations have on the output. The typical approach is to pass the same input through the transforms multiple times with different random realizations, then use the inverse transforms to move all the results to a common space and calculate the metrics. MONAI provides `TestTimeAugmentation` for this feature, which by default will calculate the `mode`, `mean`, `standard deviation` and `volume variation coefficient`.

@@ -33,7 +33,7 @@ An end-to-end example is presented at [`project-monai/tutorials`](https://github
![deepgrow end-to-end](../images/deepgrow.png)

## Learning-based image registration
Starting from v0.5.0, MONAI provides experimental features for building learning-based 2D/3D registration workflows. These include image similarity measures as loss functions, bending energy as model regularization, network architectures, and warping modules. The components can be used to build the major unsupervised and weakly-supervised algorithms.
Starting from v0.5, MONAI provides experimental features for building learning-based 2D/3D registration workflows. These include image similarity measures as loss functions, bending energy as model regularization, network architectures, and warping modules. The components can be used to build the major unsupervised and weakly-supervised algorithms.
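As a hedged sketch of how such components might be combined (the tensors are random placeholders and the loss weighting is arbitrary):

```python
import torch
from monai.losses import BendingEnergyLoss, GlobalMutualInformationLoss
from monai.networks.blocks import Warp

# placeholder 3-D volumes and a dense displacement field (DDF), batch size 2
moving = torch.rand(2, 1, 32, 32, 32)
fixed = torch.rand(2, 1, 32, 32, 32)
ddf = torch.rand(2, 3, 32, 32, 32, requires_grad=True)

warp = Warp()                        # resamples `moving` with the predicted DDF
warped = warp(moving, ddf)

sim = GlobalMutualInformationLoss()  # image similarity between warped moving and fixed
reg = BendingEnergyLoss()            # smoothness (bending energy) penalty on the DDF

loss = sim(warped, fixed) + 0.5 * reg(ddf)
loss.backward()
```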

The following figure shows the registration of CT images acquired at different time points for a single patient using MONAI:

