diff --git a/docs/Scanner/explanations/segmentation.md b/docs/Scanner/explanations/segmentation.md
index ee844298..e5e7df86 100644
--- a/docs/Scanner/explanations/segmentation.md
+++ b/docs/Scanner/explanations/segmentation.md
@@ -1,7 +1,7 @@
Segmentation of images
===
-The segmentation of an image consists in assigning a label to each of its pixels. For the 3d reconstruction of a plant, we need at least the segmentation of the images into 2 classes: *plant* and *backround*. For a reconstruction with semantic labeling of the point cloud, we will need a semantic segmentation of the images giving one label for each organ type (e.g. {*leaf*, *stem*,*pedicel*, *flower*, *fruit*}). Figures shows describe the binary and multi-class segmentations for a virtual plant.
+Segmenting an image consists of assigning a label to each of its pixels. For the 3D reconstruction of a plant, we need at least a segmentation of the images into two classes: *plant* and *background*. For a reconstruction with semantic labeling of the point cloud, we need a semantic segmentation of the images giving one label per organ type (e.g. {*leaf*, *stem*, *pedicel*, *flower*, *fruit*}). The figure below shows the binary and multi-class segmentations for a virtual plant.
-The architecture of the network is inspired from the U-net [ref], with a ResNet encoder [ref]. It constists in encoding and decoding pathways with skip connections between the 2.
+The architecture of the network is inspired by U-net [^1], with a ResNet encoder [^2]. It consists of an encoding and a decoding pathway with skip connections between the two. The encoding pathway applies a sequence of convolutions, and the image signal is upsampled along the decoding pathway.
+
+The network is trained to segment images of a size $(S_x, S_y)$, which is not necessarily the size of the acquired images. These parameters, *Sx* and *Sy*, must be provided in the configuration file. Images are cropped to $(S_x, S_y)$ before being fed to the DNN, and the output of the task is resized back to the original image size.
+
+[^1]: Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.
+
+[^2]: Zhang, Z., Liu, Q., & Wang, Y. (2018). Road extraction by deep residual u-net. IEEE Geoscience and Remote Sensing Letters, 15(5), 749-753.
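
The crop-then-resize pre-processing described above can be sketched as follows. This is a minimal illustration, not the task's actual implementation: the crop position is not specified in this document, so a center crop is assumed, and `center_crop` is a hypothetical helper name.

```python
import numpy as np

def center_crop(image, sx, sy):
    # Crop an (H, W, C) image to (sy, sx) around its center.
    # Assumption: the task crops centrally; only the target size
    # (Sx, Sy) is documented, not the crop position.
    h, w = image.shape[:2]
    top = (h - sy) // 2
    left = (w - sx) // 2
    return image[top:top + sy, left:left + sx]

# A full-HD acquisition cropped to the network input size (896, 896)
img = np.zeros((1080, 1920, 3))
crop = center_crop(img, 896, 896)
```

After segmentation, the predicted mask would be resized back to the original `(1080, 1920)` shape.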
#### Configuration File
```toml
[Segmentation2D]
-upstream_task = "ImageFilesetExists" #Alternatively Undistorted
-model_fileset = "ModelFileset"
-model_id = "Resnet_896_896_epoch50" # no default value
-query = "{\"channel\":\"rgb\"}" # default is an empty dict '{}'
+model_id = "Resnetdataset_gl_png_896_896_epoch50" # no default value
Sx = 896
Sy = 896
-labels = "[]" # default is empty list to use all trained labels from model
-inverted_labels = "[\"background\"]"
threshold = 0.01
```
-
### DNN model
The neural architecture weights are obtained through training on an annotated dataset (see How to train a DNN for semantic segmentation). Those weights should be stored in the database (at `/models/models`), and the name of the weights file should be provided as the *model_id* parameter in the configuration. You can use our model trained on virtual arabidopsis, available [here](https://media.romi-project.eu/data/Resnetdataset_gl_png_896_896_epoch50.pt).
@@ -144,10 +133,22 @@ A binary mask $m$ is produced from the index or from the output of the DNN, *I*,
\end{cases}
\end{equation}
+This threshold may be chosen empirically, or it may be learnt from annotated data (see the linear SVM section).
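
Concretely, the thresholding step is a single per-pixel comparison. A minimal NumPy sketch (the function name is illustrative, not part of the codebase):

```python
import numpy as np

def binary_mask(index, threshold):
    # Foreground wherever the index (or DNN output) exceeds the threshold.
    return index > threshold

scores = np.array([[0.005, 0.02],
                   [0.80, 0.001]])
mask = binary_mask(scores, 0.01)  # [[False, True], [True, False]]
```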
## Dilation
-If the integer *dilation* parameter is non-zero a morphological dilation is apllied to the image using the function [*binary_dilation*](https://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.binary_dilation) from the *skimage.morphology* module.
+If the integer *dilation* parameter is non-zero, a morphological dilation is applied to the image using the [*binary_dilation*](https://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.binary_dilation) function from the *skimage.morphology* module.
The *dilation* parameter sets the number of times *binary_dilation* is applied iteratively. For a faithful reconstruction this parameter should be set to $0$, but in practice you may want a coarser point cloud: when the segmentation is not perfect, dilation fills the holes, and it also helps when the reconstructed mesh is broken because the point cloud is too thin.
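
A minimal sketch of the iterative dilation, using only NumPy to emulate *binary_dilation* with its default cross-shaped footprint (a skimage version would simply loop over `binary_dilation`; the helper name here is illustrative):

```python
import numpy as np

def dilate_mask(mask, dilation):
    # Each pass ORs the mask with its four axis-aligned shifts,
    # mimicking one application of binary_dilation's default
    # cross-shaped footprint. dilation=0 leaves the mask unchanged.
    out = mask.astype(bool)
    for _ in range(dilation):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True
# dilation=2 grows the single pixel into a diamond of 13 pixels
dilated = dilate_mask(mask, 2)
```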
+## Working with data from the virtual scanner
+
+When working with data generated with the virtual scanner, the *images* folder contains multiple channels corresponding to the various classes for which images were generated (*stem*, *flower*, *fruit*, *leaf*, *pedicel*). You have to select the *rgb* channel using the *query* parameter.
+
+#### Configuration File
+```toml
+[Masks]
+type = "excess_green"
+threshold = 0.2
+query = "{\"channel\":\"rgb\"}"
+```