Multi-Template Matching

Multi-Template-Matching is an accessible method to perform object detection in images, using one or several template images for the search.
The strength of the method, compared to classical single-template matching, is that combining the detections from multiple templates widens the range of detectable patterns. This helps when the appearance of the object varies across images, for instance through rotation or flipping.
The detections from the different templates are not simply pooled: they are filtered with Non-Maxima Suppression (NMS) to prevent overlapping detections, as sketched below.
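
Below is a minimal sketch of this NMS step, applied to hypothetical (x, y, w, h, score) detections pooled from several templates. It illustrates the principle only and is not the library's actual code.

```python
# Greedy Non-Maxima Suppression over boxes given as (x, y, w, h) tuples.

def iou(a, b):
    """Intersection-over-Union of two (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(detections, max_overlap=0.25):
    """Keep the best-scoring boxes; drop any box overlapping a kept one.

    detections: list of ((x, y, w, h), score), pooled from all templates.
    """
    kept = []
    for box, score in sorted(detections, key=lambda d: d[1], reverse=True):
        if all(iou(box, kept_box) <= max_overlap for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```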

Implementations

We currently have implemented Multi-Template-Matching (MTM) in:

  • Fiji
    Activate the IJ-OpenCV and Multi-Template Matching update sites.

  • Original Python implementation, relying on OpenCV's matchTemplate
    pip install Multi-Template-Matching (case-sensitive, and mind the hyphens); see the usage sketch after this list.

  • python-oop: a more object-oriented version with a cleaner syntax, relying on scikit-image and shapely (for the BoundingBox)
    It may be a bit slower, but it is more interoperable and easier to extend.

  • KNIME (relying on the Python implementation)
    This repository also contains the workflow for classification using multiple templates.
    Download the workflow from the KNIME Hub.
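
As an illustration of the original Python implementation, here is a minimal sketch assuming the matchTemplates entry point of the pip package and its score_threshold/maxOverlap parameters; the image file names are placeholders.

```python
import cv2
from MTM import matchTemplates  # installed via: pip install Multi-Template-Matching

image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# One entry per template: (label, template image). Rotated or flipped copies
# of the same template widen the range of detectable orientations.
listTemplates = [
    ("original",  template),
    ("rotated90", cv2.rotate(template, cv2.ROTATE_90_CLOCKWISE)),
    ("flipped",   cv2.flip(template, 1)),
]

# Detections from all templates are pooled, then filtered with NMS;
# maxOverlap sets the tolerated overlap between reported boxes.
hits = matchTemplates(listTemplates,
                      image,
                      score_threshold=0.5,
                      maxOverlap=0.25)
print(hits)  # one entry per kept detection: template label, bounding box, score
```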

Documentation

Refer to the wiki section of the respective GitHub repository for implementation-specific documentation.
In particular, the Fiji and KNIME implementations have dedicated YouTube tutorials, while the Python implementation comes with example notebooks that can be executed in the browser.
Below are some generic documentation pages:

Additional resources:

Citation

If you use these implementations, please cite:

Thomas, L.S.V., Gehrig, J. Multi-template matching: a versatile tool for object-localization in microscopy images. BMC Bioinformatics 21, 44 (2020). https://doi.org/10.1186/s12859-020-3363-7
Download the citation as a .ris file.

Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 721537 “ImageInLife”.