中文 | English

fracture-Detection

Detecting fresh and old fractures on spine CT images using YOLOR

Example

Other models

Method

Vertebral compression fractures caused by osteoporosis are a major cause of pain and disability in the elderly, so early detection and treatment are critical. Although MRI diagnoses these fractures effectively, it costs considerably more than CT; CT is cheaper but less accurate than MRI at detecting vertebral fractures. To speed up diagnosis and seize the early window for treatment, we propose a YOLO-based object detection method that localizes old and fresh fractures on spine CT images. We replace the CSPDarknet53 backbone of the native YOLOR model with MobileViT and EfficientNet_NS, train the three resulting YOLOR models separately, and finally ensemble them to strengthen feature extraction. Experimental results show that the three YOLOR models reach accuracies of 89%, 89.8%, and 89.2%, respectively. When the convolution layers in these three networks are further replaced with Involution layers and the models are combined by ensembling, the accuracy increases to 93.4%. The proposed method is both fast and accurate, and offers a useful reference for physicians.
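
An Involution layer replaces a convolution's fixed, location-shared kernel with a kernel generated from the input at each spatial position. Below is a minimal PyTorch sketch of such a layer, following the design of the Involution paper (Li et al., CVPR 2021); the hyperparameters are illustrative and not necessarily those used in this repository.

```python
import torch
import torch.nn as nn

class Involution(nn.Module):
    """Minimal involution sketch: a k*k kernel is predicted per spatial
    position and per channel group, then applied to that position's
    neighbourhood. Hyperparameters here are illustrative only."""
    def __init__(self, channels, kernel_size=7, groups=16, reduction=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, kernel_size ** 2 * groups, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # Predict one k*k kernel per group at every spatial position.
        kernel = self.span(self.reduce(x)).view(b, self.g, 1, self.k ** 2, h, w)
        # Gather each position's k*k neighbourhood, grouped by channel.
        patches = self.unfold(x).view(b, self.g, c // self.g, self.k ** 2, h, w)
        # Weight the neighbourhood by the generated kernel and sum it up.
        return (kernel * patches).sum(dim=3).view(b, c, h, w)

# e.g. Involution(64)(torch.randn(1, 64, 32, 32)).shape == (1, 64, 32, 32)
```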

Experiment results

| Model | Backbone | Precision | Recall | AP (fresh) | AP (old) | mAP@0.5 |
|---|---|---|---|---|---|---|
| EfficientDet | EfficientNetB0 | 17.5% | 96.2% | 87.7% | 83% | 85.4% |
| RetinaNet | ResNet50 | 38.2% | 94% | 90.1% | 82.3% | 86.2% |
| YOLOv4 | CSPDarknet53 | 53.4% | 92.2% | 92% | 84.6% | 88.4% |
| Scaled-YOLOv4 | CSPDarknet53 | 62% | 90.4% | 93.2% | 84.6% | 88.9% |
| YOLOR | CSPDarknet53 | 65% | 91.1% | 92.6% | 85.4% | 89% |
| YOLOR | MobileViT | 60.6% | 92.1% | 92.9% | 86.7% | 89.8% |
| YOLOR | EfficientNet_NS | 69.1% | 89.9% | 92.9% | 85.6% | 89.2% |
| YOLOR | CSPDarknet53 (Involution) | 71.3% | 91.8% | 93.7% | 88.2% | 90.9% |
| YOLOR | MobileViT (Involution) | 61.8% | 92.2% | 93.1% | 87.5% | 90.3% |
| YOLOR | EfficientNet_NS (Involution) | 65.6% | 91.1% | 92.7% | 86.3% | 89.5% |
| YOLOR | Ensemble | 63.4% | 95.1% | 95.4% | 91.5% | 93.4% |

Environment

  • Python >= 3.7
  • PyTorch >= 1.7.0
pip install -r requirements.txt

Dataset

DICOM to BMP

python utils/preprocess.py dicom2img datasets/dicoms datasets/bmps --HU
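
Under the hood, conversion with --HU amounts to reading each DICOM slice, rescaling raw pixel values to Hounsfield units, windowing, and saving an 8-bit image. A minimal sketch with pydicom; the window range and helper name are illustrative, not taken from utils/preprocess.py:

```python
import numpy as np
import pydicom
from PIL import Image

def dicom_to_bmp(dcm_path, bmp_path, window=(-200, 1000)):
    """Convert one DICOM slice to an 8-bit BMP via Hounsfield units.
    The window range is illustrative; preprocess.py may use other values."""
    ds = pydicom.dcmread(dcm_path)
    # Raw values -> Hounsfield units via the standard CT rescale attributes.
    hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
    lo, hi = window
    img = np.clip((hu - lo) / (hi - lo), 0, 1) * 255  # window, scale to [0, 255]
    Image.fromarray(img.astype(np.uint8)).save(bmp_path)
```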

Labeling

You can use LabelImg to label your data; it uses the YOLO format with a .txt extension:
<object-class> <x_center> <y_center> <width> <height>
All values are normalized to the image width and height. Please see the Train Custom Data tutorial of YOLOv5 for more details.
Images without objects can be used as background images for training; they do not require labels.
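
To make the format concrete, the snippet below converts one hypothetical label line back to pixel coordinates (the values and the 512x512 image size are invented for illustration):

```python
# One YOLO label line: class, then box center/size normalized to [0, 1].
line = "0 0.512 0.437 0.120 0.095"  # hypothetical values
img_w, img_h = 512, 512             # hypothetical image size

cls, xc, yc, w, h = line.split()
xc, yc = float(xc) * img_w, float(yc) * img_h
w, h = float(w) * img_w, float(h) * img_h
x1, y1 = xc - w / 2, yc - h / 2     # top-left corner in pixels
print(f"class {cls}: ({x1:.0f}, {y1:.0f}) to ({x1 + w:.0f}, {y1 + h:.0f})")
```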

Data structure

datasets/
    -project_name/
        -images/
            -train/
                -*.bmp (or other formats)
            -val/
                -*.bmp (or other formats)
            -test/
                -*.bmp (or other formats)
        -labels/
            -train/
                -*.txt
            -val/
                -*.txt
            -test/
                -*.txt
# for example
datasets/
    -fracture/
        -images/
            -train/
                -00001.bmp
                -00002.bmp
                -00003.bmp
            -val/
                -00004.bmp
            -test/
                -00005.bmp
        -labels/
            -train/
                -00001.txt
                -00002.txt
            -val/
                -00004.txt
            -test/
                -00005.txt

Training

  • Original YOLOR (CSPDarknet53)
python train.py --data data/spine.yaml --cfg models/spine_yolor-p6.yaml --img-size 640 --weights yolor_p6.pt --device 0 --batch 32 --cache --epochs 300 --name yolor_p6
  • MobileViT
python train.py --data data/spine.yaml --cfg models/spine_yolor-mobileViT.yaml --img-size 640 --weights yolor_p6.pt --device 0 --batch 32 --cache --epochs 300 --name yolor_mobilevit
  • EfficientNet_NS
python train.py --data data/spine.yaml --cfg models/spine_yolor-efficientB2ns.yaml --img-size 640 --weights yolor_p6.pt --device 0 --batch 32 --cache --epochs 300 --name yolor_efficient_ns

Track training

Use wandb:

pip install wandb

After installation, the training command is the same as above, but you will need to enter your API key in the terminal. The key can be obtained from https://wandb.ai/authorize.
Please see Weights & Biases with YOLOv5 for more details.

Evaluation

  • Single model
python test.py --data data/spine.yaml --weights yolor_p6.pt --batch 16 --img-size 640 --task test --device 0
  • Model ensemble
python test.py --data data/spine.yaml --weights yolor_p6.pt yolor_mobilevit.pt yolor_efficient_ns.pt --batch 16 --img-size 640 --task test --device 0
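
Passing multiple weight files runs the models as an ensemble: every model predicts on the same image, the candidate boxes are pooled, and a single NMS pass keeps the best detections. A minimal sketch of the idea; the output layout is the usual YOLO inference convention, and this is not the repository's exact loading code:

```python
import torch
import torch.nn as nn

class Ensemble(nn.ModuleList):
    """NMS-style ensembling sketch: concatenate every model's candidate
    boxes along the detection axis, then run NMS once on the union.
    Assumes each model returns (batch, num_candidates, outputs) as the
    first element of its inference output."""
    def forward(self, x):
        return torch.cat([model(x)[0] for model in self], dim=1)
```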

Detection

  • Single model
# --source accepts different input types:
python detect.py --source datasets/images/fracture.jpg --weights yolor_p6.pt --img-size 640 --device 0 --save-txt
                          0 # webcam
                          img.jpg  # image
                          vid.mp4  # video
                          path/  # directory
  • Model ensemble
python detect.py --source datasets/images/fracture.jpg --weights yolor_p6.pt yolor_mobilevit.pt yolor_efficient_ns.pt --img-size 640 --device 0 --save-txt

Second stage classifier

Using a second stage classifier improves model accuracy and reduces false positives, but increases detection time.
You can append --classifier, --classifier-weights, --classifier-size, and --classifier-thres to the evaluation and detection commands.

More information on classifiers can be found here

# for example
# evaluation
python test.py --data data/spine.yaml --weights yolor_p6.pt --batch 16 --img-size 640 --task test --device 0 --classifier --classifier-weights model_best.pth.tar --classifier-size 96 --classifier-thres 0.6

# detect
python detect.py --source datasets/images/fracture.jpg --weights yolor_p6.pt --img-size 640 --device 0 --save-txt --classifier --classifier-weights model_best.pth.tar --classifier-size 96 --classifier-thres 0.6
| Command | Description |
|---|---|
| --classifier | Enable the second stage classifier |
| --classifier-weights | Classifier weights path |
| --classifier-size | Input image size of the classifier |
| --classifier-thres | Threshold for changing the detected class: when the classification probability exceeds this value, the classifier is considered highly confident and the original detection class is replaced by the classifier's class. |
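
Putting the flags together, the second stage crops each detected box, resizes it to --classifier-size, and only overwrites the detector's class when the classification probability passes --classifier-thres. A rough sketch of that logic, with assumed tensor layouts (not taken from the repository):

```python
import torch
import torch.nn.functional as F

def refine_detections(img, detections, classifier, size=96, thres=0.6):
    """Second-stage refinement sketch. Assumed layouts (not from the repo):
    img is a (1, C, H, W) tensor; detections is an (N, 6) tensor of
    [x1, y1, x2, y2, conf, cls] in pixels; classifier returns class logits."""
    for det in detections:
        x1, y1, x2, y2 = map(int, det[:4])
        crop = img[:, :, y1:y2, x1:x2]                 # cut out the detected box
        crop = F.interpolate(crop, size=(size, size))  # resize to --classifier-size
        prob = classifier(crop).softmax(dim=1).squeeze(0)
        conf, cls = prob.max(dim=0)
        if conf > thres:  # classifier is confident: replace the detector's class
            det[5] = cls.float()
    return detections
```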

UI

Run UI

python ui.py

For more usage, please refer to gradio
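
As a rough idea of how such a UI can wrap the detector, here is a minimal Gradio sketch; run_detector is a hypothetical stand-in for the repository's inference call, and a recent gradio version is assumed:

```python
import gradio as gr

def run_detector(image):
    # Hypothetical stand-in: run YOLOR inference on `image`
    # and return the image with predicted boxes drawn on it.
    return image

gr.Interface(fn=run_detector, inputs=gr.Image(), outputs=gr.Image()).launch()
```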

Reference