Multiclass Skin Lesion Localization and Detection with the YOLOv7-XAI Framework (Explainable AI)


Official YOLOv7-XAI

Explainable AI (XAI) aims to make machine learning models transparent by providing clear, understandable explanations of their decisions. This enhances trust, accountability, and the ability to debug and improve AI systems.
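The explanation method used by YOLOv7-XAI is described in the paper. Purely as an illustration of what a saliency-style explanation looks like, the sketch below computes a Grad-CAM-style heatmap for a generic torchvision classifier; the model, target layer, and random input are placeholder assumptions and not this repository's pipeline.

# Illustrative Grad-CAM-style saliency map on a generic CNN classifier.
# NOT the YOLOv7-XAI explanation pipeline; model, target layer, and the
# random input below are placeholders for a preprocessed lesion image.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
feats, grads = {}, {}

def save_activation(module, inputs, output):
    feats["a"] = output                                  # feature maps of the target layer
    output.register_hook(lambda g: grads.update(a=g))    # gradients w.r.t. those maps

model.layer4.register_forward_hook(save_activation)

x = torch.rand(1, 3, 224, 224)                           # stand-in for a lesion image
scores = model(x)
scores[0, scores.argmax()].backward()                    # gradient of the top-class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)      # channel-wise importance
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # heatmap normalized to [0, 1]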

The workflow architecture implemented in this research is published and available online (download link).

Class details can be found in the data.yaml file.
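As a rough sketch, a YOLO-style data.yaml for the eight HAM10000 classes reported below typically looks like this (the paths are placeholders; the actual file in this repository may differ):

# illustrative layout only; paths and class order may differ from the repository's data.yaml
train: ./data/train/images
val: ./data/val/images
nc: 8
names: ['AK', 'BCC', 'BKL', 'DF', 'MEL', 'NV', 'SCC', 'VASC']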

Results (training batch)


Performance

HAM10000 Dataset

| Model      | AK   | BCC  | BKL  | DF   | MEL  | NV   | SCC  | VASC |
|------------|------|------|------|------|------|------|------|------|
| YOLOv7-XAI | 98.9 | 97.4 | 96.5 | 96.7 | 97.4 | 96.0 | 96.4 | 95.0 |
| YOLOv7     | 95.8 | 94.9 | 96.0 | 94.1 | 94.2 | 92.3 | 95.2 | 94.5 |
| YOLOv6     | 94.8 | 95.3 | 94.4 | 93.9 | 93.4 | 90.8 | 94.4 | 93.0 |


Installation

Docker environment (recommended)

# create the docker container; adjust the shared-memory size (--shm-size) if more is available
nvidia-docker run --name YOLOv7-XAI -it -v your_coco_path/:/coco/ -v your_code_path/:/YOLOv7-XAI --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3

# apt install required packages
apt update
apt install -y zip htop screen libgl1-mesa-glx

# pip install required packages
pip install seaborn thop

# go to code folder
cd /YOLOv7-XAI
python test.py --data data/data.yaml --img 640 --batch 32 --conf 0.001 --iou 0.65 --device 0 --weights YOLOv7-XAI.pt --name YOLOv7-XAI640_val

To measure accuracy, download

Training

Data preparation

Annotate the dataset with the labelImg annotation tool and export labels in YOLO format (see the example label line below).
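For reference, a YOLO-format label file contains one line per object: a class index followed by the normalized box centre coordinates and size. An illustrative, made-up label line for a single lesion:

# <class_id> <x_center> <y_center> <width> <height>, all values normalized to [0, 1]
4 0.512 0.438 0.270 0.315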

Single GPU training

# train p5 models
python train.py --workers 8 --device 0 --batch-size 32 --data data/data.yaml --img 640 640 --cfg cfg/training/YOLOv7-XAI.yaml --weights '' --name YOLOv7-XAI --hyp data/hyp.scratch.p5.yaml

# train p6 models
python train_aux.py --workers 8 --device 0 --batch-size 16 --data data/data.yaml --img 1280 1280 --cfg cfg/training/YOLOv7-XAI-w6.yaml --weights '' --name YOLOv7-XAI-w6 --hyp data/hyp.scratch.p6.yaml

Multiple GPU training

# train p5 models
python -m torch.distributed.launch --nproc_per_node 4 --master_port 9527 train.py --workers 8 --device 0,1,2,3 --sync-bn --batch-size 128 --data data/data.yaml --img 640 640 --cfg cfg/training/YOLOv7-XAI.yaml --weights '' --name YOLOv7-XAI --hyp data/hyp.scratch.p5.yaml

# train p6 models
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train_aux.py --workers 8 --device 0,1,2,3,4,5,6,7 --sync-bn --batch-size 128 --data data/data.yaml --img 1280 1280 --cfg cfg/training/YOLOv7-XAI-w6.yaml --weights '' --name YOLOv7-XAI-w6 --hyp data/hyp.scratch.p6.yaml

Inference

On image:

python detect.py --weights YOLOv7-XAI.pt --conf 0.25 --img-size 640 --source inference/images/ISIC12399.jpg
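On video or webcam: detect.py is inherited from the upstream YOLOv7 codebase and should also accept a video file or a webcam index via --source (the file name below is a placeholder):

python detect.py --weights YOLOv7-XAI.pt --conf 0.25 --img-size 640 --source your_video.mp4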

Export

PyTorch to CoreML, and inference on macOS/iOS (Colab notebook available)

PyTorch to ONNX with NMS, and inference (Colab notebook available)

python export.py --weights YOLOv7-XAI-tiny.pt --grid --end2end --simplify \
        --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640
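To sanity-check the exported model, a minimal sketch using ONNX Runtime is shown below; it assumes onnxruntime, opencv-python, and numpy are installed, the model was exported as above, and a test image exists at the path shown. The exact output layout depends on the export flags used.

# Minimal sanity check of the exported ONNX model with ONNX Runtime.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("YOLOv7-XAI-tiny.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

img = cv2.imread("inference/images/ISIC12399.jpg")         # BGR uint8
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0
img = np.ascontiguousarray(img.transpose(2, 0, 1)[None])   # NCHW, batch of 1

outputs = session.run(None, {input_name: img})
for out in outputs:
    print(out.shape)  # detection outputs; exact layout depends on export flags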

PyTorch to TensorRT with NMS, and inference (Colab notebook available)

wget https://github.com/Nirmala-research/YOLOv7-XAI/releases/download/v0.1/YOLOv7-XAI-tiny.pt
python export.py --weights ./YOLOv7-XAI-tiny.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
git clone https://github.com/Linaom1214/tensorrt-python.git
python ./tensorrt-python/export.py -o YOLOv7-XAI-tiny.onnx -e YOLOv7-XAI-tiny-nms.trt -p fp16

PyTorch to TensorRT, another way (Colab notebook available)


wget https://github.com/Nirmala-research/YOLOv7-XAI/releases/download/v0.1/YOLOv7-XAI-tiny.pt
python export.py --weights YOLOv7-XAI-tiny.pt --grid --include-nms
git clone https://github.com/Linaom1214/tensorrt-python.git
python ./tensorrt-python/export.py -o YOLOv7-XAI-tiny.onnx -e YOLOv7-XAI-tiny-nms.trt -p fp16

# Or use trtexec to convert ONNX to TensorRT engine
/usr/src/tensorrt/bin/trtexec --onnx=YOLOv7-XAI-tiny.onnx --saveEngine=YOLOv7-XAI-tiny-nms.trt --fp16

Tested with: Python 3.7.13, PyTorch 1.12.0+cu113

Acknowledgements
