About

Code for the paper *Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach*.

Tiulpin, A., Thevenot, J., Rahtu, E., Lehenkari, P., & Saarakkala, S. (2018). Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach. Scientific reports, 8(1), 1727.

Background

Osteoarthritis (OA) is the 11th highest disability factor and is associated with cartilage and bone degeneration in the joints. The most common type of OA is knee OA, which causes an extremely high economic burden to society while being difficult to diagnose. In this study, we present a novel, clinically applicable Deep Learning-based approach to diagnose knee osteoarthritis from plain radiographs (X-ray images) that outperforms existing approaches.

Benchmarks and how-to-run

Here we provide the training code and the pretrained models from each of our experiments. Please see the paper for more details.

To train the networks, we used Ubuntu 14.04, CUDA 8.0 and cuDNN v6. The following dependencies also need to be installed:

  • Python 3.6
  • pytorch < 0.4.0 with CUDA support
  • PIL
  • matplotlib
  • Jupyter Notebook (to work with attention maps)
  • tqdm
  • visdom
  • numpy
  • termcolor
  • torchvision
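As a quick sanity check before training, a small helper (not part of this repository) can verify that the listed packages are importable from your environment:

```python
import importlib

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

required = ["torch", "torchvision", "PIL", "matplotlib",
            "tqdm", "visdom", "numpy", "termcolor"]
# An empty list means all dependencies are importable.
print(missing_packages(required))
```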

We recommend creating a virtual environment with the aforementioned packages. To run the training, execute the corresponding bash files (validation is visualized in visdom). Before running, edit the beginning of the file to activate your virtual environment.

Alternatively, you can run the scripts as they are; just use the parameters fixed in the bash scripts.
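The setup above can be sketched as follows. Script names such as `train.sh` are placeholders here; substitute the bash file that corresponds to your experiment:

```shell
# Create and activate a virtual environment for the dependencies.
python3 -m venv deepknee_env
. deepknee_env/bin/activate
# Inside the environment, install the dependencies listed above, e.g.:
#   pip install "torch<0.4.0" torchvision pillow matplotlib tqdm visdom numpy termcolor jupyter
# Start the visdom server so validation plots can be rendered:
#   python -m visdom.server &
# Edit the beginning of the corresponding bash file to activate this
# environment, then launch it, e.g.:
#   bash train.sh
deactivate
```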

Attention maps examples

Our model learns localized radiological findings, as we imposed prior anatomical knowledge on the network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):
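A minimal sketch of how such an overlay can be produced with matplotlib. The random arrays below are stand-ins for a real knee ROI and the attention map produced in the Jupyter notebook, and the sizes are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Placeholders: in practice, load the radiograph ROI and the
# attention map from the model instead of random data.
roi = np.random.rand(128, 128)       # grayscale knee ROI
attention = np.random.rand(16, 16)   # coarse attention map

fig, ax = plt.subplots()
ax.imshow(roi, cmap="gray")
# Stretch the coarse attention map over the ROI via `extent` and
# blend it semi-transparently on top.
ax.imshow(attention, cmap="jet", alpha=0.4,
          extent=(0, 128, 128, 0), interpolation="bilinear")
ax.axis("off")
fig.savefig("attention_overlay.png", bbox_inches="tight")
```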

What is in here

  • Code for the main experiments (Supplementary information of the article)
  • Pre-trained models
  • Datasets generation scripts
  • MOST and OAI cohorts bounding box annotations

Inference for your own data

To run the inference on your own DICOM data, do the following:

  1. Create a conda environment `deep_knee` using the script `create_conda_env.sh`.
  2. Fetch our repository KneeLocalizer and obtain the file with bounding boxes, which determine the locations of the knees in the image.
  3. Use the script `Dataset/crop_rois_your_dataset.py` to create the 16-bit PNG files of the left and right knees. Please note: the left knee will be flipped to match the right one. The script needs to be executed within the conda environment created in the first step.
  4. Use the script `inference_own/predict.py` to produce the file with gradings.
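The cropping and mirroring in step 3 can be illustrated with a self-contained sketch. The `(x1, y1, x2, y2)` box format, array shapes, and function name are assumptions for illustration, not the exact interface of `Dataset/crop_rois_your_dataset.py`:

```python
import numpy as np
from PIL import Image

def save_knee_rois(xray, left_box, right_box, left_path, right_path):
    """Crop the left/right knee ROIs from a 16-bit radiograph (numpy array)
    and save them as 16-bit PNGs. The left knee is mirrored horizontally so
    both ROIs share the right-knee orientation. Boxes are (x1, y1, x2, y2)
    pixel coordinates (hypothetical format)."""
    def crop(box):
        x1, y1, x2, y2 = box
        return xray[y1:y2, x1:x2]

    right = crop(right_box)
    left = crop(left_box)[:, ::-1]  # horizontal flip of the left knee
    for roi, path in ((left, left_path), (right, right_path)):
        Image.fromarray(roi.astype(np.uint16)).save(path)

# Toy example on a synthetic 16-bit image.
img = (np.random.rand(200, 400) * 65535).astype(np.uint16)
save_knee_rois(img, (0, 50, 100, 150), (300, 50, 400, 150),
               "left_knee.png", "right_knee.png")
```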

License

This code is freely available only for research purposes.

How to cite

```
@article{tiulpin2018automatic,
  title={Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach},
  author={Tiulpin, Aleksei and Thevenot, J{\'e}r{\^o}me and Rahtu, Esa and Lehenkari, Petri and Saarakkala, Simo},
  journal={Scientific reports},
  volume={8},
  number={1},
  pages={1727},
  year={2018},
  publisher={Nature Publishing Group}
}
```