Commit: Fixed typos in readme
soupault committed Aug 31, 2018
1 parent 6d130ac commit 0380cc8
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions README.md
@@ -5,26 +5,26 @@ Codes for paper **Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs

## Background

-Osteoarthritis (OA) is the 11th highest disability factor and it is associated with the cartilage and bone degeneration in the joints. The most common type of OA is the knee OA and it is causing an extremly high economical burden to the society while being difficult to diagnose. In this study we present a novel Deep Learning-based clinically applicable approach to diagnose knee osteoarthritis from plain radiographs (X-ray images) outperforming existing approaches.
+Osteoarthritis (OA) is the 11th highest disability factor and it is associated with the cartilage and bone degeneration in the joints. The most common type of OA is the knee OA and it is causing an extremely high economical burden to the society while being difficult to diagnose. In this study we present a novel Deep Learning-based clinically applicable approach to diagnose knee osteoarthritis from plain radiographs (X-ray images) outperforming existing approaches.

## Benchmarks and how-to-run

-Here we present the training codes and the pretrained models from each of our experiments. Please, see the paper for more details.
+Here we present the training codes and the pre-trained models from each of our experiments. Please, see the paper for more details.

-To train the networks, we used Ubuntu 14.04, CUDA 8.0 and CuDNN v.6. For convenience, we have implemented a conda environment.
+To train the networks, we used Ubuntu 14.04, CUDA 8.0 and CuDNN v6. For convenience, we have implemented a conda environment.
Simply install it using the script `create_conda_env.sh` and activate it as `source activate deep_knee`.
To run the training, execute the corresponding bash files (validation is visualized in visdom).

However, you can run the codes as they are; just use the parameters fixed in the bash scripts.
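The setup and training flow described above can be sketched as a short shell session. The script `create_conda_env.sh`, the environment name `deep_knee`, and the visdom-based validation visualization come from the README itself; `train_own.sh` is a hypothetical placeholder for "the corresponding bash files", whose actual names are not given here.

```shell
# Sketch of the training workflow, under the assumptions stated above.
bash create_conda_env.sh        # build the conda environment (from the README)
source activate deep_knee       # activate it (from the README)
python -m visdom.server &       # start visdom so validation can be visualized
bash train_own.sh               # hypothetical: run one of the training bash scripts
```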

## Attention maps examples
-Our model learns localized radiological findings as we imposed prior anatomical knowledge to teh network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):
+Our model learns localized radiological findings as we imposed prior anatomical knowledge to the network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):

<img src="https://github.com/lext/DeepKnee/blob/master/pics/15_2_R_1_1_1_3_1_0_own.jpg" width="260"/> <img src="https://github.com/lext/DeepKnee/blob/master/pics/235_2_R_3_3_0_0_1_1_own.jpg" width="260"/> <img src="https://github.com/lext/DeepKnee/blob/master/pics/77_2_R_2_0_0_0_0_1_own.jpg" width="260"/>

## What is in here

-- [x] Codes for the main experiements (Supplementary information of the article)
+- [x] Codes for the main experiments (Supplementary information of the article)
- [x] Pre-trained models
- [x] Datasets generation scripts
- [x] MOST and OAI cohorts bounding box annotations
@@ -36,12 +36,12 @@ To run the inference on your own DICOM data, do the following:

0. Create a conda environment `deep_knee` using the script `create_conda_env.sh`.
1. Fetch our repository [KneeLocalizer](https://github.com/MIPT-Oulu/KneeLocalizer) and get
-the file with bounding boxes, which determine the locations of the knees on the image
-2. Use the script `Dataset/crop_rois_your_dataset.py` to create the 16bit png files of left and right knees.
-Please note: the left knee will be flipped to match the right one.
-The script needs to be executed within the created environment @ the stage 0.
+the file with bounding boxes, which determine the locations of the knees in the image
+2. Use the script `Dataset/crop_rois_your_dataset.py` to create the 16bit png files of the left and right knees.
+Please note: the image of the left knee will be flipped to match the right one.
+The script needs to be executed within the environment created in step 0.
3. Run `git lfs install && git lfs pull` to fetch the pre-trained models.
-4. Use the script `inference_own/predict.py` to produce the file with gradings
+4. Use the script `inference_own/predict.py` to produce the file with KL gradings.
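Steps 0–4 above can be sketched end to end as follows. The scripts, the environment name, and the `git lfs` commands come from the README; the KneeLocalizer invocation is only summarized in a comment (see that repository for its actual usage), and the `...` argument placeholders are deliberate, since the scripts' command-line options are not documented here.

```shell
# Hypothetical end-to-end inference sketch, under the assumptions stated above.
bash create_conda_env.sh && source activate deep_knee   # step 0: conda environment
# step 1: produce the bounding-box file with MIPT-Oulu/KneeLocalizer (see its README)
python Dataset/crop_rois_your_dataset.py ...            # step 2: 16bit pngs; left knees are flipped
git lfs install && git lfs pull                         # step 3: fetch pre-trained models
python inference_own/predict.py ...                     # step 4: produce the KL gradings file
```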

## License

