Codes for paper **Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs: A Deep Learning-Based Approach**

## Background

Osteoarthritis (OA) is the 11th highest disability factor and is associated with cartilage and bone degeneration in the joints. The most common type of OA is knee OA, which causes an extremely high economic burden to society while being difficult to diagnose. In this study, we present a novel, clinically applicable Deep Learning-based approach to diagnosing knee osteoarthritis from plain radiographs (X-ray images) that outperforms existing approaches.

## Benchmarks and how-to-run

Here we present the training codes and the pretrained models from each of our experiments. Please see the paper for more details.

To train the networks, we used Ubuntu 14.04, CUDA 8.0 and cuDNN v6. The other dependencies that need to be installed are listed below (a sketch of one possible setup follows the list):

* Python 3
* pytorch with CUDA support
* termcolor
* torchvision
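
As a rough illustration (not an official setup script: the environment path is an assumption, and only packages named in this README are installed), an environment could be set up like this:

```bash
# Create and activate an isolated Python 3 environment
# (the path ~/envs/knee_oa is just an example).
python3 -m venv ~/envs/knee_oa
source ~/envs/knee_oa/bin/activate

# Install PyTorch with CUDA support by following the instructions at
# https://pytorch.org for your CUDA version (CUDA 8.0 in our setup), then:
pip install torchvision termcolor visdom
```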

We recommend creating a virtual environment with the aforementioned packages. To run the training, execute the corresponding bash files (validation is visualized in visdom). Before running, edit the beginning of each file to activate your virtual environment, as shown in the sketch below.

However, you can also run the codes as they are and simply use the parameters fixed in the bash scripts.
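
For example, a training run might look like the following (the script name `train_network.sh` is hypothetical; use the actual bash files shipped with this repository):

```bash
# Start the visdom server so the validation plots can be viewed
# in the browser (http://localhost:8097 by default).
python -m visdom.server &

# Each training script should activate your virtual environment near its
# top, e.g. with a line such as:
#   source ~/envs/knee_oa/bin/activate
bash train_network.sh
```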

## Attention maps examples
Our model learns localized radiological findings because we imposed prior anatomical knowledge on the network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):

## TODO

- [x] Codes for the main experiments (Supplementary information of the article)
- [ ] MOST and OAI cohorts bounding box annotations
- [ ] Pre-trained models
- [ ] Datasets generation scripts
- [ ] More benchmarks with the pre-trained models (e.g. DenseNets and deeper ResNets)
- [ ] Better documentation

## License

This code is freely available for research purposes only.
