From 0380cc8f0cde5a35df59bffca9a43e34f24d64a8 Mon Sep 17 00:00:00 2001
From: Egor Panfilov
Date: Fri, 31 Aug 2018 11:19:04 +0300
Subject: [PATCH] Fixed typos in readme

---
 README.md | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index b6c67ac..055702c 100644
--- a/README.md
+++ b/README.md
@@ -5,26 +5,26 @@ Codes for paper **Automatic Knee Osteoarthritis Diagnosis from Plain Radiographs
 
 ## Background
 
-Osteoarthritis (OA) is the 11th highest disability factor and it is associated with the cartilage and bone degeneration in the joints. The most common type of OA is the knee OA and it is causing an extremly high economical burden to the society while being difficult to diagnose. In this study we present a novel Deep Learning-based clinically applicable approach to diagnose knee osteoarthritis from plain radiographs (X-ray images) outperforming existing approaches.
+Osteoarthritis (OA) is the 11th highest disability factor and it is associated with the cartilage and bone degeneration in the joints. The most common type of OA is the knee OA and it is causing an extremely high economical burden to the society while being difficult to diagnose. In this study we present a novel Deep Learning-based clinically applicable approach to diagnose knee osteoarthritis from plain radiographs (X-ray images) outperforming existing approaches.
 
 ## Benchmarks and how-to-run
 
-Here we present the training codes and the pretrained models from each of our experiments. Please, see the paper for more details.
+Here we present the training codes and the pre-trained models from each of our experiments. Please, see the paper for more details.
 
-To train the networks, we used Ubuntu 14.04, CUDA 8.0 and CuDNN v.6. For convenience, we have implemented a conda environment.
+To train the networks, we used Ubuntu 14.04, CUDA 8.0 and CuDNN v6. For convenience, we have implemented a conda environment.
 Simply install it using the script `create_conda_env.sh` and activate it as `source activate deep_knee`. To run the training, execute the corresponding bash files (validation is visualized in visdom). However, you can run the codes as they are, just use the parameters fixed in the bash scripts.
 
 ## Attention maps examples
 
-Our model learns localized radiological findings as we imposed prior anatomical knowledge to teh network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):
+Our model learns localized radiological findings as we imposed prior anatomical knowledge to the network architecture. Here are some examples of attention maps and predictions (Kellgren-Lawrence grade 2 ground truth):
 
 ## What is in here
 
-- [x] Codes for the main experiements (Supplementary information of the article)
+- [x] Codes for the main experiments (Supplementary information of the article)
 - [x] Pre-trained models
 - [x] Datasets generation scripts
 - [x] MOST and OAI cohorts bounding box annotations
@@ -36,12 +36,12 @@ To run the inference on your own DICOM data, do the following:
 
 0. Create a conda environment `deep_knee` using the script `create_conda_env.sh`.
 1. Fetch our repository [KneeLocalizer](https://github.com/MIPT-Oulu/KneeLocalizer) and get
-the file with bounding boxes, which determine the locations of the knees on the image
-2. Use the script `Dataset/crop_rois_your_dataset.py` to create the 16bit png files of left and right knees.
-Please note: the left knee will be flipped to match the right one.
-The script needs to be executed within the created environment @ the stage 0.
+the file with bounding boxes, which determine the locations of the knees in the image
+2. Use the script `Dataset/crop_rois_your_dataset.py` to create the 16bit png files of the left and right knees.
+Please note: the image of the left knee will be flipped to match the right one.
+The script needs to be executed within the environment created in step 0.
 3. Run `git lfs install && git lfs pull` to fetch the pre-trained models.
-4. Use the script `inference_own/predict.py` to produce the file with gradings
+4. Use the script `inference_own/predict.py` to produce the file with KL gradings.
 
 ## License
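
The numbered inference steps in the patched README can be sketched as a shell session. Repository layout and script names are taken from the patch text above; everything is environment-specific (conda, Git LFS, a CUDA-capable machine, your own DICOM data), and the scripts' command-line arguments are not shown in the README, so check each script's options before running:

```shell
# Step 0: create and activate the conda environment
./create_conda_env.sh
source activate deep_knee

# Step 1: fetch KneeLocalizer (separate repo) and produce the file
# with knee bounding boxes for your images
git clone https://github.com/MIPT-Oulu/KneeLocalizer

# Step 2: crop 16-bit PNGs of the left and right knees
# (left-knee images are flipped to match the right side);
# must run inside the deep_knee environment -- arguments not shown here
python Dataset/crop_rois_your_dataset.py

# Step 3: fetch the pre-trained models stored in Git LFS
git lfs install && git lfs pull

# Step 4: produce the file with KL gradings -- arguments not shown here
python inference_own/predict.py
```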