CONTRIBUTING.md

# Contributing

## Setup

### Using Docker

Docker is the recommended way to run this project, as GPU installations of TensorFlow can be tricky to set up, especially on Windows. The steps below are for using the Docker image through VS Code, but the image can also be run directly from the command line.

1. Install the Dev Containers extension in VS Code.
2. If Docker Desktop is not already installed on your machine, run `Dev Containers: Install Docker` from the command palette. The installation process will continue outside of VS Code and may take a few minutes and require a system restart.
3. For Windows users, install and enable WSL 2 for Docker by following this guide. The basic steps are outlined below.
   1. Install WSL 2 from the Microsoft Store.
   2. Run `wsl --install -d Ubuntu` in a terminal to install Ubuntu. Other distros may work, but haven't been tested.
   3. Open Docker Desktop settings and enable Ubuntu via Resources > WSL Integration.
   4. Enter a WSL shell by running `wsl` from a terminal. Run `docker --version` to confirm that Docker is available.
4. Run `Dev Containers: Clone Repository in Container Volume...` from the command palette and enter the GitHub URL of this repository. This will clone the repository into a Docker volume and open it in a containerized VS Code instance, which may take a few minutes the first time. A folder will be created at `~/naip-cnn` in your local filesystem to store data and models generated by the project. On subsequent runs, you can re-open the image from the Remote Explorer tab in VS Code.
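
For context, a dev container configuration with GPU access generally resembles the sketch below. The file contents here are an illustrative assumption, not this repository's actual configuration, which ships in its own `.devcontainer` folder; the image name and options are examples only.

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch only)
{
  "name": "naip-cnn",
  // A CUDA-enabled TensorFlow image is assumed here:
  "image": "tensorflow/tensorflow:latest-gpu",
  // Pass the host's GPUs through to the container:
  "runArgs": ["--gpus", "all"],
  "hostRequirements": { "gpu": true }
}
```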

### Without Docker

If you do not want to use Docker, you can follow this guide and then run `pip install -e .` to install the required packages.

## Checking Your Setup

Once your environment is set up, you can check that TensorFlow is working and GPU support is enabled by running the code below:

```python
import tensorflow as tf

assert tf.config.list_physical_devices('GPU')
```

If you have multiple GPUs and want to use a specific one, you can set the `CUDA_VISIBLE_DEVICES` environment variable to the index of the GPU you want to use, e.g. `export CUDA_VISIBLE_DEVICES=0` for the first GPU.
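
The variable can also be set from within Python, as long as it happens before TensorFlow is imported. A minimal sketch:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before TensorFlow is imported,
# or it will have no effect on which devices TensorFlow sees.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # use only the first GPU

# An empty string hides all GPUs, forcing CPU execution:
# os.environ["CUDA_VISIBLE_DEVICES"] = ""
```

After setting the variable, `import tensorflow` and re-run the GPU check above to confirm that only the intended device is visible.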