Docker is the recommended solution for running this project, as GPU installations of Tensorflow can be tricky to set up, especially on Windows. The steps below are for using the Docker image through VS Code, but the image can also be run directly from the command line.
- Install the Dev Containers extension in VS Code.
- If Docker Desktop is not already installed on your machine, run `Dev Containers: Install Docker` from the command palette. The installation process will continue outside of VS Code and may take a few minutes and require a system restart.
- For Windows users, install and enable WSL 2 for Docker by following this guide. The basic steps are outlined below.
- Install WSL 2 from the Microsoft Store.
- Run `wsl --install -d Ubuntu` in a terminal to install Ubuntu. Other distros may work, but haven't been tested.
- Open Docker Desktop settings and enable Ubuntu via `Resources > WSL Integration`.
- Enter a WSL shell by running `wsl` from a terminal. Run `docker --version` to confirm that Docker is available.
- Run `Dev Containers: Clone Repository in Container Volume...` from the command palette and enter the GitHub URL of this repository. This will clone the repository into a Docker volume and open it in a containerized VS Code instance, which may take a few minutes the first time. A folder will be created at `~/naip-cnn` in your local filesystem to store data and models generated by the project. On subsequent runs, you can re-open the image from the `Remote Explorer` tab in VS Code.
If you do not want to use Docker, you can follow this guide and then run `pip install -e .` to install the required packages.
Once your environment is set up, you can check that Tensorflow is working and GPU support is enabled by running the code below:
```python
import tensorflow as tf

# list_physical_devices returns an empty list if no GPU is detected,
# which makes the assertion fail.
assert tf.config.list_physical_devices('GPU')
```
If you have multiple GPUs and want to use a specific one, you can set the `CUDA_VISIBLE_DEVICES` environment variable to the index of the GPU you want to use, e.g. `export CUDA_VISIBLE_DEVICES=0` for the first GPU.
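
The same variable can also be set from inside a Python script. A minimal sketch (the GPU index `"0"` is just an example): the key point is that the variable must be set before TensorFlow is imported, since CUDA device discovery happens at import time.

```python
import os

# Expose only the first GPU to TensorFlow. This must happen *before*
# `import tensorflow`, because later changes to CUDA_VISIBLE_DEVICES
# are ignored once devices have been initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # import only after setting the variable
```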