This repository provides a semantic segmentation inference API for videos, built on top of the BMW-IntelOpenVINO-Segmentation-Inference-API.
- OS:
  - Ubuntu 20.04
  - Windows 10 Pro/Enterprise
- Docker
To check if you have docker-ce installed:

```sh
docker --version
```
Use the following command to install Docker on Ubuntu:

```sh
chmod +x install_prerequisites.sh && source install_prerequisites.sh
```
To install Docker on Windows, follow the official Docker Desktop for Windows installation guide.
To build the project, run the following command from the project's root directory:

```sh
docker build -t video_inference_api -f docker/Dockerfile .
```
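If the build succeeds, the image should appear in your local image list; this is just a quick sanity check:

```sh
# List the freshly built image to confirm the build succeeded
docker image ls video_inference_api
```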
To deploy and run the API using Docker, go to the project's directory and start the container with one of the following equivalent commands, depending on your shell.

On Ubuntu (bash):

```sh
docker run -itv $(pwd)/data:/data -p <docker_host_port>:8080 video_inference_api
```

On Windows PowerShell:

```sh
docker run -itv ${PWD}/data:/data -p <docker_host_port>:8080 video_inference_api
```

On Windows CMD:

```sh
docker run -itv %cd%/data:/data -p <docker_host_port>:8080 video_inference_api
```
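For instance, assuming host port 4343 (an arbitrary free port, chosen here only for illustration) on a Linux machine, the command would be:

```sh
# Example only: 4343 is an arbitrary free host port of your choice
docker run -itv $(pwd)/data:/data -p 4343:8080 video_inference_api
```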
If this API needs to communicate with other containers, make sure all containers are on the same network by adding the following option to the docker run command:

```sh
--net <network_name>
```
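As a sketch, assuming a user-defined bridge network named inference_net (the name is a placeholder of your choice):

```sh
# "inference_net" is a placeholder name; create the bridge network once
docker network create inference_net

# Attach the container to that network when running it
docker run -itv $(pwd)/data:/data -p 4343:8080 --net inference_net video_inference_api
```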
Refer to "Use bridge networks" in the Docker documentation for more information about bridge networks.
- `<docker_host_port>` can be any available port of your choice on the host machine.
- `<network_name>` is the name of a user-defined bridge network of your choice.
The API will start automatically, and the service will listen for HTTP requests on the chosen port.
To see all available endpoints, open your favorite browser and navigate to:

```
http://<machine_IP>:<docker_host_port>/docs
```
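For a quick check from the command line that the service is up (assuming the example host port 4343 used above):

```sh
# A 200 OK response means the API is reachable
curl -I http://localhost:4343/docs
```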
Performs inference on a video using the specified model and returns the resulting video as a response.
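As a minimal sketch of such a request: the endpoint path `/infer`, the form field names `model_name` and `video`, and the model name `my_model` below are assumptions, not the API's confirmed schema; check the `/docs` page for the exact endpoint and parameters.

```sh
# Hypothetical request: the path and field names are assumptions;
# consult http://<machine_IP>:<docker_host_port>/docs for the real schema.
curl -X POST "http://localhost:4343/infer" \
     -F "model_name=my_model" \
     -F "video=@input.mp4" \
     --output segmented_output.mp4
```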
Refer to the BMW-IntelOpenVINO-Segmentation-Inference-API repository for more information on how to add models.