This pipeline example uses OpenCV to convert videos and images into edge-detected frames. The final output is a `/collage` folder containing a static HTML page that you can download and open to view the original and traced content side-by-side.

- If the videos are not in `.mp4` format (e.g., `.mov`), they are converted by the `video_mp4_converter` pipeline before being passed to the `image_flattener` pipeline. Otherwise, they are passed directly to the `image_flattener` pipeline.
- Images from the `image_flattener` output repo and the `raw_videos_and_images` input repo are processed by the `image_tracer` pipeline.
- Frames from the `image_flattener` pipeline are combined by the `movie_gifer` pipeline to create gifs.
- All content is re-shuffled into two folders (`edges` and `originals`) by the `content_shuffler` pipeline.
- The shuffled content is then used by the `content_collager` pipeline to create a collage of the original and traced content on a static HTML page that you can download and open.
gh repo clone lbliii/opencv-video-to-frametrace
cd opencv-video-to-frametrace
pachctl create project video-to-frame-traces
pachctl config update context --project video-to-frame-traces
pachctl create repo raw_videos_and_images
pachctl create pipeline -f 1_convert_videos/video_mp4_converter.yaml
pachctl create pipeline -f 2_flatten_images/image_flattener.yaml
pachctl create pipeline -f 3_trace_images/image_tracer.yaml
pachctl create pipeline -f 4_gif_images/movie_gifer.yaml
pachctl create pipeline -f 5_shuffle_content/content_shuffler.yaml
pachctl create pipeline -f 6_collage_content/content_collager.yaml
pachctl put file raw_videos_and_images@master:liberty.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/liberty.jpg
pachctl put file raw_videos_and_images@master:robot.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/robot.jpg
By default, when you first start up an instance, the `default` project is attached to your active context. Create a new project and set it on your active `pachctl` context to avoid having to specify the project name (e.g., `--project video-to-frame-traces`) in each command.
pachctl create project video-to-frame-traces
pachctl config update context --project video-to-frame-traces
At the top of our DAG, we'll need an input repo that will store our raw videos and images.
pachctl create repo raw_videos_and_images
We want to make sure that our DAG can handle videos in multiple formats, so first we'll create a pipeline that will:
- skip images
- skip videos already in the correct format (`.mp4`)
- convert videos to `.mp4` format
The converted videos will be made available to the next pipeline in the DAG via the `video_mp4_converter` output repo; the user code saves the converted files to `/pfs/out/`.
pachctl create pipeline -f 1_convert_videos/video_mp4_converter.yaml
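To make the `/pfs/out/` convention concrete, here is a rough sketch of what the converter's user code could look like. It is not the exact script shipped in the pipeline image; the paths simply follow Pachyderm's convention of reading from `/pfs/<input-repo>` and writing results to `/pfs/out`.

```python
# Hypothetical sketch of the converter's user code (the real script may differ).
import os
import cv2

INPUT_DIR = "/pfs/raw_videos_and_images"
OUTPUT_DIR = "/pfs/out"
IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

for name in os.listdir(INPUT_DIR):
    stem, ext = os.path.splitext(name)
    if ext.lower() in IMAGE_EXTS or ext.lower() == ".mp4":
        continue  # skip images and videos that are already .mp4
    cap = cv2.VideoCapture(os.path.join(INPUT_DIR, name))
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(os.path.join(OUTPUT_DIR, stem + ".mp4"),
                             cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    ok, frame = cap.read()
    while ok:
        writer.write(frame)  # re-encode the video frame by frame
        ok, frame = cap.read()
    cap.release()
    writer.release()
```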
Next, we'll create a pipeline that will flatten the frames of the videos into individual `.png` images. Like the previous pipeline, the user code outputs the frames to `/pfs/out` so that the next pipeline in the DAG can access them in the `image_flattener` repo.
pachctl create pipeline -f 2_flatten_images/image_flattener.yaml
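As a sketch of the idea, the flattener can read each video from its input mount and write one numbered `.png` per frame into a folder named after the source video. The `/pfs/video_mp4_converter` mount below is an assumption about which repo this pipeline consumes; the real user code may differ.

```python
# Hypothetical sketch of the flattener's user code (the real script may differ).
import os
import cv2

INPUT_DIR = "/pfs/video_mp4_converter"  # assumed input repo mount
OUTPUT_DIR = "/pfs/out"

for name in os.listdir(INPUT_DIR):
    stem, _ = os.path.splitext(name)
    frame_dir = os.path.join(OUTPUT_DIR, stem)
    os.makedirs(frame_dir, exist_ok=True)
    cap = cv2.VideoCapture(os.path.join(INPUT_DIR, name))
    index = 0
    ok, frame = cap.read()
    while ok:
        # one .png per frame, grouped in a folder per source video
        cv2.imwrite(os.path.join(frame_dir, f"frame_{index:05d}.png"), frame)
        index += 1
        ok, frame = cap.read()
    cap.release()
```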
Next, we'll create a pipeline that will trace the edges of the images. This pipeline will take a union of two inputs:

- the `image_flattener` repo, which contains the flattened images from the previous pipeline
- the `raw_videos_and_images` repo, which contains the original images that didn't need to be processed
pachctl create pipeline -f 3_trace_images/image_tracer.yaml
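A minimal sketch of the tracing step, assuming Canny edge detection and mounts named after the two input repos (the thresholds and exact parameters in the real user code may differ). With a union input, a given datum only sees one of the two repos under `/pfs`, so the code checks which mount is present.

```python
# Hypothetical sketch of the tracer's user code (the real script may differ).
import os
import cv2

OUTPUT_DIR = "/pfs/out"
INPUT_DIRS = ["/pfs/image_flattener", "/pfs/raw_videos_and_images"]

for input_dir in INPUT_DIRS:
    if not os.path.isdir(input_dir):
        continue  # only one union branch is mounted for a given datum
    for root, _, files in os.walk(input_dir):
        for name in files:
            if not name.lower().endswith((".png", ".jpg", ".jpeg")):
                continue  # ignore videos and other non-image files
            img = cv2.imread(os.path.join(root, name))
            if img is None:
                continue
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 100, 200)  # simple Canny edge detection
            out_dir = os.path.join(OUTPUT_DIR, os.path.relpath(root, input_dir))
            os.makedirs(out_dir, exist_ok=True)
            cv2.imwrite(os.path.join(out_dir, name), edges)
```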
Next, we'll create a pipeline that will create two gifs:

- a gif of the original video's flattened frames (from the `image_flattener` output repo)
- a gif of the video's traced frames (from the `image_tracer` output repo)
pachctl create pipeline -f 4_gif_images/movie_gifer.yaml
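One way the gif step could be implemented is sketched below with Pillow: it assumes each folder under an input mount holds the ordered frames of one video and writes one gif per video, per input repo. The mount paths and folder layout are assumptions, not the pipeline's actual code.

```python
# Hypothetical sketch of the gifer's user code (the real script may differ).
import os
from PIL import Image

OUTPUT_DIR = "/pfs/out"
INPUT_DIRS = ["/pfs/image_flattener", "/pfs/image_tracer"]

for input_dir in INPUT_DIRS:
    if not os.path.isdir(input_dir):
        continue
    out_dir = os.path.join(OUTPUT_DIR, os.path.basename(input_dir))
    os.makedirs(out_dir, exist_ok=True)
    for video_name in sorted(os.listdir(input_dir)):
        frame_dir = os.path.join(input_dir, video_name)
        if not os.path.isdir(frame_dir):
            continue
        frame_paths = sorted(
            os.path.join(frame_dir, f)
            for f in os.listdir(frame_dir)
            if f.lower().endswith(".png")
        )
        if not frame_paths:
            continue
        frames = [Image.open(p).convert("RGB") for p in frame_paths]
        # duration is milliseconds per frame; loop=0 repeats the gif forever
        frames[0].save(os.path.join(out_dir, f"{video_name}.gif"),
                       save_all=True, append_images=frames[1:],
                       duration=40, loop=0)
```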
Next, we'll create a pipeline that will re-shuffle the content from the previous pipelines into two folders:
- `edges`: contains the traced images and gifs
- `originals`: contains the original images and gifs
This helps us keep the content organized for easy access and manipulation in the next pipeline.
pachctl create pipeline -f 5_shuffle_content/content_shuffler.yaml
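The shuffling itself is just file copying into two top-level folders under `/pfs/out`. The sketch below illustrates the mechanics; the mapping of upstream repos to `edges` and `originals` is an assumption, and the real code also has to split the `movie_gifer` output between the two folders depending on which frames each gif was built from.

```python
# Hypothetical sketch of the shuffler's user code (the real script may differ).
import os
import shutil

OUTPUT_DIR = "/pfs/out"
SOURCES = {                                # assumed repo-to-folder routing
    "/pfs/image_tracer": "edges",          # traced frames
    "/pfs/image_flattener": "originals",   # original flattened frames
    "/pfs/raw_videos_and_images": "originals",
}

for input_dir, dest in SOURCES.items():
    if not os.path.isdir(input_dir):
        continue
    for root, _, files in os.walk(input_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, input_dir)
            dst = os.path.join(OUTPUT_DIR, dest, rel)  # keep per-video subfolders
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy(src, dst)
```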
Finally, we'll create a pipeline that will create a static HTML page that you can download and open to view the original and traced content side-by-side.
pachctl create pipeline -f 6_collage_content/content_collager.yaml
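A sketch of how such a page could be generated is below. It assumes the shuffled content is mounted at `/pfs/content_shuffler` and that matching filenames exist in `edges` and `originals`; the real user code and page layout may differ.

```python
# Hypothetical sketch of the collager's user code (the real script may differ).
import os
import shutil

INPUT_DIR = "/pfs/content_shuffler"
OUTPUT_DIR = "/pfs/out/collage"
os.makedirs(OUTPUT_DIR, exist_ok=True)

originals_dir = os.path.join(INPUT_DIR, "originals")
edges_dir = os.path.join(INPUT_DIR, "edges")

rows = []
for name in sorted(os.listdir(originals_dir)):
    traced = os.path.join(edges_dir, name)
    if not os.path.isfile(traced):
        continue  # only show items that have a traced counterpart
    # copy assets next to the page so it still renders after download
    shutil.copy(os.path.join(originals_dir, name),
                os.path.join(OUTPUT_DIR, "original_" + name))
    shutil.copy(traced, os.path.join(OUTPUT_DIR, "edges_" + name))
    rows.append(f'<div><img src="original_{name}" width="45%">'
                f'<img src="edges_{name}" width="45%"></div>')

html = "<html><body><h1>Originals vs. traced</h1>\n" + "\n".join(rows) + "\n</body></html>"
with open(os.path.join(OUTPUT_DIR, "index.html"), "w") as f:
    f.write(html)
```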
Now that we have our DAG set up, we can add some videos and images to the `raw_videos_and_images` repo to see the pipeline in action.
pachctl put file raw_videos_and_images@master: -f <file-or-URL>
pachctl put file raw_videos_and_images@master:liberty.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/liberty.jpg
pachctl put file raw_videos_and_images@master:robot.png -f https://raw.githubusercontent.com/pachyderm/docs-content/main/images/opencv/robot.jpg
This is based on Reid's Pachd_Pipelines repo, which extends the basic OpenCV example to support the conversion of videos to jpg frames.