Hi,
I tried to run inference on my own videos by simply putting them in the /visualization/videos folder and then running the provided scripts in this repo.
However, when the model is loaded (Loading model from save/anet_tsp_pdvc/model-best.pth), my terminal shows this error:
visualization/output/r2plus1d_34-tsp_on_activitynet_stride_16/sample_vid.npy not exists, use zero padding.
all feature files of video sample-vid do not exist
The generated captions in dvc_results.json then just describe a black screen, a white screen, or a credits scene, which I assume is due to the zero padding.
It seems there is a problem when extracting features from my videos, but I am not sure. Is there a step I might have missed, or one that is not included in the scripts?
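A minimal sanity check, assuming the feature path printed in the error above (the video name "sample_vid" is just a placeholder for my own file), would be to confirm whether the .npy feature file was actually written and contains non-zero features:

```python
# Minimal sketch: check that the TSP feature file exists and is not all zeros.
# The path is assumed from the error message above; replace "sample_vid"
# with the actual video filename (without extension).
import os
import numpy as np

feature_path = "visualization/output/r2plus1d_34-tsp_on_activitynet_stride_16/sample_vid.npy"

if not os.path.exists(feature_path):
    print(f"Feature file not found: {feature_path} -- feature extraction likely failed or wrote elsewhere.")
else:
    feats = np.load(feature_path)
    # A usable feature array should have shape (num_clips, feature_dim) and not be all zeros.
    print("feature shape:", feats.shape, "| all zeros:", not np.any(feats))
```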
Any help is appreciated. Thank you~