STAM (IEEE Transactions on Multimedia 2022)

Zheng Chang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Wen Gao.

Official PyTorch Code for "STAM: A SpatioTemporal Attention based Memory for Video Prediction" [paper]

Requirements

  • PyTorch 1.7
  • CUDA 11.0
  • cuDNN 8.0.5
  • Python 3.6.7

Installation

Create conda environment:

    $ conda create -n STAM python=3.6.7
    $ conda activate STAM
    $ pip install -r requirements.txt
    $ conda install pytorch==1.7 torchvision cudatoolkit=11.0 -c pytorch
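After installation, you can optionally run a quick sanity check (not part of the official instructions) to confirm that the expected PyTorch, CUDA, and cuDNN versions are visible from Python:

    # Optional environment sanity check.
    import torch

    print(torch.__version__)               # expected: 1.7.x
    print(torch.version.cuda)              # expected: 11.0
    print(torch.cuda.is_available())       # should be True on a CUDA-capable machine
    print(torch.backends.cudnn.version())  # expected: 8005 for cuDNN 8.0.5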

Download repository:

    $ git clone git@github.com:ZhengChang467/STAM_TMM.git

Unzip the Moving MNIST dataset:

    $ cd data
    $ unzip mnist_dataset.zip
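Optionally, you can inspect the extracted data to confirm it loads correctly. The snippet below is a hypothetical sketch: it assumes the archive contains a NumPy array file (here called mnist_test_seq.npy, the conventional Moving MNIST file name); adjust the path to whatever the zip actually extracts.

    # Hypothetical sanity check; the file name inside mnist_dataset.zip may differ.
    import numpy as np

    data = np.load('data/mnist_test_seq.npy')  # adjust to the actual extracted file
    # The standard Moving MNIST test set has shape (20, 10000, 64, 64):
    # 20 frames per sequence, 10000 sequences of 64x64 grayscale frames.
    print(data.shape)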

Test

Set --is_training to False in configs/mnist.py and run the following command:

    $ python STAM_run.py

Train

Set --is_training to True in configs/mnist.py and run the following command (a sketch of this flag is shown below):

    $ python STAM_run.py
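Both modes are controlled by the same flag. The exact contents of configs/mnist.py are not reproduced here; as a rough sketch (assuming an argparse-style config, which is common for video-prediction repositories of this family), the toggle might look like this:

    # Hypothetical sketch of the --is_training flag; the real configs/mnist.py may differ.
    import argparse

    def str2bool(v):
        # Accept common truthy strings so "--is_training True" behaves as expected.
        return str(v).lower() in ('true', '1', 'yes')

    parser = argparse.ArgumentParser(description='STAM on Moving MNIST (sketch)')
    parser.add_argument('--is_training', type=str2bool, default=True,
                        help='True to train, False to evaluate a trained model')
    args = parser.parse_args()
    print('training mode' if args.is_training else 'test mode')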

We plan to release the training code for the other datasets soon!

Citation

Please cite the following paper if you find this repository useful.

@article{chang2022stam,
  title={STAM: A SpatioTemporal Attention based Memory for Video Prediction},
  author={Chang, Zheng and Zhang, Xinfeng and Wang, Shanshe and Ma, Siwei and Gao, Wen},
  journal={IEEE Transactions on Multimedia},
  year={2022},
  publisher={IEEE}
}

License

See MIT License
