Time Is MattEr: Temporal Self-supervision for Video Transformers

Title
Time Is MattEr: Temporal Self-supervision for Video Transformers
Author
Sukmin Yun
Issue Date
2022-07-19
Publisher
Proceedings of Machine Learning Research
Citation
Proceedings of the 39th International Conference on Machine Learning, v. 162, pp. 25804-25816
Abstract
Understanding the temporal dynamics of video is an essential aspect of learning better video representations. Recently, transformer-based architectures have been extensively explored for video tasks due to their capability to capture long-term dependencies in input sequences. However, we found that these Video Transformers are still biased toward learning spatial dynamics rather than temporal ones, and debiasing this spurious correlation is critical to their performance. Based on these observations, we design simple yet effective self-supervised tasks that help video models learn temporal dynamics better. Specifically, to counter the spatial bias, our method learns the temporal order of video frames as extra self-supervision and enforces low-confidence outputs on randomly shuffled frames. Our method also learns the temporal flow direction of video tokens across consecutive frames to strengthen the correlation with temporal dynamics. We demonstrate the effectiveness of our method, and its compatibility with state-of-the-art Video Transformers, on various video action recognition tasks.
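To make the two objectives concrete, below is a minimal PyTorch sketch, not the authors' released implementation. It assumes a hypothetical backbone interface that returns per-frame features (`return_frame_features=True`), and it realizes "low-confidence outputs" as a KL divergence toward the uniform class distribution; all module names, shapes, and that formulation are illustrative assumptions. The token flow-direction objective is omitted for brevity.

```python
# Minimal sketch of the frame-order and low-confidence (debiasing) objectives.
# Assumptions (not from the paper's code): the backbone exposes per-frame
# features via `return_frame_features=True`; debiasing uses a uniform-target KL.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalSelfSupervision(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, num_frames):
        super().__init__()
        self.backbone = backbone                           # any Video Transformer (hypothetical interface)
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.order_head = nn.Linear(feat_dim, num_frames)  # predicts each frame's original position

    def forward(self, video):
        # video: (B, T, C, H, W)
        B, T = video.shape[:2]

        # Task 1: predict the temporal order of randomly shuffled frames.
        perm = torch.randperm(T, device=video.device)
        shuffled = video[:, perm]
        frame_feats = self.backbone(shuffled, return_frame_features=True)  # (B, T, D), assumed API
        order_logits = self.order_head(frame_feats)                        # (B, T, T)
        order_target = perm.unsqueeze(0).expand(B, T)                      # frame i came from position perm[i]
        loss_order = F.cross_entropy(order_logits.flatten(0, 1), order_target.flatten())

        # Debiasing: shuffled clips should yield low-confidence class outputs,
        # pushed here toward the uniform distribution via KL divergence.
        shuffled_logits = self.cls_head(frame_feats.mean(dim=1))           # (B, num_classes)
        uniform = torch.full_like(shuffled_logits, 1.0 / shuffled_logits.size(-1))
        loss_debias = F.kl_div(F.log_softmax(shuffled_logits, dim=-1),
                               uniform, reduction="batchmean")

        return loss_order, loss_debias
```

In practice these two losses would be added, with suitable weights, to the standard classification loss on unshuffled clips.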
URI
https://arxiv.org/abs/2207.09067
https://repository.hanyang.ac.kr/handle/20.500.11754/191685
DOI
https://doi.org/10.48550/arXiv.2207.09067
Appears in Collections:
ETC[S] > Research Information
Files in This Item:
There are no files associated with this item.