MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation
Shijie Li¹·², Yazan Abu Farha², Yun Liu¹, Ming-Ming Cheng¹, Juergen Gall²
¹TKLNDST, CS, Nankai University  ²University of Bonn
1. Abstract
With the success of deep learning in classifying short trimmed videos, more attention has been focused on temporally segmenting and classifying activities in long untrimmed videos. State-of-the-art approaches for action segmentation utilize several layers of temporal convolution and temporal pooling. Despite the capabilities of these approaches in capturing temporal dependencies, their predictions suffer from over-segmentation errors. In this paper, we propose a multi-stage architecture for the temporal action segmentation task that overcomes the limitations of the previous approaches. The first stage generates an initial prediction that is refined by the next ones. In each stage we stack several layers of dilated temporal convolutions covering a large receptive field with few parameters. While this architecture already performs well, lower layers still suffer from a small receptive field. To address this limitation, we propose a dual dilated layer that combines both large and small receptive fields. We further decouple the design of the first stage from the refining stages to address the different requirements of these stages. Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our models achieve state-of-the-art results on three datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.
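To make the architecture concrete, here is a minimal PyTorch sketch of the two building blocks described above: a dual dilated residual layer (two parallel temporal convolutions, one with a small and one with a large dilation, fused by a 1×1 convolution) and a prediction stage that stacks such layers over frame-wise features. Class names, the fusion strategy, and hyper-parameters are illustrative assumptions, not the exact implementation from the linked repository.

```python
import torch
import torch.nn as nn


class DualDilatedLayer(nn.Module):
    """Residual layer combining a small and a large dilation (illustrative sketch)."""

    def __init__(self, channels, dilation_small, dilation_large):
        super().__init__()
        # Two parallel temporal (1-D) convolutions with different receptive fields.
        self.conv_small = nn.Conv1d(channels, channels, kernel_size=3,
                                    padding=dilation_small, dilation=dilation_small)
        self.conv_large = nn.Conv1d(channels, channels, kernel_size=3,
                                    padding=dilation_large, dilation=dilation_large)
        # 1x1 convolution fusing the concatenated branches back to `channels`.
        self.fuse = nn.Conv1d(2 * channels, channels, kernel_size=1)
        self.dropout = nn.Dropout()

    def forward(self, x):                              # x: (batch, channels, time)
        out = torch.cat([self.conv_small(x), self.conv_large(x)], dim=1)
        out = self.dropout(torch.relu(self.fuse(out)))
        return x + out                                 # residual connection


class PredictionStage(nn.Module):
    """First stage: a stack of dual dilated layers over frame-wise features,
    followed by a 1x1 convolution producing per-frame class logits."""

    def __init__(self, num_layers, feature_dim, channels, num_classes):
        super().__init__()
        self.conv_in = nn.Conv1d(feature_dim, channels, kernel_size=1)
        # One branch's dilation grows as 2**l while the other shrinks as
        # 2**(L-1-l), so every layer sees both short- and long-range context.
        self.layers = nn.ModuleList([
            DualDilatedLayer(channels, 2 ** l, 2 ** (num_layers - 1 - l))
            for l in range(num_layers)
        ])
        self.conv_out = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, x):                              # x: (batch, feature_dim, time)
        x = self.conv_in(x)
        for layer in self.layers:
            x = layer(x)
        return self.conv_out(x)                        # (batch, num_classes, time)
```

In the full model, the refinement stages take the softmaxed frame-wise probabilities of the previous stage as input and apply a lighter stack of dilated residual layers; they are omitted from this sketch for brevity.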
Source Code and pre-trained model: https://github.com/sj-li/MS-TCN2
2. Paper
- MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation, Shijie Li, Yazan Abu Farha, Yun Liu, Ming-Ming Cheng, Juergen Gall, IEEE TPAMI, 2020. [pdf | Chinese version | code | project]
    @article{li2020ms,
      author  = {Shi-Jie Li and Yazan AbuFarha and Yun Liu and Ming-Ming Cheng and Juergen Gall},
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
      title   = {MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation},
      year    = {2020},
      doi     = {10.1109/TPAMI.2020.3021756},
    }
3. Results
Compared with previous methods, MS-TCN++ achieves improvements of more than 10% on several metrics across three challenging datasets: 50Salads, GTEA, and the Breakfast dataset. Each table below reports the segmental F1 score at overlap thresholds of 10%, 25%, and 50% (F1@{10,25,50}), the segmental edit score (Edit), and frame-wise accuracy (Acc), all in percent.

Results on the 50Salads dataset:

Arch | F1@{10,25,50} | Edit | Acc |
---|---|---|---|
Spatial CNN | 32.3, 27.1, 18.9 | 24.8 | 54.9 |
IDT+LM | 44.4, 38.9, 27.8 | 45.8 | 48.7 |
Bi-LSTM | 62.6, 58.3, 47.0 | 55.6 | 55.7 |
Dilated TCN | 52.2, 47.6, 37.4 | 43.1 | 59.3 |
ST-CNN | 55.9, 49.6, 37.1 | 45.9 | 59.4 |
TUnet | 59.3, 55.6, 44.8 | 50.6 | 60.6 |
ED-TCN | 68.0, 63.9, 52.6 | 52.6 | 64.7 |
TResNet | 69.2, 65.0, 54.4 | 60.5 | 66.0 |
TRN | 70.2, 65.4, 56.3 | 63.7 | 66.9 |
TDRN+UNet | 69.6, 65.0, 53.6 | 62.6 | 66.1 |
TDRN | 72.9, 68.5, 57.2 | 66.0 | 68.1 |
LCDC+ED-TCN | 73.8, –, – | 66.9 | 72.1 |
MS-TCN | 76.3, 74.0, 64.5 | 67.9 | 80.7 |
MS-TCN++(sh) | 78.7, 76.6, 68.3 | 70.7 | 82.2 |
MS-TCN++ | 80.7, 78.5, 70.1 | 74.3 | 83.7 |

Results on the GTEA dataset:

Arch | F1@{10,25,50} | Edit | Acc |
---|---|---|---|
Spatial CNN | 41.8, 36.0, 25.1 | – | 54.1 |
Bi-LSTM | 66.5, 59.0, 43.6 | – | 55.5 |
Dilated TCN | 58.8, 52.2, 42.2 | – | 58.3 |
ST-CNN | 58.7, 54.4, 41.9 | – | 60.6 |
TUnet | 67.1, 63.7, 51.9 | 60.3 | 59.9 |
ED-TCN | 72.2, 69.3, 56.0 | – | 64.0 |
LCDC+ED-TCN | 75.4, –, – | 72.8 | 65.3 |
TResNet | 74.1, 69.9, 57.6 | 64.3 | 65.8 |
TRN | 77.4, 71.3, 59.1 | 72.2 | 67.8 |
TDRN+UNet | 78.1, 73.8, 62.2 | 73.7 | 69.3 |
TDRN | 79.2, 74.4, 62.7 | 74.1 | 70.1 |
MS-TCN | 87.5, 85.4, 74.6 | 81.4 | 79.2 |
MS-TCN++(sh) | 88.2, 86.2, 75.9 | 83.0 | 79.7 |
MS-TCN++ | 88.8, 85.7, 76.0 | 83.5 | 80.1 |

Results on the Breakfast dataset:

Arch | F1@{10,25,50} | Edit | Acc |
---|---|---|---|
ED-TCN | –, –, – | – | 43.3 |
HTK | –, –, – | – | 50.7 |
TCFPN | –, –, – | – | 52.0 |
HTK(64) | –, –, – | – | 56.3 |
GRU | –, –, – | – | 60.6 |
GRU+length prior | –, –, – | – | 61.3 |
MS-TCN (IDT) | 58.2, 52.9, 40.8 | 61.4 | 65.1 |
MS-TCN (I3D) | 52.6, 48.1, 37.9 | 61.7 | 66.3 |
MS-TCN++ (I3D) (sh) | 63.3, 57.7, 44.5 | 64.9 | 67.3 |
MS-TCN++ (I3D) | 64.1, 58.6, 45.9 | 65.6 | 67.6 |
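For reference, the F1@{10,25,50} and Acc columns follow the standard action segmentation protocol: F1@τ is a segmental F1 score in which a predicted segment counts as correct if its intersection-over-union (IoU) with a same-label ground-truth segment reaches the threshold τ, and Acc is plain frame-wise accuracy. The sketch below is a rough, illustrative rendering of that protocol in NumPy (function names are ours, and the segmental edit score, a normalized Levenshtein distance over segment label sequences, is omitted); it is not the exact evaluation code behind these numbers.

```python
import numpy as np


def segments(labels):
    """Collapse a frame-wise label sequence into (label, start, end) segments."""
    segs, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segs.append((labels[start], start, t))
            start = t
    return segs


def f1_at_overlap(pred, gt, threshold):
    """Segmental F1: a predicted segment is a true positive if it overlaps an
    unmatched ground-truth segment of the same label with IoU >= threshold."""
    pred_segs, gt_segs = segments(pred), segments(gt)
    matched = [False] * len(gt_segs)
    tp = 0
    for label, ps, pe in pred_segs:
        best_iou, best_idx = 0.0, -1
        for i, (gl, gs, ge) in enumerate(gt_segs):
            if matched[i] or gl != label:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            iou = inter / union
            if iou > best_iou:
                best_iou, best_idx = iou, i
        if best_idx >= 0 and best_iou >= threshold:
            tp += 1
            matched[best_idx] = True
    fp, fn = len(pred_segs) - tp, len(gt_segs) - tp
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0


def frame_accuracy(pred, gt):
    """Frame-wise accuracy (the Acc column)."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float((pred == gt).mean())


# Example: pred = [0, 0, 1, 1, 1, 2], gt = [0, 0, 0, 1, 1, 2]
# f1_at_overlap(pred, gt, 0.5) -> 1.0, frame_accuracy(pred, gt) -> 0.833...
```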
We show some segmentation results below: