## Abstract

Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.

## What Is Temporal Relational Reasoning

1. Activities that rely on temporal relational reasoning: the frame order cannot be shuffled without changing the meaning. These relations divide into short-term and long-term ones (a long-term relation is composed of multiple short-term relations).
2. Activities recognizable from appearance and motion alone: the frame order can be shuffled.

Example activities from the paper:

1. Poking a stack of cans so it collapses;
2. Stacking cookies;
3. Tidying up a closet;
4. Thumb up.

## Dataset Categorization

1. Fundamentally depend on temporal relational reasoning: Something-Something / Jester / Charades
2. Largely recognizable from appearance and motion alone: UCF101 / Sports1M / THUMOS

## Temporal Relation Networks

The TRN (Temporal Relation Network) module is inspired by "A simple neural network module for relational reasoning". Built on the TSN framework, it sparsely samples frames and captures video information at multiple time scales.

### Defining Temporal Relations

$T_{2}(V) = h_{\phi}(\sum_{i<j}g_{\theta}(f_{i}, f_{j}))$

• $$V$$ is an ordered sequence of $$n$$ frames, $$V = \{f_{1}, f_{2}, ..., f_{n}\}$$
• $$f_{i}$$ is the feature representation of the $$i$$-th frame in the video sequence
• $$h_{\phi}$$ and $$g_{\theta}$$ are functions that fuse the frame features (implemented as MLPs in the paper)

$T_{3}(V) = h_{\phi}^{'}(\sum_{i<j<k}g_{\theta}^{'}(f_{i}, f_{j}, f_{k}))$
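A minimal NumPy sketch of the pairwise relation $T_{2}$, with $g_{\theta}$ and $h_{\phi}$ (small MLPs in the paper) reduced to single random linear layers; all dimensions here are assumptions for illustration:

```python
import numpy as np

# Illustration only: frame features are D-dim vectors; g_theta and h_phi
# (small MLPs in the paper) are reduced here to single random linear layers.
rng = np.random.default_rng(0)
D, H, C = 8, 16, 4          # assumed: feature dim, hidden dim, num classes

Wg = rng.standard_normal((2 * D, H)) * 0.1   # g_theta: concat(f_i, f_j) -> H
Wh = rng.standard_normal((H, C)) * 0.1       # h_phi: pooled relation -> scores

def relu(x):
    return np.maximum(x, 0.0)

def T2(frames):
    """2-frame relation: sum g_theta over all ordered pairs i < j, then h_phi."""
    pooled = np.zeros(H)
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            pooled += relu(np.concatenate([frames[i], frames[j]]) @ Wg)
    return pooled @ Wh

video = [rng.standard_normal(D) for _ in range(5)]  # 5 sparsely sampled frames
print(T2(video).shape)       # -> (4,)
```

Because each pair is concatenated in temporal order ($i < j$), reversing the frame sequence changes the inputs to $g_{\theta}$, so the output is order-sensitive; this is what separates TRN from order-invariant pooling.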

### Multi-Scale Temporal Relations

$MT_{N}(V) = T_{2}(V) + T_{3}(V) + ... + T_{N}(V)$

• $$T_{d}$$ captures the temporal relation among $$d$$ ordered frames
• Each $$T_{d}$$ has its own $$h_{\phi}$$ and $$g_{\theta}$$ (that is, separate neural networks compute the 2-frame / 3-frame / ... / N-frame relations)
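The multi-scale sum can be sketched the same way, with one independent weight pair per scale $d$. Note that this illustration enumerates every ordered $d$-tuple, whereas the paper uniformly samples only a few $d$-frame tuples per scale for efficiency; dimensions are again assumed:

```python
import numpy as np
from itertools import combinations

# Illustration only: one independent (g'_theta, h'_phi) pair per scale d,
# again reduced to single random linear layers; all dims are assumptions.
rng = np.random.default_rng(1)
D, H, C, N = 8, 16, 4, 4     # feature dim, hidden dim, num classes, max scale

Wg = {d: rng.standard_normal((d * D, H)) * 0.1 for d in range(2, N + 1)}
Wh = {d: rng.standard_normal((H, C)) * 0.1 for d in range(2, N + 1)}

def relu(x):
    return np.maximum(x, 0.0)

def Td(frames, d):
    """d-frame relation T_d with its own g and h weights."""
    pooled = np.zeros(H)
    for idx in combinations(range(len(frames)), d):   # ordered d-tuples
        pooled += relu(np.concatenate([frames[i] for i in idx]) @ Wg[d])
    return pooled @ Wh[d]

def MTN(frames):
    """Multi-scale relation MT_N: sum of T_2 .. T_N class scores."""
    return sum(Td(frames, d) for d in range(2, N + 1))

video = [rng.standard_normal(D) for _ in range(6)]
print(MTN(video).shape)      # -> (4,)
```

Summing the per-scale class scores lets short-term (small $d$) and long-term (large $d$) relations contribute jointly to the final prediction.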