Introduction
The rapid advancement of artificial intelligence has significantly elevated video content creation technologies, resulting in tens of millions of new videos being uploaded to online platforms daily (Taleb & Abbas, 2022; Abbas et al., 2021). Given this vast volume of content, users increasingly need to see highlights or retrieve the precise segments of a video that are most pertinent to a given textual query, so that they can quickly skip to the relevant parts (Hamza et al., 2022; Sahoo & Gupta, 2021). In this paper, we focus on two video understanding tasks: highlight detection (HD) and temporal grounding (TG), as depicted in Fig. 1. Given a video paired with a natural language query, the objective of HD is to predict a highlight (saliency) score for each video clip (Y. Liu et al., 2022). TG aims to retrieve all spans in the video that are most relevant to the query, where each span consists of a start and an end clip (Gao et al., 2017). Since both tasks aim to find the clips that best match the query, recent work (Lei et al., 2021) proposed the QVHighlights dataset to support HD and TG concurrently.
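To make the two output formats concrete, below is a minimal sketch (our own illustration, not part of the paper) of what an HD&TG prediction for a single video-query pair might look like; the clip count, score values, and field names are assumptions.

```python
# Minimal sketch (illustrative assumption, not from the paper) of HD&TG outputs
# for one video-query pair: HD produces one saliency score per clip, while TG
# produces a set of (start_clip, end_clip) spans most relevant to the query.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HDTGPrediction:
    saliency: List[float]           # HD: one highlight score per video clip
    spans: List[Tuple[int, int]]    # TG: (start_clip, end_clip) index pairs

# Example: a 75-clip video in which clips 12-20 and 43-47 match the query.
pred = HDTGPrediction(saliency=[0.1] * 75, spans=[(12, 20), (43, 47)])
for start, end in pred.spans:
    for i in range(start, end + 1):
        pred.saliency[i] = 0.9      # clips inside matched spans score higher
```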
Figure 1. A depiction of HD&TG. Given a video paired with its corresponding textual query, the goal of HD&TG is to predict frame-wise saliency scores and locate all the most relevant spans simultaneously
Figure 2. Comparison between Moment-DETR (a) and QD-Net (b)
The primary challenge of the HD&TG task lies in effectively generating cross-modal features that carry query-related information, since these features are used both to predict highlights and to locate the query-matched spans. Inspired by DETR (Carion et al., 2020), Moment-DETR (Lei et al., 2021) designed a transformer encoder-decoder pipeline to tackle this challenge, as shown in Fig. 2(a). However, Moment-DETR directly concatenates video and text tokens for coarse fusion in the encoder. This approach mixes intra-modal contextual modeling with cross-modal feature interaction: when the similarity among video frames far surpasses the video-query similarity, the resulting cross-modal features become largely irrelevant to the query, leading to degraded performance. Moreover, both object detection (OD) and TG require decoder-based localization. Recent DETR-based research (S. Liu et al., 2022) indicates that using dynamic bounding-box anchors as decoder queries alleviates the slow convergence of OD training. Yet Moment-DETR employs only learnable embeddings in the decoder and lacks explicit temporal span modeling, which limits both convergence speed and accuracy on the TG task.
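To make the coarse-fusion issue concrete, the following is a minimal sketch (our own illustration, not Moment-DETR's released code; dimensions and layer counts are assumptions) of concatenating video and text tokens into a single self-attention encoder, where intra-modal context modeling and cross-modal interaction must share the same attention maps.

```python
# Sketch of coarse fusion (illustrative, not Moment-DETR's actual implementation):
# video and text tokens are concatenated and encoded jointly, so self-attention
# must handle intra-modal context and cross-modal interaction at the same time.
import torch
import torch.nn as nn

d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)

video_tokens = torch.randn(1, 75, d_model)  # clip-level visual features
text_tokens = torch.randn(1, 20, d_model)   # query-token features

# One token sequence, one attention space for both modalities. If clip-clip
# similarity dominates clip-query similarity, attention mass concentrates within
# the video tokens and the fused clip features carry little query information.
fused = encoder(torch.cat([video_tokens, text_tokens], dim=1))
video_fused = fused[:, : video_tokens.size(1)]  # features fed to the HD/TG heads
```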
In this paper, we propose a new HD&TG model named QD-Net (Query-guided refinement and Dynamic spans Network) to tackle the above issues. As shown in Fig. 2(b), QD-Net decouples feature encoding from feature interaction via a query-guided refinement module, which fuses video and text tokens to produce query-relevant cross-modal features. To capture intra-modal context from a global perspective, we adopt the simple yet efficient PoolFormer (Yu et al., 2022) in both the visual and text encoders. In addition, we design a span decoder that more explicitly associates learnable embeddings with predicted span positions and speeds up training convergence for the TG task. Specifically, the decoder contains learnable 2D spans that are dynamically updated at each layer, and their size modulates the cross-attention weights within the decoder. To demonstrate the superiority of QD-Net, we conduct comprehensive experiments and ablation studies on three publicly available datasets (QVHighlights, TVSum, and Charades-STA). The results show that QD-Net outperforms current state-of-the-art (SOTA) approaches. Notably, on the QVHighlights dataset, our model achieves 61.87 HD-HIT@1 and 61.88 TG-mAP@0.5, gains of +1.88 and +8.05 over the previous SOTA method. In summary, our principal contributions include: