Let us congratulate Assistant Professor Liu Jun and his PhD students, Xu Li and Li Tianjiao, on the acceptance of their papers by CVPR 2021. CVPR is a premier AI conference: according to Google Scholar, it ranks 5th among all venues across all subjects (https://scholar.google.com/citations?view_op=metrics_intro), and 1st in the Engineering and Computer Science discipline (https://scholar.google.com/citations?view_op=top_venues&vq=eng).

Assistant Professor Liu Jun has a total of five papers accepted to CVPR 2021 (http://cvpr2021.thecvf.com/), two of which introduce datasets that could potentially have a larger impact. Please see the details below.

1. SUTD TrafficQA dataset:

Intelligent transportation has been receiving increasing attention recently, and for applications such as assisted driving, violation detection, and congestion forecasting, accurate and efficient cognition and reasoning over the traffic events captured by video cameras are extremely important. The great success of deep learning in many areas has demonstrated that well-designed datasets are crucial for the development, adaptation, and evaluation of different data-driven approaches. This indicates the significance of creating comprehensive and challenging benchmarks for video causal reasoning and the cognitive development of models that explore the underlying causal structures of various traffic events. Considering the lack of representative datasets in this domain, Assistant Professor Liu Jun's group introduced a novel dataset, SUTD-TrafficQA (https://github.com/SUTDCV/SUTD-TrafficQA), to facilitate research on causal reasoning in complex traffic scenarios. To help develop models for addressing several major issues in intelligent transportation, six challenging reasoning tasks were designed, which require exploring the complex causal structures within the inference process of traffic events. These tasks correspond to various traffic scenarios involving both road agents and their surroundings, and the models are required to forecast future events, infer past situations, explain accident causes, provide preventive advice, and so on.

Paper Title:
SUTD-TrafficQA: A Question Answering Benchmark and an Efficient Network for Video Reasoning over Traffic Events

Paper Link:
https://www.researchgate.net/profile/Li-Xu-134/publication/350432154_SUTD-TrafficQA_A_Question_Answering_Benchmark_and_an_Efficient_Network_for_Video_Reasoning_over_Traffic_Events/links/605f1ae492851cd8ce6cad11/SUTD-TrafficQA-A-Question-Answering-Benchmark-and-an-Efficient-Network-for-Video-Reasoning-over-Traffic-Events.pdf

The first author is the first-year PhD student:
Xu Li

This work has been covered by OperaNews:
https://www.dailyadvent.com/news/2497f83762bf1e9aaf516010fac7b589-TrafficQA-A-Question-Answering-Benchmark-and-an-Efficient-Network-for-Video-Reasoning-over-Traffic-Events

2. UAV-Human dataset:

Given their flexibility and long-range tracking capability, unmanned aerial vehicles (UAVs) equipped with cameras are often used to collect information remotely in scenarios where it is either impossible or impractical to use ground cameras. One particular area where UAVs are often deployed is human behavior understanding and surveillance in the wild, where video sequences of human subjects can be collected for analysis and subsequent decision making. Compared to videos collected by common ground cameras, the video sequences captured by UAVs generally present more diversified yet unique viewpoints, more obvious motion blur, and more varied subject resolutions, owing to the fast motion and continuously changing attitudes and heights of the UAVs during flight. These factors lead to significant challenges in UAV-based human behavior understanding, clearly requiring the design and development of methods that specifically take the unique characteristics of UAV application scenarios into consideration. To promote research in this area, and to help the community develop, adapt, and evaluate various types of advanced methods for UAV-based human behavior understanding, Assistant Professor Liu Jun's group introduced the first large benchmark in this domain, UAV-Human (https://github.com/SUTDCV/UAV-Human). To construct this benchmark, video samples were collected by flying UAVs equipped with multiple sensors in both daytime and night-time, over three different months, and across multiple rural districts and cities, yielding a large number of video samples with extensive diversity w.r.t. human subjects, data modalities, capturing environments, and UAV flying attitudes and speeds.

Paper Title:
UAV-Human: A Large Benchmark for Human Behavior Understanding with Unmanned Aerial Vehicles

Paper Link:
https://www.researchgate.net/profile/Tianjiao-Li-13/publication/350558689_UAV-Human_A_Large_Benchmark_for_Human_Behavior_Understanding_with_Unmanned_Aerial_Vehicles/links/6065f77192851c91b1985b12/UAV-Human-A-Large-Benchmark-for-Human-Behavior-Understanding-with-Unmanned-Aerial-Vehicles.pdf

The first author is the first-year PhD student:
Li Tianjiao

3. Besides the aforementioned two works, another three papers on human-centric image/video analysis were also accepted to CVPR 2021.