Active Object Detection with Knowledge Aggregation and Distillation from Large Models

Wangxuan Institute of Computer Technology, Peking University
CVPR 2024

*Corresponding Author

An example of a state-changing carrot. Active object detection (here, the state-changing carrot) is difficult because (1) the visual differences between a carrot undergoing a state change and one that is not can be subtle, and there are multiple distractors, and (2) the intra-class visual appearance variance of the carrot under state changes is large. To achieve accurate detection, we propose constructing triple priors to provide hints for the model, including semantic interaction priors, fine-grained visual priors, and spatial priors of active objects.

Abstract

Accurately detecting active objects undergoing state changes is essential for comprehending human interactions and facilitating decision-making. Existing methods for active object detection (AOD) primarily rely on the visual appearance of objects within the input, such as changes in size, shape, and relationship with hands. However, these visual changes can be subtle, posing challenges, particularly in scenarios with multiple distracting no-change instances of the same category. We observe that state changes are often the result of an interaction being performed upon the object, and thus propose to use informed priors about plausible object-related interactions (including semantics and visual appearance) to provide more reliable cues for AOD. Specifically, we propose a knowledge aggregation procedure to integrate the aforementioned informed priors into oracle queries within the teacher decoder, offering more object affordance commonsense to locate the active object. To streamline the inference process and reduce extra knowledge inputs, we propose a knowledge distillation approach that encourages the student decoder to mimic the detection capabilities of the teacher decoder using the oracle query by replicating its predictions and attention. Our proposed framework achieves state-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens, MECCANO, and 100DOH, which demonstrates the effectiveness of our approach in improving AOD.
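To make the distillation objective described above concrete, the sketch below shows a minimal, hypothetical loss in which a student decoder mimics the teacher decoder's class predictions, box predictions, and attention maps. The tensor shapes, temperature, and the specific choice of KL, L1, and MSE terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_boxes, teacher_boxes,
                      student_attn, teacher_attn,
                      temperature=2.0):
    """Hypothetical distillation term: the student replicates the teacher's
    predictions and attention. Illustrative shapes:
      *_logits: (num_queries, num_classes)
      *_boxes : (num_queries, 4), normalized cxcywh
      *_attn  : (num_heads, num_queries, num_tokens) attention weights
    """
    # Prediction mimicking: KL divergence between tempered class distributions.
    cls_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Box mimicking: L1 distance to the teacher's predicted boxes.
    box_loss = F.l1_loss(student_boxes, teacher_boxes.detach())

    # Attention mimicking: match the student's attention maps to the teacher's.
    attn_loss = F.mse_loss(student_attn, teacher_attn.detach())

    return cls_loss + box_loss + attn_loss


if __name__ == "__main__":
    # Dummy usage with random tensors.
    q, c, t, h = 100, 91, 1024, 8
    loss = distillation_loss(
        torch.randn(q, c), torch.randn(q, c),
        torch.rand(q, 4), torch.rand(q, 4),
        torch.rand(h, q, t), torch.rand(h, q, t),
    )
    print(loss.item())
```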

Framework


Proposed architecture: Knowledge Aggregation and Distillation (KAD). Our KAD architecture comprises two distinct detectors: the Vision-Based Detector (highlighted in orange, detailed in Section 3.1) and the Knowledge-Enhanced Detector (highlighted in green, elaborated in Section 3.3). Knowledge and concepts related to active object categories are systematically gathered and consolidated within the Knowledge Aggregator (shown in gray, positioned at the lower left). Best viewed in color.
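As a rough illustration of the aggregation step in the caption above, the sketch below fuses semantic interaction priors, fine-grained visual priors, and spatial priors into oracle queries for the knowledge-enhanced (teacher) decoder. The feature dimensions, the concatenate-then-project fusion, and the way the fused prior is added to learnable base queries are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class KnowledgeAggregator(nn.Module):
    """Hypothetical aggregator that turns triple priors into oracle queries.

    sem_prior: semantic interaction priors, e.g. text features of plausible interactions
    vis_prior: fine-grained visual priors, e.g. exemplar features of the object under state change
    spa_prior: spatial priors of active objects, e.g. coarse location cues
    """
    def __init__(self, sem_dim=512, vis_dim=768, spa_dim=4,
                 query_dim=256, num_queries=100):
        super().__init__()
        self.sem_proj = nn.Linear(sem_dim, query_dim)
        self.vis_proj = nn.Linear(vis_dim, query_dim)
        self.spa_proj = nn.Linear(spa_dim, query_dim)
        self.fuse = nn.Linear(3 * query_dim, query_dim)
        self.base_queries = nn.Embedding(num_queries, query_dim)

    def forward(self, sem_prior, vis_prior, spa_prior):
        # Each prior: (batch, prior_dim). Project, concatenate, and fuse.
        fused = self.fuse(torch.cat([
            self.sem_proj(sem_prior),
            self.vis_proj(vis_prior),
            self.spa_proj(spa_prior),
        ], dim=-1))                                    # (batch, query_dim)
        # Broadcast the fused prior over all learnable base queries.
        oracle = self.base_queries.weight.unsqueeze(0) + fused.unsqueeze(1)
        return oracle                                  # (batch, num_queries, query_dim)


if __name__ == "__main__":
    agg = KnowledgeAggregator()
    queries = agg(torch.randn(2, 512), torch.randn(2, 768), torch.randn(2, 4))
    print(queries.shape)  # torch.Size([2, 100, 256])
```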

Results

1. Comparisons with other methods on Ego4D. We bold the best results and underline the second best ones.


2. Comparisons with other methods on Epic-Kitchens. We bold the best results and underline the second best ones.


3. Comparisons with other methods on MECCANO. We bold the best results and underline the second best ones.


4. Comparisons with other methods on 100DOH. We bold the best results and underline the second best ones.
