Cross-Graph Domain Adaptation for Skeleton-based Human Action Recognition

Published on May 28, 2024

ABSTRACT

Recent research on human action recognition is largely facilitated by skeletal data, a compact graph representation composed of key joints of the human skeleton that is efficiently extracted by body tracking systems and is robust to environmental variations. However, the skeleton resolution and joint connectivity of the extracted skeletons may vary across sensor devices, resulting in different skeleton graph representations in the collected data. This paper investigates a cross-graph domain adaptation approach in which a skeleton action recognition model is trained on a source skeletal data domain but is expected to adapt to a target domain configured with a different skeleton graph. It proposes an adversarial learning framework in which a generation space is developed, on which the model learns valid skeletal action knowledge from the source graph domain. Interaction with an embedded discrimination space is employed to extract heterogeneous graph features from the target domain. The generation space and the discrimination space are optimized alternately under adversarial learning, which guarantees action-aware and domain-agnostic skeletal knowledge and thus forms a joint human action recognition model that functions effectively on both graph domains. In experiments, the proposed method is evaluated by incorporating graph convolutional networks on two skeleton action recognition benchmarks, NTU-RGB+D and Northwestern-UCLA, where comparisons demonstrate the effectiveness of the proposed approach. Code will be available at https://github.com/tht106/CrossGraphDA.
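The alternating adversarial optimization the abstract describes — a generator adapting target-domain features while a discriminator learns to tell the two domains apart — can be sketched on toy data as follows. This is a minimal illustrative simplification, not the paper's method: it uses a linear generator and a logistic discriminator on synthetic 2-D features instead of graph convolutional networks on skeleton graphs, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for features from two skeleton graph layouts:
# the target domain is shifted relative to the source domain.
Xs = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain features
Xt = rng.normal(3.0, 1.0, size=(200, 2))   # target-domain features

W, b = np.eye(2), np.zeros(2)              # generator: linear map on target features
w, c = rng.normal(size=2), 0.0             # discriminator: logistic regression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(300):
    Zt = Xt @ W + b                        # adapted target features
    X = np.vstack([Xs, Zt])
    y = np.concatenate([np.ones(len(Xs)), np.zeros(len(Zt))])  # 1 = source

    # Discriminator step: learn to separate source from adapted target.
    p = sigmoid(X @ w + c)
    g = (p - y) / len(y)                   # d(BCE)/d(logit), averaged
    w -= lr * (X.T @ g)
    c -= lr * g.sum()

    # Generator step: update W, b so adapted target features fool the
    # discriminator (i.e., are scored as "source").
    p_t = sigmoid(Zt @ w + c)
    dZt = np.outer(p_t - 1.0, w) / len(Zt)  # d(-log p_t)/dZt
    W -= lr * (Xt.T @ dZt)
    b -= lr * dZt.sum(axis=0)

# After training, the adapted target mean should sit close to the source mean,
# i.e., the two domains become hard to distinguish in the shared feature space.
adapted_gap = np.linalg.norm((Xt @ W + b).mean(0) - Xs.mean(0))
original_gap = np.linalg.norm(Xt.mean(0) - Xs.mean(0))
```

The alternation mirrors the abstract's scheme: each discriminator update sharpens the domain boundary, and each generator update moves the heterogeneous target-graph features across it, yielding domain-agnostic features that a single action classifier could then consume.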

Month: May

Year: 2024

Venue: 21st Conference on Robots and Vision

URL: https://crv.pubpub.org/pub/pb2qi4uo
