Distribution and Depth-Aware Transformers for 3D Human Mesh Recovery

Published on May 28, 2024
ABSTRACT

Precise Human Mesh Recovery (HMR) from in-the-wild data is a formidable challenge, often hindered by depth ambiguities and reduced precision. Existing works resort to either pose priors or multi-modal data such as multi-view or point-cloud information, yet these methods often overlook the valuable scene-depth information inherently present in a single image. Moreover, achieving robust HMR for out-of-distribution (OOD) data is exceedingly challenging due to inherent variations in pose, shape, and depth; consequently, understanding the underlying distribution becomes a vital subproblem in modeling human forms. Motivated by the need for unambiguous and robust human modeling, we introduce Distribution and Depth-Aware Human Mesh Recovery (D2AHMR), an end-to-end transformer architecture designed to minimize the disparity between distributions and to incorporate scene depth by leveraging prior depth information. Our approach demonstrates superior performance in handling OOD data in certain scenarios while consistently achieving competitive results against state-of-the-art HMR methods on controlled datasets.

Month: May

Year: 2024

Venue: 21st Conference on Robots and Vision

URL: https://crv.pubpub.org/pub/f9hwdv89
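To make the two ideas named in the abstract concrete, the PyTorch sketch below fuses a per-patch depth prior into the transformer's input tokens and penalizes divergence from a pose prior as a stand-in for the distribution-alignment objective. The abstract page gives no implementation details, so every name, dimension, and the choice of a Gaussian/KL alignment term here is an assumption for illustration, not the authors' actual method.

```python
# Hypothetical sketch of depth-aware token fusion plus a
# distribution-alignment loss, loosely in the spirit of D2AHMR.
# All architectural choices below are assumptions, not the paper's design.
import torch
import torch.nn as nn

class D2AHMRSketch(nn.Module):
    """Toy stand-in: image patch features are fused with a monocular
    depth prior, encoded by a transformer, and the pose head predicts
    a Gaussian over pose parameters."""
    def __init__(self, dim=256, heads=8, layers=4, num_pose_params=72):
        super().__init__()
        self.img_proj = nn.Linear(2048, dim)   # assumed CNN backbone feature size
        self.depth_proj = nn.Linear(1, dim)    # per-patch scalar depth prior
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.mu_head = nn.Linear(dim, num_pose_params)      # 72 as in SMPL axis-angle pose (assumed)
        self.logvar_head = nn.Linear(dim, num_pose_params)

    def forward(self, img_feats, depth_prior):
        # img_feats: (B, N, 2048) patch features; depth_prior: (B, N, 1)
        tokens = self.img_proj(img_feats) + self.depth_proj(depth_prior)
        h = self.encoder(tokens).mean(dim=1)   # pool over patches
        return self.mu_head(h), self.logvar_head(h)

def distribution_alignment_loss(mu, logvar):
    # KL divergence to a standard-normal pose prior: one plausible
    # realization of "minimizing the disparity between distributions".
    return 0.5 * torch.mean(logvar.exp() + mu.pow(2) - 1.0 - logvar)

# Usage with random stand-in features.
model = D2AHMRSketch()
img_feats = torch.randn(2, 49, 2048)   # e.g. 7x7 grid of backbone patches
depth_prior = torch.rand(2, 49, 1)     # e.g. from an off-the-shelf depth estimator
mu, logvar = model(img_feats, depth_prior)
loss = distribution_alignment_loss(mu, logvar)
```

Fusing depth by addition into the token embedding is only one option; concatenation or cross-attention to a separate depth-token stream would serve the same purpose of conditioning the encoder on scene depth.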
