BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation

BoPR is a body reconstruction method that takes a monocular RGB image as input and uses body reference features as conditioning to mitigate the depth ambiguity of body parts.

Abstract

This paper presents a novel approach for estimating human body shape and pose from monocular images that effectively addresses the challenges of occlusion and depth ambiguity. Our proposed method, BoPR, the Body-aware Part Regressor, first extracts features of both the body and part regions using an attention-guided mechanism. We then use these features to encode an additional part-body dependency for per-part regression, with part features as queries and the body feature as a reference. This allows our network to infer the spatial relationship between occluded parts and the body by leveraging visible parts and body reference information. Our method outperforms existing state-of-the-art methods on two benchmark datasets, and our experiments show that it significantly surpasses existing methods in handling depth ambiguity and occlusion. These results provide strong evidence of the effectiveness of our approach.
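As a rough illustration of the first stage described above, the sketch below shows one way attention-guided extraction of a body feature and per-part features could look. This is a minimal assumption-based example, not the released implementation; the module name, channel sizes, and number of parts are placeholders.

```python
# Minimal sketch (assumptions, not the official code) of attention-guided feature
# extraction: a backbone feature map is pooled into one body feature and K part
# features via predicted soft attention maps over spatial locations.
import torch
import torch.nn as nn


class AttentionFeatureExtractor(nn.Module):
    def __init__(self, in_channels=2048, num_parts=24):
        super().__init__()
        # 1x1 conv predicts one attention map per part plus one for the whole body
        self.attn_head = nn.Conv2d(in_channels, num_parts + 1, kernel_size=1)

    def forward(self, feat_map):                        # feat_map: (B, C, H, W)
        attn = self.attn_head(feat_map)                 # (B, K+1, H, W)
        attn = attn.flatten(2).softmax(dim=-1)          # soft attention over H*W locations
        # Attention-weighted pooling of the feature map for each region
        feats = torch.einsum('bkn,bcn->bkc', attn, feat_map.flatten(2))
        body_feat, part_feats = feats[:, :1], feats[:, 1:]   # (B, 1, C), (B, K, C)
        return body_feat, part_feats
```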

Additional Results

Method Overview

Given an input image, our method first extracts body and part features using a soft attention mechanism. Then, each part feature is concatenated with the body feature to form an input token for a transformer, which encodes body-aware part features used for camera prediction and SMPL parameter regression. A hedged code sketch of this stage follows below.
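The sketch below illustrates, under assumptions, how such part-plus-body tokens could be fed to a transformer encoder and regressed into per-part SMPL rotations, shape coefficients, and a weak-perspective camera. Module names, dimensions, and head layouts are illustrative placeholders rather than the paper's exact design.

```python
# Minimal sketch (assumptions, not the released code): each part feature is
# concatenated with the body feature to form a token; a transformer encoder
# produces body-aware part features used for SMPL and camera regression.
import torch
import torch.nn as nn


class BodyAwarePartRegressor(nn.Module):
    def __init__(self, feat_dim=2048, token_dim=512, num_parts=24, num_layers=2):
        super().__init__()
        self.tokenize = nn.Linear(2 * feat_dim, token_dim)    # [part ; body] -> token
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.pose_head = nn.Linear(token_dim, 6)                   # 6D rotation per part
        self.shape_head = nn.Linear(token_dim * num_parts, 10)     # SMPL shape (betas)
        self.cam_head = nn.Linear(token_dim * num_parts, 3)        # scale + 2D translation

    def forward(self, body_feat, part_feats):       # (B, 1, C), (B, K, C)
        body = body_feat.expand(-1, part_feats.size(1), -1)
        tokens = self.tokenize(torch.cat([part_feats, body], dim=-1))  # (B, K, D)
        tokens = self.encoder(tokens)                # body-aware part features
        pose = self.pose_head(tokens)                # (B, K, 6) per-part rotations
        flat = tokens.flatten(1)
        return pose, self.shape_head(flat), self.cam_head(flat)
```

In this sketch the two modules compose naturally: the extractor's body and part features are passed directly to the regressor, mirroring the two-stage pipeline described above.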

Results

Technical Paper



Yongkang Cheng, Shaoli Huang, Jifeng Ning, Ying Shan
BoPR: Body-aware Part Regressor for Human Shape and Pose Estimation