
Leveraging anthropometric measurements to improve human mesh estimation and ensure consistent body shapes

  • The basic body shape (i.e., the body shape in T-pose) of a person does not change within a single video. However, most SOTA human mesh estimation (HME) models output a slightly different, and thus inconsistent, basic body shape for each video frame. Furthermore, we find that SOTA 3D human pose estimation (HPE) models outperform HME models regarding the precision of the estimated 3D keypoint positions. We solve the problem of inconsistent body shapes by leveraging anthropometric measurements such as those taken by tailors. We create a model called A2B that converts given anthropometric measurements to basic body shape parameters of human mesh models. We obtain superior and consistent human meshes by combining the A2B model results with the keypoints of 3D HPE models using inverse kinematics. We evaluate our approach on challenging datasets such as ASPset and fit3D, where we lower the MPJPE by over 30 mm compared to SOTA HME models. Furthermore, replacing the body shape parameter estimates of existing HME models with A2B results not only increases the performance of these HME models, but also guarantees consistent body shapes.
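
    The snippet below is a minimal, hypothetical sketch of the measurement-to-shape mapping described in the abstract: a small regressor that maps tailor-style anthropometric measurements to the basic body shape parameters (betas) of a parametric mesh model such as SMPL. The network architecture, the number of measurements, and the use of PyTorch are assumptions for illustration and are not taken from the paper.

    ```python
    # Illustrative sketch only: maps anthropometric measurements to SMPL-style
    # shape parameters. Layer sizes and the measurement set are assumptions,
    # not the authors' A2B design.
    import torch
    import torch.nn as nn

    NUM_MEASUREMENTS = 16   # assumed number of anthropometric inputs (e.g. height, arm length)
    NUM_BETAS = 10          # SMPL-style basic body shape parameters

    class A2BSketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(NUM_MEASUREMENTS, 128),
                nn.ReLU(),
                nn.Linear(128, 128),
                nn.ReLU(),
                nn.Linear(128, NUM_BETAS),
            )

        def forward(self, measurements: torch.Tensor) -> torch.Tensor:
            # measurements: (batch, NUM_MEASUREMENTS), e.g. in centimetres
            return self.net(measurements)

    if __name__ == "__main__":
        model = A2BSketch()
        dummy = torch.randn(4, NUM_MEASUREMENTS)  # placeholder measurements
        betas = model(dummy)                      # one consistent shape vector per person
        print(betas.shape)                        # torch.Size([4, 10])
    ```

    Because the shape vector is predicted once per person from fixed measurements, it stays constant across all frames of a video; the per-frame pose can then be recovered separately, e.g. via inverse kinematics on 3D HPE keypoints as the abstract describes.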

Download full text files

  • 121474.pdf (eng)
    (1,196 KB)

    Postprint. © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Metadata
Author:Katja Ludwig, Julian Lorenz, Daniel Kienzle, Tuan Bui, Rainer Lienhart
URN:urn:nbn:de:bvb:384-opus4-1214746
Frontdoor URL:https://opus.bibliothek.uni-augsburg.de/opus4/121474
ISBN:979-8-3315-9994-2
ISSN:2160-7516
Parent Title (English):2025 IEEE/CVF International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 11-12 June 2025, Nashville, TN, USA
Publisher:IEEE
Place of publication:Piscataway, NJ
Type:Conference Proceeding
Language:English
Date of Publication (online):2025/04/17
Year of first Publication:2025
Publishing Institution:Universität Augsburg
Release Date:2025/04/18
First Page:5862
Last Page:5871
DOI:https://doi.org/10.1109/CVPRW67362.2025.00585
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Maschinelles Lernen und Maschinelles Sehen
Dewey Decimal Classification:0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence (German):Deutsches Urheberrecht (German copyright law)