Uplift and upsample: efficient 3D human pose estimation with uplifting transformers

  • The state-of-the-art for monocular 3D human pose estimation in videos is dominated by the paradigm of 2D-to-3D pose uplifting. While the uplifting methods themselves are rather efficient, the true computational complexity depends on the per-frame 2D pose estimation. In this paper, we present a Transformer-based pose uplifting scheme that can operate on temporally sparse 2D pose sequences but still produce temporally dense 3D pose estimates. We show how masked token modeling can be utilized for temporal upsampling within Transformer blocks. This allows us to decouple the sampling rate of the input 2D poses from the target frame rate of the video and drastically decreases the total computational complexity. Additionally, we explore the option of pre-training on large motion capture archives, which has been largely neglected so far. We evaluate our method on two popular benchmark datasets: Human3.6M and MPI-INF-3DHP. With an MPJPE of 45.0 mm and 46.9 mm, respectively, our proposed method can compete with the state-of-the-art while reducing inference time by a factor of 12. This enables real-time throughput with variable consumer hardware in stationary and mobile applications. We release our code and models at https://github.com/goldbricklemon/uplift-upsample-3dhpe
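The core idea stated in the abstract, decoupling the sampling rate of the input 2D poses from the target frame rate via masked token modeling, can be sketched as follows. This is a minimal illustration under stated assumptions, not the released implementation: the helper name `build_dense_tokens` is hypothetical, and the mask token, a zero vector here, would be a learned embedding in the actual model. Before the Transformer blocks, the embeddings of the sparsely sampled 2D poses are interleaved with mask tokens at the unobserved frames, so the model can output one 3D pose per video frame:

```python
def build_dense_tokens(sparse_tokens, stride, mask_token):
    """Interleave observed pose embeddings with mask tokens.

    sparse_tokens: list of d-dim embeddings for 2D poses sampled
                   at every `stride`-th video frame.
    mask_token:    d-dim placeholder for the unobserved frames
                   (a learned embedding in the actual model).
    Returns a dense list of length (n - 1) * stride + 1, i.e. one
    token per target video frame.
    """
    n = len(sparse_tokens)
    total_frames = (n - 1) * stride + 1
    # Start with a mask token at every frame ...
    dense = [list(mask_token) for _ in range(total_frames)]
    # ... then overwrite the observed frames with their embeddings.
    for i, token in enumerate(sparse_tokens):
        dense[i * stride] = list(token)
    return dense

# 4 observed frames with 8-dim embeddings, sampled at every 4th frame
sparse = [[float(i + 1)] * 8 for i in range(4)]
mask = [0.0] * 8
dense = build_dense_tokens(sparse, stride=4, mask_token=mask)
print(len(dense))     # 13 tokens: one per video frame
print(dense[4][0])    # 2.0: second observed embedding sits at frame 4
```

Only every `stride`-th frame needs a 2D pose estimate, so the expensive per-frame 2D detector runs `stride` times less often, which is where the reported reduction in total inference cost would come from.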

Download full text files

  • 98921.pdf (eng, 2602 KB)

    Postprint. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Metadata

Author: Moritz Einfalt, Katja Ludwig, Rainer Lienhart
URN: urn:nbn:de:bvb:384-opus4-989219
Frontdoor URL: https://opus.bibliothek.uni-augsburg.de/opus4/98921
ISBN: 978-1-6654-9346-8
Parent Title (English):IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2-7 January 2023
Publisher:IEEE
Place of publication:Piscataway, NJ
Editor:Tamara Berg, Ryan Farrell, Eric Mortensen
Type:Conference Proceeding
Language:English
Year of first Publication:2023
Publishing Institution:Universität Augsburg
Release Date:2022/10/26
First Page:2902
Last Page:2912
DOI:https://doi.org/10.1109/WACV56688.2023.00292
Institutes:Fakultät für Angewandte Informatik
Fakultät für Angewandte Informatik / Institut für Informatik
Fakultät für Angewandte Informatik / Institut für Informatik / Lehrstuhl für Maschinelles Lernen und Maschinelles Sehen
Dewey Decimal Classification: 0 Computer science, information & general works / 00 Computer science, knowledge & systems / 004 Data processing; computer science
Licence (German): Deutsches Urheberrecht (German copyright law)