SUPER: Seated Upper Body Pose Estimation using mmWave Radars
Abstract
In industrialized countries, adults spend a considerable amount of time each day
sedentary at work, while driving, and during activities of daily living.
Characterizing seated upper body human poses using mmWave radars is an important
yet under-studied topic with many applications in human-machine interaction,
transportation, and road safety. In this work, we devise SUPER, a framework for
seated upper body human pose estimation that utilizes dual mmWave radars in
close proximity. A novel masking algorithm is proposed to coherently fuse data
from the radars and generate intensity and Doppler point clouds with
complementary information for high-motion but small radar cross section (RCS)
areas (e.g., upper extremities) and low-motion but large RCS areas (e.g., the
torso). A
lightweight neural network extracts both global and local features of the upper
body and outputs pose parameters for the Skinned Multi-Person Linear (SMPL)
model. Extensive leave-one-subject-out experiments on various motion sequences
from multiple subjects show that SUPER outperforms a state-of-the-art baseline
method by 30--184%. We also demonstrate its utility in a simple downstream
task for hand-object interaction.