Journal article

CROMOSim: A Deep Learning-Based Cross-Modality Inertial Measurement Simulator

Abstract

With the prevalence of wearable devices, inertial measurement unit (IMU) data has been utilized in monitoring and assessing human mobility, for example in human activity recognition (HAR) and human pose estimation (HPE). Training deep neural network (DNN) models for these tasks requires large amounts of labelled data, which are hard to acquire in uncontrolled environments. To mitigate the data scarcity problem, we design CROMOSim, a cross-modality sensor simulator that synthesizes high-fidelity virtual IMU sensor data from motion capture systems or monocular RGB cameras. It utilizes the skinned multi-person linear (SMPL) body model for 3D pose and shape representation, enabling simulation from arbitrary on-body positions. A DNN model is then trained to learn the functional mapping from trajectory estimates on the 3D SMPL body tri-mesh, which are imperfect due to measurement noise, calibration errors, occlusion and other modelling artifacts, to IMU data. We evaluate the fidelity of CROMOSim-simulated data and its utility for data augmentation on various HAR and HPE datasets. Extensive empirical results show that the proposed model achieves a 6.7% improvement over baseline methods on a HAR task.
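
To make the idea of a "virtual IMU" concrete, the sketch below shows the classical physics-based way to derive accelerometer and gyroscope readings from the position and orientation trajectory of an on-body point (such as an SMPL mesh vertex) by numerical differentiation. This is not the paper's code or its learned model; CROMOSim instead trains a DNN to perform this mapping so it can absorb measurement noise, calibration errors, occlusion and other modelling artifacts. All function names, array conventions and the sampling rate here are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): analytical virtual-IMU simulation
# from a sampled trajectory of one on-body point.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

def virtual_imu(positions, rotations, fs=100.0):
    """Derive accelerometer and gyroscope readings for one on-body point.

    positions : (T, 3) world-frame positions of the point (e.g. an SMPL vertex)
    rotations : (T, 3, 3) sensor-to-world rotation matrices at each frame
    fs        : sampling rate in Hz (assumed)
    """
    dt = 1.0 / fs

    # Linear acceleration in the world frame via finite differences.
    accel_world = np.gradient(np.gradient(positions, dt, axis=0), dt, axis=0)

    # Accelerometer measures specific force: subtract gravity, then rotate the
    # world-frame vector into the sensor frame with R^T.
    accel_body = np.einsum("tji,tj->ti", rotations, accel_world - GRAVITY)

    # Gyroscope: body-frame angular velocity from consecutive rotations,
    # using the small-angle approximation R_t^T R_{t+1} ~ I + [w]_x * dt.
    gyro = np.zeros_like(positions)
    for t in range(len(rotations) - 1):
        dR = rotations[t].T @ rotations[t + 1]
        w_skew = (dR - dR.T) / (2.0 * dt)   # skew-symmetric part / dt
        gyro[t] = np.array([w_skew[2, 1], w_skew[0, 2], w_skew[1, 0]])
    gyro[-1] = gyro[-2]
    return accel_body, gyro
```

In practice, such double differentiation amplifies noise in the estimated trajectories, which is one motivation the abstract gives for learning the trajectory-to-IMU mapping with a DNN instead.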

Authors

Hao Y; Lou X; Wang B; Zheng R

Journal

IEEE Transactions on Mobile Computing, Vol. 23, No. 1, pp. 302–312

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 2024

DOI

10.1109/tmc.2022.3230370

ISSN

1536-1233
