Conference

MirrorCalib: Utilizing Human Pose Information for Mirror-based Virtual Camera Calibration

Abstract

In this paper, we present the novel task of estimating the extrinsic parameters of a virtual camera relative to a real camera in exercise videos with a mirror. This task poses a significant challenge in scenarios where the views from the real and mirrored cameras have no overlap or shared salient features. To address this issue, prior knowledge of the human body and 2D joint locations are utilized to estimate the camera extrinsic parameters when a person is in front of a mirror. We devise a modified eight-point algorithm to obtain an initial estimation from 2D joint locations. The 2D joint locations are then refined subject to human body constraints. Finally, a RANSAC algorithm is employed to remove outliers by comparing their epipolar distances to a predetermined threshold. MirrorCalib achieves a rotation error of 1.82° and a translation error of 69.51 mm on a collected real-world dataset, which outperforms the state-of-the-art method.
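The outlier-removal step described in the abstract compares each joint correspondence's distance to its epipolar line against a threshold. The sketch below illustrates that general technique with NumPy; it is an assumption-laden illustration, not the paper's implementation, and the function names, the symmetric-distance formulation, and the pixel threshold are all hypothetical choices.

```python
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Symmetric point-to-epipolar-line distance for 2D correspondences.

    F: 3x3 fundamental matrix mapping image-1 points to epipolar
    lines in image 2; pts1, pts2: (N, 2) arrays of 2D joint locations.
    """
    n = pts1.shape[0]
    h1 = np.hstack([pts1, np.ones((n, 1))])  # homogeneous coordinates
    h2 = np.hstack([pts2, np.ones((n, 1))])
    l2 = h1 @ F.T  # epipolar lines in image 2 (one row per point)
    l1 = h2 @ F    # epipolar lines in image 1
    # Distance from (u, v) to line au + bv + c = 0 is |au+bv+c| / sqrt(a^2+b^2).
    d2 = np.abs(np.sum(l2 * h2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(l1 * h1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return 0.5 * (d1 + d2)

def filter_inliers(F, pts1, pts2, thresh=2.0):
    """Keep correspondences whose epipolar distance falls below thresh
    (in pixels); thresh is an illustrative value, not the paper's."""
    return epipolar_distances(F, pts1, pts2) < thresh
```

Inside a RANSAC loop, a candidate F estimated from a minimal sample of joint correspondences would be scored by how many correspondences `filter_inliers` retains, and the model with the largest inlier set kept.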

Authors

Liao L; Zheng R; Mitchell A

Volume

00

Pagination

pp. 1-7

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

July 16, 2024

DOI

10.1109/avss61716.2024.10672615

Name of conference

2024 IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)