Journal article

3-D Dynamic Multitarget Detection Algorithm Based on Cross-View Feature Fusion

Abstract

In autonomous driving, current single-modal algorithms suffer from data degradation and insufficient feature richness and therefore cannot effectively perform dynamic multitarget detection. To address this, a 3-D dynamic multitarget detection algorithm based on cross-view feature fusion is proposed. The algorithm adopts a two-stage parallel fusion framework that extracts point cloud and image features simultaneously in the first stage. Additionally, a LiDAR-camera feature mapping module is designed to establish pointwise correspondence between the two data modalities. A feature weighted fusion module is then designed to determine the weight of each point in the point cloud and image features. In the second stage, a keypoint-based feature extraction module is designed to enrich the features; it integrates the multiscale features and image features from the first stage to improve detection accuracy. The proposed algorithm was compared with other state-of-the-art (SOTA) methods on the KITTI, Waymo, and nuScenes datasets, and the detection accuracy for the vehicle class reached 93.03%. A module ablation study and accuracy evaluation on a self-made dataset showed that the proposed algorithm not only has good robustness, strong portability, and generalization ability but also achieves high detection accuracy.
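
The abstract describes a feature weighted fusion module that assigns per-point weights to the LiDAR and image features before combining them. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: the layer sizes, the softmax gating design, and the assumption that image features have already been gathered per point by the LiDAR-camera mapping module are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a per-point feature weighted fusion
# module: given point cloud features and the image features mapped to each point,
# predict a weight for each modality and fuse them. Layer sizes are assumptions.
import torch
import torch.nn as nn


class FeatureWeightedFusion(nn.Module):
    def __init__(self, point_dim: int = 64, image_dim: int = 64, fused_dim: int = 64):
        super().__init__()
        # Project both modalities into a common feature space.
        self.point_proj = nn.Linear(point_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        # Predict a per-point weight for each modality from the concatenated features.
        self.gate = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim),
            nn.ReLU(inplace=True),
            nn.Linear(fused_dim, 2),
        )

    def forward(self, point_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        # point_feat: (N, point_dim) per-point LiDAR features
        # image_feat: (N, image_dim) image features gathered at each point's
        #             projected pixel (the role played by the LiDAR-camera
        #             feature mapping module in the paper)
        p = self.point_proj(point_feat)
        i = self.image_proj(image_feat)
        w = torch.softmax(self.gate(torch.cat([p, i], dim=-1)), dim=-1)  # (N, 2)
        # Weighted sum of the two modalities for each point.
        return w[..., 0:1] * p + w[..., 1:2] * i


if __name__ == "__main__":
    fusion = FeatureWeightedFusion()
    pts = torch.randn(1024, 64)   # 1024 points with 64-D LiDAR features
    imgs = torch.randn(1024, 64)  # corresponding 64-D image features per point
    print(fusion(pts, imgs).shape)  # torch.Size([1024, 64])
```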

Authors

Zhou F; Tao C; Gao Z; Zhang Z; Zheng S; Zhu Y

Journal

IEEE Transactions on Artificial Intelligence, Vol. 5, No. 6, pp. 3146–3159

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

June 1, 2024

DOI

10.1109/tai.2023.3342104

ISSN

2691-4581
