Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision
This is the official repository of CMFlow, a cross-modal supervised approach for estimating 4D radar scene flow. For technical details, please refer to our CVPR 2023 paper:
Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision
Fangqiang Ding, Andras Palffy, Dariu M. Gavrila, Chris Xiaoxuan Lu
[arXiv] [demo] [page] [supp]
- [2023-02] Our paper has been accepted to CVPR 2023.
- [2023-03] Our paper is now available on arXiv. Supplementary materials can be found here. Our project page is available here.
If you find our work useful in your research, please consider citing:
@inproceedings{ding2023hidden,
title={Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision},
author={Ding, Fangqiang and Palffy, Andras and Gavrila, Dariu M. and Lu, Chris Xiaoxuan},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
pages={1-10},
year={2023}
}
This work proposes a novel approach to 4D radar-based scene flow estimation via cross-modal learning. Our approach is motivated by the co-located sensing redundancy in modern autonomous vehicles. This redundancy implicitly provides various forms of supervision cues for radar scene flow estimation. Specifically, we introduce a multi-task model architecture for the identified cross-modal learning problem and propose loss functions that opportunistically engage multiple cross-modal constraints for effective model training. Extensive experiments show the state-of-the-art performance of our method and demonstrate the effectiveness of cross-modal supervised learning for inferring more accurate 4D radar scene flow. We also show its usefulness for two subtasks: motion segmentation and ego-motion estimation.
Below are some GIFs showing our qualitative results on scene flow estimation and the two subtasks, motion segmentation and ego-motion estimation. For more qualitative results, please refer to our demo video or supplementary materials.
Our code and models will be released by March 15th. Instructions on how to preprocess the data, train our models, and run evaluation will be provided in GETTING_STARTED.