Vehicle Detection and Localization using 3D LIDAR Point Cloud and Image Semantic Segmentation
- Barea, Rafael; Pérez, C.; Bergasa, Luis M.; López, Elena; Romera, E.; Molinos, Eduardo; Ocaña, Manuel; López, J.
- Year: 2018
- Type of Publication: In Proceedings
- Keywords: feedforward neural nets; image colour analysis; image fusion; image segmentation; object detection; object recognition; optical radar; radar imaging; stereo image processing; KITTI object detection benchmark; CNN; autonomous driving; vehicle detection; image semantic segmentation
- Book title: 2018 21st International Conference on Intelligent Transportation Systems (ITSC)
- Pages: 3481-3486
- Month: November
- ISSN: 2153-0017
- DOI: 10.1109/ITSC.2018.8569962
- Abstract:
- This paper presents a real-time approach to detecting and localizing surrounding vehicles in urban driving scenes. We propose a multimodal fusion framework that processes both the 3D LIDAR point cloud and the RGB image to obtain robust vehicle positions and sizes in a Bird's Eye View (BEV). Semantic segmentation of the RGB images is obtained using our efficient Convolutional Neural Network (CNN) architecture, ERFNet. Our proposal takes advantage of the accurate depth information provided by the LIDAR and the detailed semantic information extracted from the camera. The method has been tested on the KITTI object detection benchmark. Experiments show that our approach outperforms or is on par with other state-of-the-art proposals, even though our CNN was trained on a different dataset, demonstrating good generalization across domains, a key point for autonomous driving.
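- Note: the fusion idea described in the abstract (projecting LIDAR points into the camera image, labeling them with the semantic segmentation, and collapsing vehicle points into a BEV extent) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation; the projection matrix, class id, and function names are assumptions.

```python
import numpy as np

# Hypothetical sketch of LIDAR-camera fusion for BEV vehicle localization:
# 1) project 3D LIDAR points into the image plane with a calibration matrix,
# 2) look up each point's semantic class in the segmentation mask,
# 3) keep vehicle-labeled points and report their Bird's Eye View extent.
# All names and calibration values below are illustrative, not from the paper.

def project_to_image(points, P):
    """Project Nx3 points (camera frame) with a 3x4 projection matrix P."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous
    uvw = pts_h @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> pixel coordinates
    return uv, uvw[:, 2]           # (u, v) and depth

def vehicle_bev_box(points, P, seg_mask, vehicle_id=1):
    """Return (x_min, x_max, y_min, y_max) BEV extent of vehicle points."""
    uv, depth = project_to_image(points, P)
    h, w = seg_mask.shape
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    # Keep points in front of the camera that fall inside the image
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    is_vehicle = np.zeros(points.shape[0], dtype=bool)
    is_vehicle[valid] = seg_mask[v[valid], u[valid]] == vehicle_id
    veh = points[is_vehicle]
    if veh.size == 0:
        return None
    # BEV extent on the ground plane (x forward, y lateral)
    return veh[:, 0].min(), veh[:, 0].max(), veh[:, 1].min(), veh[:, 1].max()
```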