Semantic Mapping with Omnidirectional Vision
By:
Posada, Luis Felipe, Velasquez-Lopez, Alejandro, Hoffmann, Frank, Bertram, Torsten
Published:
1 Jan 2018
Abstract:
This paper presents a purely visual semantic mapping framework using omnidirectional images. The approach rests upon a robust segmentation of the robot's local free space, replacing conventional range sensors for the generation of occupancy grid maps. The perceptions are mapped into a bird's-eye view by removing the non-linear distortions of the omnidirectional camera mirror, which allows the inverse sensor model to be applied directly. The system relies on a place category classifier to label the navigation-relevant categories: room, corridor, doorway, and open room. Each place class maintains a separate grid map, and these are fused with the range-based occupancy grid to build a dense semantic map.
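The fusion step described in the abstract can be illustrated with a minimal sketch: a log-odds occupancy grid updated from the segmented free space, plus one accumulator grid per place class that is combined with the occupancy estimate into a dense semantic label map. This is not the authors' implementation; all function names, masks, and constants below are assumptions.

import numpy as np

# Assumed place categories from the abstract.
PLACE_CLASSES = ["room", "corridor", "doorway", "open_room"]
L_FREE, L_OCC = -0.4, 0.9  # assumed log-odds increments

def update_occupancy(log_odds, free_mask, boundary_mask):
    """Inverse sensor model on the bird's-eye view: cells inside the
    segmented free space become more likely free, cells on its
    boundary more likely occupied."""
    log_odds[free_mask] += L_FREE
    log_odds[boundary_mask] += L_OCC
    return log_odds

def update_semantic(class_grids, free_mask, class_probs):
    """Accumulate the place classifier's soft label over the
    currently observed free space, one grid per class."""
    for c, p in class_probs.items():
        class_grids[c][free_mask] += p
    return class_grids

def fuse(log_odds, class_grids, occ_thresh=0.0):
    """Label each free cell with its best-supported place class;
    occupied or unknown cells stay unlabeled (-1)."""
    stacked = np.stack([class_grids[c] for c in PLACE_CLASSES])
    labels = np.argmax(stacked, axis=0)
    labels[log_odds >= occ_thresh] = -1
    return labels

# Example usage on a hypothetical 200x200 grid with one observation:
grid_shape = (200, 200)
log_odds = np.zeros(grid_shape)
class_grids = {c: np.zeros(grid_shape) for c in PLACE_CLASSES}
free = np.zeros(grid_shape, bool); free[90:110, 90:110] = True
boundary = np.zeros(grid_shape, bool); boundary[89, 90:110] = True
log_odds = update_occupancy(log_odds, free, boundary)
class_grids = update_semantic(class_grids, free, {"corridor": 0.8, "room": 0.2})
semantic_map = fuse(log_odds, class_grids)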
Affiliations:
Posada, Luis Felipe:
Univ EAFIT, Design Engn Res Grp GRID, Medellin, Colombia
Velasquez-Lopez, Alejandro:
Univ EAFIT, Design Engn Res Grp GRID, Medellin, Colombia
Hoffmann, Frank:
Tech Univ Dortmund, Inst Control Theory & Syst Engn, D-44227 Dortmund, Germany
Bertram, Torsten:
Tech Univ Dortmund, Inst Control Theory & Syst Engn, D-44227 Dortmund, Germany