Abstract
With the availability of many datasets tailored for autonomous driving in real-world urban scenes, semantic segmentation for urban driving scenes has achieved significant progress. However, semantic segmentation for off-road, unstructured environments has not been widely studied. Directly applying existing segmentation networks often results in degraded performance, as they cannot overcome intrinsic problems of such environments, such as illumination changes. In this paper, a built-in memory module for semantic segmentation is proposed to overcome these problems. The memory module stores significant representations of training images as memory items. Whereas the encoder merely embeds similar items close together, the proposed memory module is specifically designed to cluster instances of the same class even when there are significant variances among their embedded features; it therefore enables segmentation networks to better handle unexpected illumination changes. A triplet loss is used in training to minimize redundancy among the discriminative representations stored in the memory module. The proposed memory module is general, so it can be adopted in a variety of networks. We conduct experiments on the Robot Unstructured Ground Driving (RUGD) dataset and the RELLIS dataset, both collected in off-road, unstructured natural environments. Experimental results show that the proposed memory module improves the performance of existing segmentation networks and helps capture unclear objects across various off-road, unstructured natural scenes at equivalent computational cost and with an equivalent number of network parameters. As the proposed method can be integrated into compact networks, it presents a viable approach for resource-limited small autonomous platforms.
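The two ingredients named in the abstract, a memory module read by feature similarity and a triplet loss that keeps the stored items discriminative, can be sketched roughly as follows. This is a minimal NumPy illustration under common memory-network conventions; the function names, the softmax-over-cosine-similarity read, the item count, and the margin value are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def read_memory(query, memory):
    """Retrieve a representation for each query feature as a
    softmax-weighted combination of memory items, weighted by
    cosine similarity (a common memory-read scheme; assumed here)."""
    q = query / np.linalg.norm(query, axis=-1, keepdims=True)
    m = memory / np.linalg.norm(memory, axis=-1, keepdims=True)
    sim = q @ m.T                                        # (N, K) cosine similarities
    w = np.exp(sim) / np.exp(sim).sum(axis=-1, keepdims=True)
    return w @ memory                                    # (N, D) retrieved features

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive
    item and push it from the negative item by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy usage: 4 query features, 8 memory items, 16-dim embeddings.
rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 16))
memory = rng.normal(size=(8, 16))
retrieved = read_memory(queries, memory)
print(retrieved.shape)  # (4, 16)
```

In this sketch, separating same-class retrieval (the read) from the redundancy penalty (the triplet loss) mirrors the abstract's division of labor: the read clusters instances of a class onto shared items, while the loss keeps distinct items apart.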
Original language | English |
---|---|
Title of host publication | IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 24-31 |
Number of pages | 8 |
ISBN (Electronic) | 9781665417143 |
DOIs | |
Publication status | Published - 2021 |
Event | 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021 - Prague, Czech Republic Duration: 2021 Sept 27 → 2021 Oct 1 |
Publication series
Name | IEEE International Conference on Intelligent Robots and Systems |
---|---|
ISSN (Print) | 2153-0858 |
ISSN (Electronic) | 2153-0866 |
Conference
Conference | 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2021 |
---|---|
Country/Territory | Czech Republic |
City | Prague |
Period | 21/9/27 → 21/10/1 |
Bibliographical note
Funding Information: This work was supported by the Air Force Office of Scientific Research under award number FA2386-19-1-4001.
Author affiliations: 1) Youngsaeng Jin and Hanseok Ko are with the School of Electrical Engineering, Korea University, Seoul 136-713, South Korea ([email protected]; [email protected]). 2) David K. Han is with Electrical and Computer Engineering, Drexel University, Philadelphia, PA 19104 ([email protected]).
Fig. 1 caption: Image samples from the RUGD dataset [19], collected from off-road, unstructured environments. The numbers below each image are the average pixel intensity of the categories 'sky' and 'tree'. Images in this dataset cover a wide range of scenes and their illumination is inconsistent.
Publisher Copyright:
© 2021 IEEE.
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- Computer Vision and Pattern Recognition
- Computer Science Applications