IEEE Robotics and Automation Letters (2017): SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition

Abstract: Place recognition, or loop closure detection, is an essential component for addressing the problem of visual simultaneous localization and mapping (SLAM). Long-term navigation of robots in outdoor environments introduces new challenges for life-long SLAM, including the strong appearance changes caused by vegetation, weather, and illumination variations across times of the day, different days, months, or even seasons. In this paper, we propose a new shared representative appearance learning (SRAL) approach to address long-term visual place recognition. Different from previous methods that use a single feature modality or a concatenation of multiple features, our SRAL method autonomously learns representative features that are shared across all scene scenarios, and then fuses these features to represent the long-term appearance of environments observed by a robot during life-long navigation. By formulating SRAL as a regularized optimization problem, we use structured sparsity-inducing norms to model the interrelationships of feature modalities. In addition, an optimization algorithm with a theoretical convergence guarantee is developed to efficiently solve the formulated problem. An extensive empirical study was performed to evaluate the SRAL method on large-scale benchmarks, including the St Lucia, CMU-VL, and Nordland datasets. Experimental results show that our SRAL method obtains superior performance for life-long place recognition using individual images, outperforms previous single-image-based methods, and is capable of estimating the importance of feature modalities.
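
As a rough illustration of the kind of formulation described in the abstract (not the authors' implementation), the sketch below fuses several feature modalities by learning block-structured weights under an l2,1-style group-sparsity penalty, solved with plain proximal gradient descent. The function names, the least-squares loss, the synthetic data, and the parameter values are all illustrative assumptions; the paper's actual objective, norms, and solver should be taken from the publication itself.

```python
import numpy as np

def group_soft_threshold(v, t):
    # Block soft-thresholding: the proximal operator of t * ||v||_2.
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v

def learn_modality_weights(X_groups, y, lam=0.1, step=None, n_iter=500):
    """Toy group-sparse fusion of feature modalities (illustrative only).

    X_groups : list of (n_samples, d_g) arrays, one block per feature modality
    y        : (n_samples,) target, e.g. place-match scores
    Returns one weight block per modality; blocks driven to zero correspond
    to modalities the regularizer judges non-representative.
    """
    X = np.hstack(X_groups)
    sizes = [g.shape[1] for g in X_groups]
    idx = np.cumsum([0] + sizes)
    if step is None:
        # 1 / Lipschitz constant of the gradient of 0.5 * ||Xw - y||^2
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)          # gradient step on the smooth loss
        w = w - step * grad
        for k in range(len(sizes)):        # proximal step, block by block
            w[idx[k]:idx[k + 1]] = group_soft_threshold(
                w[idx[k]:idx[k + 1]], step * lam)
    return [w[idx[k]:idx[k + 1]] for k in range(len(sizes))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three hypothetical modalities (e.g. color, GIST, CNN descriptors).
    groups = [rng.standard_normal((200, d)) for d in (8, 16, 32)]
    y = groups[2] @ rng.standard_normal(32)   # only the third modality is informative
    blocks = learn_modality_weights(groups, y, lam=5.0)
    # Norms of the uninformative blocks shrink toward zero.
    print([float(np.linalg.norm(b)) for b in blocks])
```

The block structure of the penalty is what lets the optimization rank whole feature modalities rather than individual dimensions, which mirrors the abstract's claim that SRAL can estimate the importance of feature modalities.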

Han, F., Yang, X., Deng, Y., Rentschler, M., Yang, D., Zhang, H., "SRAL: Shared Representative Appearance Learning for Long-Term Visual Place Recognition," IEEE Robotics and Automation Letters, 2(2): 1172-1179, 2017.