Vision-included Multi Modal Navigation Solutions for Indoor Mobile Robots

In this thesis, vision-included multimodal navigation solutions for indoor mobile robots are discussed with a focus on practical usage, considering robustness, speed, accuracy, and cost. Human senses constitute a remarkably effective navigation system that processes visual information. This system, however, does not provide an accurate global metric map, which is often the main product of simultaneous localization and mapping (SLAM) systems. For instance, when a person describes a path, the instructions are not composed of geometric information—e.g., walk forward 32.15 m, rotate 34.78 degrees clockwise, and walk forward 45.8 m. An exact metric map, however, is not essential for following the correct path. We therefore designed a navigation system built on a vision-based egocentric topological map representation and implemented it on various robots and in various environments. First, we present a scene-guided navigation system that uses a place recognition technique and describe the functionalities of a vision system working only with the egocentric topological map. This system identifies similar places using vision-based place recognition, which employs line descriptors because of their high observability in indoor environments. Next, a path-matching model and a ground line-matching model are proposed to improve the accuracy of the recognition algorithm, which by itself does not estimate geometric feedback. In the path-matching model, the path is inferred by simulating it on a local grid map, while in the ground line-matching model, the motion between the matched and observed scenes is estimated under the assumption of a constant pose between the ground and the camera in an indoor space. We then present the confidence random tree-based path planning algorithm, which addresses the trade-off between probabilistic completeness and real-time performance, a primary issue in sampling-based path planners.
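The place recognition step described above could, under simplifying assumptions, look like the following minimal sketch. The function names, the descriptor representation (one vector per detected line), and the mutual-nearest-neighbour scoring are illustrative choices, not the thesis's actual algorithm:

```python
import numpy as np

def match_score(query_desc, place_desc, max_dist=0.5):
    """Count mutual nearest-neighbour matches between two sets of
    line descriptors (each row is one line's descriptor vector)."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(query_desc[:, None, :] - place_desc[None, :, :], axis=2)
    q2p = d.argmin(axis=1)  # best place line for each query line
    p2q = d.argmin(axis=0)  # best query line for each place line
    # A match counts only if it is mutual and sufficiently close.
    return sum(1 for i, j in enumerate(q2p)
               if p2q[j] == i and d[i, j] < max_dist)

def recognize_place(query_desc, database):
    """Return the key of the stored place whose descriptors best
    match the query scene."""
    return max(database, key=lambda k: match_score(query_desc, database[k]))

# Tiny synthetic example: the query is a noisy copy of the "hallway" scene.
rng = np.random.default_rng(0)
hallway = rng.normal(size=(5, 8))
lobby = rng.normal(size=(5, 8))
database = {"hallway": hallway, "lobby": lobby}
query = hallway + rng.normal(scale=0.01, size=hallway.shape)
best = recognize_place(query, database)
```

In a real system the descriptors would come from a line detector and descriptor extractor run on camera images, and each node of the egocentric topological map would store the descriptor set of its scene.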
When a solution path passes through a narrow entrance, sampling-based path planners must adapt by generating dense samples, which incurs a high computational cost. In open-space scenarios, by contrast, dense samples waste computing resources. In the worst case, when no solution path exists, the planner keeps generating samples at high computational cost. The proposed planner, which is particularly useful in broad-space scenarios, can indicate whether a solution path exists and therefore stop the sampling process early. The safety and the length of the path are also considered for robots that might share their workspace with humans;
this is especially relevant in commercial applications. We demonstrate a vision-included sensor-fusion navigation system consisting of a camera, a laser scanner, and an IMU. The vision-based egocentric topological map is used for navigation, avoiding obstacles while maintaining real-time performance. The software architecture is designed to be easily adaptable to various robot and sensor configurations: the data-gathering and motion-generation modules use a standard data format for convenience, while the core modules, such as Bayesian localization, are connected through efficient computing interfaces. Failure situations, such as wheel slip or sensor disconnection, can be detected, so a stand-alone robot navigation solution can be deployed by non-expert users. This system was implemented and tested on several types of mobile robots and sensors in both industrial and commercial applications.
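The Bayesian localization mentioned above can be illustrated with a generic discrete Bayes filter over the nodes of a topological map. This is a textbook predict-update cycle under assumed motion and observation models, not the thesis's specific implementation:

```python
import numpy as np

def bayes_update(belief, transition, likelihood):
    """One predict-update step of a discrete Bayes filter over map nodes.

    belief     : (N,) prior probability of being at each node
    transition : (N, N) motion model, transition[i, j] = P(next=j | now=i)
    likelihood : (N,) P(observation | node), e.g. place-recognition scores
    """
    predicted = belief @ transition     # prediction (motion) step
    posterior = predicted * likelihood  # measurement update
    return posterior / posterior.sum()  # normalise to a probability

# Example: three nodes in a corridor; the robot most likely advances
# one node per step, and the camera observation favours node 1.
belief = np.array([1.0, 0.0, 0.0])
transition = np.array([[0.1, 0.9, 0.0],
                       [0.0, 0.1, 0.9],
                       [0.0, 0.0, 1.0]])
likelihood = np.array([0.1, 0.8, 0.1])
posterior = bayes_update(belief, transition, likelihood)
```

In a fused system, the likelihood term is where camera, laser-scanner, and IMU evidence enter; a sharp drop in all likelihoods can also flag failure situations such as wheel slip or a disconnected sensor.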