A Real-time Vision System Architecture for Autonomous Driving Systems based on Multi-sensor Information Fusion

Title
A Real-time Vision System Architecture for Autonomous Driving Systems based on Multi-sensor Information Fusion
Other Titles
자율주행시스템을 위한 다중센서 정보융합기반 실시간 영상처리 시스템 구조
Author
이민채
Alternative Author(s)
Minchae Lee
Advisor(s)
선우명호
Issue Date
2014-02
Publisher
한양대학교
Degree
Doctor
Abstract
Autonomous driving technologies, the ultimate goal of intelligent vehicles, have been increasingly researched to achieve maximum safety and convenience, such as zero accidents and optimal traffic flow. This dissertation proposes a real-time vision system architecture for autonomous driving systems based on multi-sensor information fusion. To optimize real-time performance, the entire pipeline, from low-level image processing to object tracking and track management, is designed to minimize computation time while preserving detection rates.

The proposed architecture is organized as a hierarchy of algorithm levels. In the image processing stage, simplified edge detection and segmentation methods are proposed to detect low-level features such as lanes, crosswalk patterns, and colors. In the detection and recognition stage, information fusion with laser radar sensors and GPS is applied to template matching and learning-based object detection in order to reduce computation time and false positives. In the tracking and track management stage, a nearest-neighbor filter and a cascade particle filter are used according to the characteristics of each application.

From an algorithmic point of view, the proposed vision system consists of three main approaches: probabilistic feature tracking, feature matching, and segmentation and grouping. For probabilistic feature tracking, a cascade particle filter (CPF) is proposed to improve the robustness and computation time of the tracking system. A conventional particle filter cannot easily provide stable state estimation while maintaining a wide range of tracking coverage. The proposed CPF restructures the conventional particle filter by cascading multiple particle filters and decomposing the system models; the model decomposition scheme reduces the complexity and computation time of the filters. As a result, computation time is reduced by about 46%.

The proposed feature matching approach is a general object detection approach, applied to four detection and recognition systems: passengers, traffic lights, traffic signals, and parking signs. Template matching and AdaBoost-based feature detection methods are used for these applications; in particular, the learning-based object detection method is developed for the passenger detection system. Because conventional matching-based algorithms require high computation power, information fusion is applied to reduce computation time, decreasing it by about 62% to 70%.

The proposed segmentation and grouping methods are applied to the detection of crosswalks, barriers, and parking spaces. A graph-search-based method for repetitive pattern detection is proposed to reduce computation time and improve detection performance, and a pair searching and grouping algorithm is developed for vertical pattern detection. The pair searching sorts labeled objects and stores them in a horizontally ordered list, and the grouping algorithm produces multiple candidates by searching the sorted list for patterns. In experiments, computation time is decreased by about 36% to 60% compared with conventional feature-matching-based object detection methods.
Finally, the proposed real-time vision system architecture for autonomous driving systems was evaluated on various roads and under various environmental conditions with the autonomous vehicle. The proposed real-time vision system proved to be robust and fast enough to be applied to autonomous vehicles.
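
The cascade particle filter described above can be illustrated with a minimal sketch: instead of running one high-dimensional particle filter, the state is decomposed into low-dimensional sub-states, each estimated by its own small bootstrap particle filter run in sequence. The particular state split (position, then size), the random-walk motion model, the Gaussian likelihood, and all numerical parameters below are illustrative assumptions, not the models used in the dissertation.

# Minimal sketch of a cascade particle filter (CPF): the tracked state is
# decomposed into low-dimensional sub-states, each handled by a small
# bootstrap particle filter. Models and parameters are placeholders.
import numpy as np

class BootstrapPF:
    def __init__(self, n_particles, dim, motion_std, meas_std):
        self.particles = np.zeros((n_particles, dim))
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.motion_std = motion_std
        self.meas_std = meas_std

    def step(self, measurement):
        # Predict with a random-walk motion model (placeholder).
        self.particles += np.random.normal(0.0, self.motion_std, self.particles.shape)
        # Update with a Gaussian likelihood around the measurement (placeholder).
        err = np.linalg.norm(self.particles - measurement, axis=1)
        self.weights = np.exp(-0.5 * (err / self.meas_std) ** 2) + 1e-12
        self.weights /= self.weights.sum()
        # Multinomial resampling, then return the mean estimate.
        idx = np.random.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights[:] = 1.0 / len(self.particles)
        return self.particles.mean(axis=0)

class CascadePF:
    """Two cascaded 2-D filters instead of one 4-D filter over (x, y, w, h)."""
    def __init__(self):
        self.position_pf = BootstrapPF(200, dim=2, motion_std=2.0, meas_std=5.0)
        self.size_pf = BootstrapPF(100, dim=2, motion_std=1.0, meas_std=3.0)

    def track(self, position_meas, size_meas):
        # Stage 1 estimates the object position; stage 2 refines the object
        # size. In a real system the size measurement would be extracted from
        # the image region selected by the stage-1 estimate.
        position = self.position_pf.step(np.asarray(position_meas, dtype=float))
        size = self.size_pf.step(np.asarray(size_meas, dtype=float))
        return position, size

cpf = CascadePF()
print(cpf.track(position_meas=(120.0, 80.0), size_meas=(32.0, 48.0)))

Because the number of particles required grows quickly with state dimension, two small 2-D filters need far fewer particles than a single 4-D filter, which is the kind of saving behind the roughly 46% computation-time reduction reported above.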
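
The information fusion used to cut detection time can be sketched, under assumptions, as gating: laser radar object candidates are projected into the image to define a few regions of interest, and a GPS/map cue suppresses searches that are irrelevant at the current location, so the template matching or AdaBoost classifier runs on small regions rather than the whole frame. The projection function, ROI sizing, field names, and the dummy detector below are hypothetical placeholders, not the dissertation's interfaces.

# Illustrative sketch of sensor-fusion gating for detection: laser radar object
# candidates and a GPS/map cue restrict template/AdaBoost-style detection to a
# few regions of interest (ROIs) instead of scanning the whole image.
# project_to_image() and run_detector() are hypothetical placeholders.
import numpy as np

def project_to_image(radar_obj, image_shape):
    """Map a radar object (x: lateral m, y: longitudinal m) to an image ROI.
    A crude flat-ground pinhole projection is assumed purely for illustration."""
    h, w = image_shape[:2]
    u = int(w / 2 + radar_obj["x"] * 400.0 / max(radar_obj["y"], 1.0))
    v = int(h / 2 + 300.0 / max(radar_obj["y"], 1.0))
    half = int(1200.0 / max(radar_obj["y"], 1.0))          # ROI shrinks with range
    return (max(u - half, 0), max(v - half, 0),
            min(u + half, w), min(v + half, h))

def run_detector(image_roi):
    """Placeholder for template matching / AdaBoost classification on one ROI."""
    return image_roi.mean() > 128                           # dummy decision rule

def detect_with_fusion(image, radar_objects, near_intersection):
    detections = []
    for obj in radar_objects:
        # GPS/map cue: e.g. only look for traffic lights near an intersection.
        if obj.get("type_hint") == "traffic_light" and not near_intersection:
            continue
        x0, y0, x1, y1 = project_to_image(obj, image.shape)
        if x1 <= x0 or y1 <= y0:
            continue
        if run_detector(image[y0:y1, x0:x1]):
            detections.append((x0, y0, x1, y1))
    return detections

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
radar = [{"x": -1.5, "y": 20.0, "type_hint": "vehicle"},
         {"x": 0.5, "y": 35.0, "type_hint": "traffic_light"}]
print(detect_with_fusion(image, radar, near_intersection=True))

Restricting the classifier to a few fused ROIs avoids a full-frame search, which is the kind of saving behind the roughly 62% to 70% computation-time reduction reported above.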
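
The pair searching and grouping step for repetitive vertical patterns (e.g. crosswalk stripes) can be sketched as follows, assuming labeled blobs with a centroid x-coordinate and width are already available from segmentation: the blobs are sorted into a horizontally ordered list, and runs with similar width and nearly constant spacing become candidates. The tolerance values and field names are placeholders, not the dissertation's.

# Illustrative sketch of pair searching and grouping for repetitive patterns:
# labeled blobs are sorted into a horizontally ordered list, then runs of
# blobs with similar width and nearly constant spacing are grouped into
# candidates. Thresholds are placeholders.
def group_repetitive_patterns(blobs, spacing_tol=0.3, width_tol=0.4, min_count=3):
    """blobs: list of dicts with centroid 'cx' and 'width' (pixels)."""
    ordered = sorted(blobs, key=lambda b: b["cx"])          # pair searching: sort by x
    candidates, run = [], [ordered[0]] if ordered else []
    for prev, cur in zip(ordered, ordered[1:]):
        gap = cur["cx"] - prev["cx"]
        similar_width = abs(cur["width"] - prev["width"]) <= width_tol * prev["width"]
        if len(run) >= 2:
            ref_gap = run[1]["cx"] - run[0]["cx"]
            regular = abs(gap - ref_gap) <= spacing_tol * ref_gap
        else:
            regular = True
        if similar_width and regular:
            run.append(cur)
        else:
            if len(run) >= min_count:
                candidates.append(run)
            run = [cur]
    if len(run) >= min_count:
        candidates.append(run)
    return candidates

stripes = [{"cx": 100 + 40 * i, "width": 22 + (i % 2)} for i in range(6)]
noise = [{"cx": 400, "width": 80}]
for cand in group_repetitive_patterns(stripes + noise):
    print([b["cx"] for b in cand])

This sorted-list scan is a simplification of the graph-search formulation in the dissertation; it is meant only to show the pairing-then-grouping structure that underlies the reported 36% to 60% computation-time reduction relative to matching-based detection.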
URI
https://repository.hanyang.ac.kr/handle/20.500.11754/130699
http://hanyang.dcollection.net/common/orgView/200000423218
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > DEPARTMENT OF AUTOMOTIVE ENGINEERING(자동차공학과) > Theses (Ph.D.)
Files in This Item:
There are no files associated with this item.