
Title
Robust Blind-Spot Detection Method Using Rearview Fisheye Camera Through Viewpoint Transformation
Other Titles
후방 어안 카메라의 시점 변환을 통한 강인한 사각지대 차량 검출 방법
Author
이홍준
Alternative Author(s)
이홍준
Advisor(s)
김회율
Issue Date
2021. 2
Publisher
한양대학교 (Hanyang University)
Degree
Doctor
Abstract
Many accidents occur during lane changes because of blind spots on the rear sides of vehicles. To address this problem, many systems for detecting vehicles in the blind spot have been developed; radar-based systems in particular are widely commercialized. However, radar-based blind-spot detection (BSD) systems frequently raise false alarms because of their low detection performance and poor location-measurement accuracy. Many camera-based systems are therefore under development. Camera-based systems take one of two approaches: two dedicated cameras that directly capture the blind spots, or the rear fisheye camera already installed for parking assistance. Using the rear camera is advantageous for commercialization, but lens distortion severely deforms the appearance of vehicles in the image.

This thesis proposes a method for robustly detecting vehicles in the blind spot by transforming the viewpoint of the rear fisheye camera into that of a virtual camera. The virtual camera is oriented perpendicular to the rear fisheye camera, and the rear image is projected onto the image plane of the virtual camera. In the viewpoint-transformed image (the so-called side-rectilinear image), the tire size and the distance between the front and rear tires are constant regardless of the vehicle's location in the blind spot. The proposed method therefore detects the front and rear tires in the side-rectilinear image and detects vehicles by combining the detected tires. Additionally, vehicle detection is performed in the rear fisheye image so that distant vehicles can also be detected, and a method for fusing the two types of detection results is proposed. Furthermore, a generative adversarial network (GAN)-based data augmentation framework is proposed to improve the nighttime performance of vehicle detectors operating on side-rectilinear images. The GAN is trained on a publicly available database, so the framework requires no additional data acquisition to improve nighttime performance.

To evaluate the proposed method, detection accuracy was measured on various images acquired from an actual vehicle, and performance was compared against a radar-based system using LiDAR data as the ground truth. The proposed detector achieved a recall of 92% in the side-rectilinear image and a vehicle detection rate of 79.7% in the rear image. Although the recall in the rear image was lower, it did not degrade the BSD system, because rear-image detection only assists the detection in the side-rectilinear image. The evaluation also verified that the location error of vehicles in the blind spot was significantly lower than that of the radar-based system. To verify the effect of the data augmentation framework, the algorithm was applied to a nighttime image dataset; with augmentation, the recall was twice that obtained by training on the daytime image dataset alone.
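The viewpoint transformation at the core of the abstract can be sketched as a pixel remap from a virtual side-facing rectilinear camera into the fisheye image. The sketch below assumes an equidistant fisheye model (r = f·θ) and purely illustrative intrinsics and rotation; the thesis's actual calibration, projection model, and virtual-camera placement are not specified in the abstract.

```python
import numpy as np

def side_rectilinear_map(out_w, out_h, f_virtual, f_fisheye, cx, cy, R):
    """Build a remap table from a virtual side-facing rectilinear camera
    to a fisheye image under an equidistant model (r = f * theta).

    R rotates rays from the virtual camera frame into the fisheye camera
    frame (e.g. a 90-degree yaw for a sideways virtual view). All
    parameters here are illustrative, not the thesis's calibration.
    """
    # Pixel grid of the virtual (rectilinear) output image
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    # Back-project each output pixel to a unit-norm ray in the virtual frame
    x = (u - out_w / 2.0) / f_virtual
    y = (v - out_h / 2.0) / f_virtual
    rays = np.stack([x, y, np.ones_like(x)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate rays into the fisheye camera frame
    rays = rays @ R.T
    # Equidistant fisheye projection: image radius proportional to the
    # angle between the ray and the optical axis
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fisheye * theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return map_x, map_y

# 90-degree yaw: the virtual camera looks to the side of the rear camera
yaw = np.deg2rad(90.0)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
map_x, map_y = side_rectilinear_map(640, 480, 400.0, 300.0, 640.0, 480.0, R)
# The maps can then be applied with
# cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)
```

Because the virtual camera views the adjacent lane side-on at a fixed geometry, features such as tire diameter and wheelbase map to near-constant pixel sizes in the remapped image, which is what makes the fixed-scale tire detection described above feasible.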
URI
https://repository.hanyang.ac.kr/handle/20.500.11754/159159
http://hanyang.dcollection.net/common/orgView/200000485705
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > DEPARTMENT OF ELECTRONIC ENGINEERING(융합전자공학과) > Theses (Ph.D.)
Files in This Item:
There are no files associated with this item.
