Study of ADAS Sensor Modeling and Performance considering Rainfall Disturbance

Title
Study of ADAS Sensor Modeling and Performance considering Rainfall Disturbance
Other Titles
강우 외란을 고려한 ADAS 센서 모델링 및 성능 연구
Author
변미정
Alternative Author(s)
변미정
Advisor(s)
윤상원
Issue Date
2020-02
Publisher
한양대학교
Degree
Master
Abstract
Research and development of self-driving cars is actively underway around the world. The Advanced Driver Assistance System (ADAS) is a key technology for self-driving cars, and sensors are one of its core components. Before a fully self-driving car can be developed, sensors that accurately recognize the driving situation and the environment around the vehicle must be developed first. Typical ADAS sensors include cameras, radar (Radio Detection and Ranging), and LiDAR (Light Detection and Ranging).
The camera is a sensor that obtains external image information while the vehicle is driving. It is divided into mono and stereo types depending on the number of lenses: the mono type uses one lens and the stereo type uses two. Stereo cameras are currently the trend because two lenses allow 3D stereoscopic images to be obtained. Cameras are therefore essential for implementing ADAS functions such as Autonomous Emergency Braking (AEB), because depth perception makes them useful for recognizing pedestrians. In addition, recognition of lanes, road markers, and guide marks can be used to localize not only the host vehicle but also surrounding vehicles. However, cameras are vulnerable to environmental changes and have the disadvantage of requiring substantial computing resources because of the volume and complexity of the image data [1, 2].
Radar is a sensor already widely commercialized in production vehicles. It emits electromagnetic waves, receives the waves reflected back from an object, and measures the distance to that object. Unlike cameras, radar is robust to environmental conditions and offers a long detection range. However, its ability to distinguish between different objects within the detection range is low, so it is better suited to detecting long-range obstacles than to discriminating between objects.
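The ranging principle shared by radar and LiDAR, as described above, can be illustrated with a minimal sketch. This is a generic time-of-flight calculation, not taken from the thesis; the function name and the example timing value are illustrative.

```python
# Hypothetical illustration (not from the thesis): radar and LiDAR both
# estimate range from the round-trip time of an emitted pulse.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_round_trip_s: float) -> float:
    """Target range in metres: the pulse travels to the target and back,
    so the one-way distance is c * t / 2."""
    return C * t_round_trip_s / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m.
print(range_from_round_trip(1e-6))
```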
Finally, LiDAR is a sensor that uses laser pulses to measure the distance to objects around the host vehicle and to distinguish between them. LiDAR is commonly fused with GPS, a satellite navigation system, and an IMU, an inertial sensor, to generate precise 3D maps. As with a camera, the information obtained through LiDAR can be stored as map data, so the location of the host vehicle can be recognized during autonomous driving by comparing real-time measurements against the stored map [3]. However, while LiDAR can represent the environment in 3D as shown in Figure 1-1, it has difficulty distinguishing the color of an object, and its performance is vulnerable to the external environment.
Because each sensor has different operating principles and characteristics, no single sensor can guarantee perfect perception in autonomous driving. Therefore, self-driving cars fuse multiple sensors, for example combining radar and camera. In terms of performance under external conditions, as shown in Figure 1-2, each ADAS perception sensor behaves differently under environments such as rain, snow, and fog [4]. Radar performs robustly in rain and snow, but cameras and LiDAR are vulnerable to harsh environments. In particular, for LiDAR, as precipitation increases, the scattering produced by raindrops increases and performance degrades dramatically. Therefore, various countermeasures are being studied, such as using a 1550 nm wavelength instead of 905 nm or complementing LiDAR with other sensors such as radar.
In this work, rain characteristics are determined using rain data measured at three different locations, taken from previous reports. The regional characteristics provide raindrop sizes, numbers, and shapes as functions of the rain rate.
The regional rain distributions are defined by constrained gamma models, and the rain-induced attenuation is expressed using Mie scattering theory. The distributions are converted into an extinction coefficient, which is incorporated into a custom-built LiDAR model, and LiDAR performance is compared by region. In addition, the LiDAR outputs are fed into a vehicle driving simulator, and their impact on the system is quantitatively analyzed.
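The pipeline described above, converting a gamma drop-size distribution into an extinction coefficient, can be sketched as follows. This is an illustrative simplification, not the thesis's model: at a 905 nm LiDAR wavelength, raindrops are far larger than the wavelength, so the full Mie extinction efficiency is approximated here by its geometric-optics limit Q_ext ≈ 2, and the distribution parameters (a Marshall-Palmer-like exponential, i.e. a gamma distribution with shape μ = 0, for roughly 10 mm/h rain) are placeholders rather than the thesis's regional fits.

```python
import math

def gamma_dsd(d_mm: float, n0: float, mu: float, lam: float) -> float:
    """Gamma drop-size distribution N(D) = N0 * D^mu * exp(-lam * D),
    in drops per m^3 per mm of diameter (D in mm)."""
    return n0 * d_mm**mu * math.exp(-lam * d_mm)

def extinction_coefficient(n0: float, mu: float, lam: float,
                           d_max_mm: float = 8.0, steps: int = 2000) -> float:
    """Extinction coefficient alpha (1/m): numerically integrate
    Q_ext * (pi/4) * D^2 * N(D) dD over drop diameter.  Q_ext ~ 2 is the
    large-sphere limit, valid because raindrops >> the 905 nm wavelength."""
    q_ext = 2.0
    dd = d_max_mm / steps
    alpha = 0.0
    for i in range(steps):            # midpoint-rule integration
        d = (i + 0.5) * dd
        area_m2 = math.pi / 4.0 * (d * 1e-3) ** 2   # drop cross-section, m^2
        alpha += q_ext * area_m2 * gamma_dsd(d, n0, mu, lam) * dd
    return alpha

# Placeholder parameters: N0 = 8000 m^-3 mm^-1, mu = 0 (exponential DSD),
# lam ~ 2.55 mm^-1, roughly corresponding to 10 mm/h rain.
alpha = extinction_coefficient(n0=8000.0, mu=0.0, lam=2.55)
print(f"alpha = {alpha:.5f} 1/m, one-way attenuation = "
      f"{10 * math.log10(math.e) * alpha * 1000:.2f} dB/km")
```

An attenuation of a few dB/km at moderate rain rates is the kind of quantity that, once expressed as an extinction coefficient, can be incorporated into a LiDAR range-equation model and compared across regional rain distributions.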
URI
https://repository.hanyang.ac.kr/handle/20.500.11754/123386
http://hanyang.dcollection.net/common/orgView/200000437074
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > AUTOMOTIVE ENGINEERING(미래자동차공학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.
