
Full metadata record

DC Field | Value | Language
dc.contributor.author | 고현석 (Ko, Hyunsuk) | -
dc.date.accessioned | 2023-12-22T01:48:29Z | -
dc.date.available | 2023-12-22T01:48:29Z | -
dc.date.issued | 2023-08 | -
dc.identifier.citation | Journal of Marine Science and Engineering, v. 11, no. 8, article no. 1584, pp. 1-21 | -
dc.identifier.issn | 2077-1312 | -
dc.identifier.uri | https://www.mdpi.com/2077-1312/11/8/1584 | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/187829 | -
dc.description.abstract | In submarine warfare systems, passive SONAR is commonly used to detect enemy targets while concealing one's own submarine. The bearing information of a target obtained from passive SONAR can be accumulated over time and visually represented as a two-dimensional image known as a bearing–time record (BTR) image. Accurate measurement of bearing–time information is crucial for obtaining precise information on enemy targets. However, various underwater environmental noises keep signal reception rates low, which makes it challenging to detect the directional angle of enemy targets from noisy BTR images. In this paper, we propose a deep-learning-based segmentation network for BTR images to improve the accuracy of enemy detection in underwater environments. Specifically, we utilized a spatial convolutional layer to effectively extract target objects. Additionally, we propose novel loss functions for network training to resolve the strong class imbalance observed in BTR images. Furthermore, because actual target bearing data are difficult to obtain as military information, we created a synthesized BTR dataset that simulates various underwater scenarios. We conducted comprehensive experiments and related discussions using our synthesized BTR dataset, which demonstrate that the proposed network provides superior target segmentation performance compared to state-of-the-art methods. © 2023 by the authors. | -
dc.description.sponsorship | This work was supported by a Korea Research Institute for defense Technology planning and advancement (KRIT) grant funded by the Korean Government Defense Acquisition Program Administration (No. KRIT-CT-22-023-01, Deep Learning Technology for Detection & Target Tracking in low SNR, 2023). | -
dc.language | en | -
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | -
dc.subject | bearing–time record image | -
dc.subject | class imbalance | -
dc.subject | deep-learning-based image segmentation | -
dc.subject | network training loss function | -
dc.subject | passive SONAR | -
dc.title | Target Tracking from Weak Acoustic Signals in an Underwater Environment Using a Deep Segmentation Network | -
dc.type | Article | -
dc.relation.no | 8 | -
dc.relation.volume | 11 | -
dc.identifier.doi | 10.3390/jmse11081584 | -
dc.relation.page | 1-21 | -
dc.relation.journal | Journal of Marine Science and Engineering | -
dc.contributor.googleauthor | Shin, Won | -
dc.contributor.googleauthor | Kim, Da-Sol | -
dc.contributor.googleauthor | Ko, Hyunsuk | -
dc.sector.campus | E | -
dc.sector.daehak | 공학대학 (College of Engineering) | -
dc.sector.department | 전자공학부 (Department of Electronic Engineering) | -
dc.identifier.pid | hyunsuk | -
dc.identifier.article | 1584 | -
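
The abstract above highlights two practical points: the BTR data were synthesized because real bearing recordings are classified military information, and the resulting images show a strong class imbalance (target pixels are rare). The toy NumPy sketch below is purely illustrative and is not the authors' data-generation procedure; the function name, target trajectory, beam width, and SNR values are invented here only to show roughly what a bearing–time record image and its segmentation mask look like.

import numpy as np

def make_toy_btr(n_time=256, n_bearing=360, snr_db=-5.0, seed=0):
    """Return a (noisy BTR image, binary target mask) pair. Hypothetical toy data only."""
    rng = np.random.default_rng(seed)

    # Slowly varying target bearing over time (degrees); a made-up trajectory.
    t = np.arange(n_time)
    bearing = 180.0 + 60.0 * np.sin(2.0 * np.pi * t / n_time) + rng.normal(0.0, 0.5, n_time)

    # Spread the target energy over a few degrees around the true bearing (Gaussian beam response).
    grid = np.arange(n_bearing)[None, :]              # shape (1, n_bearing)
    diff = grid - bearing[:, None]                    # shape (n_time, n_bearing)
    beam_width = 3.0
    signal = np.exp(-0.5 * (diff / beam_width) ** 2)

    # Additive background noise scaled to the requested SNR.
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), signal.shape)

    btr = signal + np.abs(noise)                      # non-negative intensity image
    mask = (signal > 0.5).astype(np.uint8)            # ground-truth target pixels

    return btr.astype(np.float32), mask

btr, mask = make_toy_btr()
print(btr.shape, mask.mean())   # mask.mean() is tiny -> strong class imbalance

The near-zero fraction of positive pixels printed by `mask.mean()` is exactly the class imbalance that motivates the specialized training loss functions mentioned in the abstract.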

