
Full metadata record

DC Field | Value | Language
dc.contributor.author | 정기석 | -
dc.date.accessioned | 2021-04-09T05:32:30Z | -
dc.date.available | 2021-04-09T05:32:30Z | -
dc.date.issued | 2020-02 | -
dc.identifier.citation | 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 8-13 | en_US
dc.identifier.isbn | 978-1-7281-4550-1 | -
dc.identifier.uri | https://ieeexplore.ieee.org/document/8999032 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/161309 | -
dc.description.abstract | In this paper, a new learning method is proposed to quantify data uncertainty in Single Image Super Resolution (SISR) without suffering from performance degradation. Our work is motivated by the fact that the loss designs for capturing uncertainty and for solving SISR are contradictory. To capture data uncertainty, the network output is often modeled with the negative log-likelihood (NLL) of a Gaussian distribution, in which the squared Euclidean distance is divided by a predictive variance, so that images with high variance have less impact on training. In the SISR domain, on the other hand, recent works give more weight to the loss of challenging images to improve performance by using attention models. This conflict must be resolved to make neural networks capable of predicting the uncertainty of a super-resolved image without performance degradation. Therefore, we propose a method called the Gradient Rescaling Attention Model (GRAM) that combines both attempts effectively. Since variance may reflect the difficulty of an image, we rescale the gradient of the NLL by the degree of variance. Hence, the neural network can focus on the challenging images, similarly to attention models. We evaluate performance on standard SISR benchmarks in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The experimental results show that the proposed gradient rescaling method causes negligible performance degradation compared to SISR outputs trained with the Euclidean loss, whereas NLL without attention degrades the SR quality. | en_US
dc.description.sponsorship | This research was funded and conducted under the Competency Development Program for Industry Specialists of the Korean Ministry of Trade, Industry and Energy (MOTIE), operated by the Korea Institute for Advancement of Technology (KIAT). (No. N0001883, HRD program for Intelligent semiconductor Industry) | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | Image restoration | en_US
dc.subject | Neural networks | en_US
dc.subject | Machine learning | en_US
dc.title | GRAM: Gradient Rescaling Attention Model for Data Uncertainty Estimation in Single Image Super Resolution | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICMLA.2019.00011 | -
dc.relation.page | 8-13 | -
dc.contributor.googleauthor | Lee, Changwoo | -
dc.contributor.googleauthor | Chung, Ki-Seok | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF ELECTRONIC ENGINEERING | -
dc.identifier.pid | kchung | -
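The abstract describes a tension between the Gaussian NLL, whose gradient divides the residual by the predictive variance (downweighting uncertain images), and attention-style SISR training, which upweights difficult images. A minimal NumPy sketch of this idea, assuming a per-pixel Gaussian NLL and a hypothetical rescaling that multiplies the residual gradient by the predicted variance (the paper's exact GRAM formulation may differ):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-pixel Gaussian negative log-likelihood (up to a constant)."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

def nll_grad_mu(y, mu, log_var):
    """d(NLL)/d(mu): residual divided by variance, so high-variance
    pixels contribute *less* gradient (the usual NLL behaviour)."""
    return (mu - y) / np.exp(log_var)

def gram_grad_mu(y, mu, log_var):
    """Hypothetical gradient rescaling: multiply the NLL gradient by
    the variance so uncertain (difficult) pixels regain influence,
    mimicking attention. Here the 1/variance factor cancels, leaving
    a plain residual, as with the Euclidean loss."""
    return nll_grad_mu(y, mu, log_var) * np.exp(log_var)

y = np.array([1.0, 1.0])        # ground-truth pixels
mu = np.array([0.0, 0.0])       # predicted mean
log_var = np.array([0.0, 2.0])  # second pixel: high uncertainty

g_nll = nll_grad_mu(y, mu, log_var)    # NLL shrinks pixel 2's gradient
g_gram = gram_grad_mu(y, mu, log_var)  # rescaling restores it
```

In this toy form, rescaling makes the mean's gradient identical to the plain residual of the Euclidean loss, which is consistent with the abstract's claim that GRAM matches Euclidean-loss quality while the NLL term still trains the variance head.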
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRONIC ENGINEERING(융합전자공학부) > Articles
Files in This Item:
There are no files associated with this item.


