
Full metadata record

DC Field | Value | Language
dc.contributor.author | 안용한 | -
dc.date.accessioned | 2024-04-22T05:01:12Z | -
dc.date.available | 2024-04-22T05:01:12Z | -
dc.date.issued | 2023-04 | -
dc.identifier.citation | Computers, Materials and Continua, v. 75, no. 3, pp. 4753-4766 | en_US
dc.identifier.issn | 1546-2218 | en_US
dc.identifier.issn | 1546-2226 | en_US
dc.identifier.uri | https://information.hanyang.ac.kr/#/eds/detail?an=000992762700010&dbId=edswsc | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/189919 | -
dc.description.abstract | Recently, convolutional neural network (CNN)-based visual inspection has been developed to detect defects on building surfaces automatically. The CNN model demonstrates remarkable accuracy in image data analysis; however, the predicted results have uncertainty in providing accurate information to users because of the "black box" problem in the deep learning model. Therefore, this study proposes a visual explanation method to overcome the uncertainty limitation of CNN-based defect identification. The visually representative gradient-weighted class activation mapping (Grad-CAM) method is adopted to provide visually explainable information. A visualizing evaluation index is proposed to quantitatively analyze visual representations; this index reflects a rough estimate of the concordance rate between the visualized heat map and the intended defects. In addition, an ablation study, adopting three-branch combinations with VGG16, is implemented to identify performance variations by visualizing the predicted results. Experiments reveal that the proposed model, combined with hybrid pooling, batch normalization, and multi-attention modules, achieves the best performance with an accuracy of 97.77%, corresponding to an improvement of 2.49% over the baseline model. Consequently, this study demonstrates that reliable results from an automatic defect classification model can be provided to an inspector through the visual representation of the predicted results using CNN models. | en_US
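The Grad-CAM method named in the abstract weights each convolutional feature map by the global-average-pooled gradient of the class score, sums the weighted maps, and rectifies the result. A minimal illustrative sketch of that computation in NumPy follows; the function name and the random toy inputs are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heat map from one convolutional layer.

    feature_maps: (K, H, W) activations A^k
    gradients:    (K, H, W) d(class score)/dA^k
    Returns a (H, W) non-negative heat map scaled to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over each channel
    alphas = gradients.mean(axis=(1, 2))                       # shape (K,)
    # ReLU of the alpha-weighted sum of feature maps
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # normalize for display; guard against an all-zero map
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# toy example with random activations and gradients (hypothetical data)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))
dA = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, dA)
print(heatmap.shape)  # (7, 7)
```

In practice the heat map is upsampled to the input-image resolution and overlaid on the inspected surface so that the highlighted region can be compared with the defect location.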
dc.description.sponsorship | This work was supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 22CTAP-C163951-02). | en_US
dc.language | en_US | en_US
dc.publisher | Tech Science Press | en_US
dc.relation.ispartofseries | v. 75, no. 3; 4753-4766 | -
dc.subject | Defect detection | en_US
dc.subject | visualization | en_US
dc.subject | class activation map | en_US
dc.subject | deep learning | en_US
dc.subject | explanation | en_US
dc.subject | visualizing evaluation index | en_US
dc.title | Visualization for Explanation of Deep Learning-Based Defect Detection Model Using Class Activation Map | en_US
dc.type | Article | en_US
dc.relation.no | 3 | -
dc.relation.volume | 75 | -
dc.identifier.doi | 10.32604/cmc.2023.038362 | en_US
dc.relation.page | 4753-4766 | -
dc.relation.journal | Computers, Materials and Continua | -
dc.contributor.googleauthor | Shin, Hyunkyu | -
dc.contributor.googleauthor | Ahn, Yonghan | -
dc.contributor.googleauthor | Song, Mihwa | -
dc.contributor.googleauthor | Gil, Heungbae | -
dc.contributor.googleauthor | Choi, Jungsik | -
dc.contributor.googleauthor | Lee, Sanghyo | -
dc.relation.code | 2023005559 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | -
dc.sector.department | SCHOOL OF ARCHITECTURE | -
dc.identifier.pid | yhahn | -
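The abstract also mentions a visualizing evaluation index that gives a rough estimate of the concordance rate between the heat map and the intended defect region. The record does not state the exact formula, so the sketch below uses a plausible stand-in: intersection-over-union between a thresholded heat map and a binary ground-truth defect mask. The function name, threshold, and toy arrays are all assumptions for illustration:

```python
import numpy as np

def concordance_rate(heatmap: np.ndarray, defect_mask: np.ndarray,
                     threshold: float = 0.5) -> float:
    """IoU-style overlap between a [0, 1] heat map and a binary defect
    mask; a hypothetical stand-in for the paper's evaluation index."""
    pred = heatmap >= threshold          # binarize the heat map
    truth = defect_mask.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# toy example: activation concentrated in the top-left quadrant
heat = np.zeros((4, 4)); heat[:2, :2] = 1.0
mask = np.zeros((4, 4)); mask[:2, :2] = 1.0; mask[3, 3] = 1.0
print(concordance_rate(heat, mask))  # 4 overlapping cells / 5 in union = 0.8
```

A score near 1.0 indicates that the model attends to the same region an inspector would mark as defective, which is the kind of reliability check the study's visual explanation is meant to support.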
Appears in Collections:
COLLEGE OF ENGINEERING SCIENCES[E](공학대학) > ARCHITECTURE(건축학부) > Articles
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
