Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 안용한 | - |
dc.date.accessioned | 2024-04-22T05:01:12Z | - |
dc.date.available | 2024-04-22T05:01:12Z | - |
dc.date.issued | 2023-04 | - |
dc.identifier.citation | Computers, Materials and Continua, v. 75, NO 3, Page. 4753-4766 | en_US |
dc.identifier.issn | 1546-2218 | en_US |
dc.identifier.issn | 1546-2226 | en_US |
dc.identifier.uri | https://information.hanyang.ac.kr/#/eds/detail?an=000992762700010&dbId=edswsc | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/189919 | - |
dc.description.abstract | Recently, convolutional neural network (CNN)-based visual inspection has been developed to detect defects on building surfaces automatically. The CNN model demonstrates remarkable accuracy in image data analysis; however, the predicted results carry uncertainty in providing accurate information to users because of the “black box” problem in deep learning models. Therefore, this study proposes a visual explanation method to overcome this uncertainty limitation of CNN-based defect identification. The visually representative gradient-weighted class activation mapping (Grad-CAM) method is adopted to provide visually explainable information. A visualizing evaluation index is proposed to quantitatively analyze visual representations; this index gives a rough estimate of the concordance rate between the visualized heat map and the intended defects. In addition, an ablation study, adopting three branch-module combinations with VGG16, is implemented to identify performance variations by visualizing the predicted results. Experiments reveal that the proposed model, combined with hybrid pooling, batch normalization, and multi-attention modules, achieves the best performance with an accuracy of 97.77%, an improvement of 2.49% over the baseline model. Consequently, this study demonstrates that reliable results from an automatic defect classification model can be provided to an inspector through the visual representation of the predicted results of CNN models. | en_US |
dc.description.sponsorship | This work was supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 22CTAP-C163951-02). | en_US |
dc.language | en_US | en_US |
dc.publisher | Tech Science Press | en_US |
dc.relation.ispartofseries | v. 75, NO 3;4753-4766 | - |
dc.subject | Defect detection | en_US |
dc.subject | visualization | en_US |
dc.subject | class activation map | en_US |
dc.subject | deep learning | en_US |
dc.subject | explanation | en_US |
dc.subject | visualizing evaluation index | en_US |
dc.title | Visualization for Explanation of Deep Learning-Based Defect Detection Model Using Class Activation Map | en_US |
dc.type | Article | en_US |
dc.relation.no | 3 | - |
dc.relation.volume | 75 | - |
dc.identifier.doi | 10.32604/cmc.2023.038362 | en_US |
dc.relation.page | 4753-4766 | - |
dc.relation.journal | Computers, Materials and Continua | - |
dc.contributor.googleauthor | Shin, Hyunkyu | - |
dc.contributor.googleauthor | Ahn, Yonghan | - |
dc.contributor.googleauthor | Song, Mihwa | - |
dc.contributor.googleauthor | Gil, Heungbae | - |
dc.contributor.googleauthor | Choi, Jungsik | - |
dc.contributor.googleauthor | Lee, Sanghyo | - |
dc.relation.code | 2023005559 | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF ENGINEERING SCIENCES[E] | - |
dc.sector.department | SCHOOL OF ARCHITECTURE | - |
dc.identifier.pid | yhahn | - |
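The abstract describes Grad-CAM heat maps and a "visualizing evaluation index" estimating the concordance rate between the heat map and annotated defects. A minimal NumPy sketch of the standard Grad-CAM computation and an illustrative overlap index follows; the function names and the concordance formula are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one convolutional layer.

    activations: (K, H, W) feature maps A^k
    gradients:   (K, H, W) gradients of the class score w.r.t. A^k
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                       # (K,)
    # Weighted sum over channels, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for display (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

def concordance_rate(cam, defect_mask, threshold=0.5):
    """Illustrative overlap index: fraction of 'hot' heat-map pixels that
    fall inside the annotated defect region (hypothetical formula)."""
    hot = cam >= threshold
    return (hot & defect_mask).sum() / max(hot.sum(), 1)
```

In practice the activations and gradients would be captured from a trained CNN (e.g. a VGG16 backbone) via framework hooks, and the heat map upsampled to the input-image resolution before overlaying.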