
Full metadata record

DC Field: Value (Language)
dc.contributor.advisor: 조인휘
dc.contributor.author: 안준항
dc.date.accessioned: 2022-09-27T16:03:19Z
dc.date.available: 2022-09-27T16:03:19Z
dc.date.issued: 2022. 8
dc.identifier.uri: http://hanyang.dcollection.net/common/orgView/200000626649 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/174214
dc.description.abstract: As deep learning models develop, more and more deep learning researchers believe that they can perform well in tasks such as computer vision and anomaly detection. However, the complex parameters of deep learning models make it impossible for their users to grasp how the models arrive at their predictions, so most companies still lack trust in them. These issues have made it vital to explain deep learning models. Current artificial intelligence explanation approaches, on the other hand, are limited in describing both the inference process and the model's prediction outputs: they normally display only the components of the model that are critical to its predictions. Because there is no way to interact with the explanation, it is difficult to validate and comprehend how the model works, which poses a significant risk when employing the model. The explanation also overlooks the semantics of the task at hand, exacerbating the problem. In this paper, we propose Specific-Input LIME, a novel XAI method for explaining deep learning models trained on tabular data that combines a Specific-Input process with the LIME method. The Specific-Input process uses feature importance and the PDP (partial dependence plot) to determine 'WHAT' features influence the deep learning model and 'HOW' they do so. To give a more detailed explanation, the Specific-Input LIME method replaces some of the procedures in the LIME method with feature importance and the PDP. In our experiments we use deep learning models trained on tabular data as the black-box model. We first obtain a basic explanation of the data by simulating the model's behavior, and then use our method to learn 'WHAT' features the deep learning model focuses on and 'HOW' they affect its predictions. Analysis of the experimental results shows that our method yields a more detailed explanation than the LIME method and complements it in use. (An illustrative sketch of this pipeline follows the metadata record below.)
dc.publisher: 한양대학교
dc.title: Specific Input-LIME Explanations for Tabular Data based on Deep Learning Models
dc.type: Theses
dc.contributor.googleauthor: 안준항
dc.sector.campus: S
dc.sector.daehak: 대학원
dc.sector.department: 컴퓨터·소프트웨어학과
dc.description.degree: Master
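
This record contains no code; the following is a minimal, hypothetical Python sketch of the pipeline the abstract describes: rank features by importance ('WHAT'), inspect their partial dependence ('HOW'), then run LIME on the instance. The function `specific_input_lime`, the `top_k` cutoff, and the use of scikit-learn's `permutation_importance`/`partial_dependence` together with the `lime` package are assumptions for illustration, not the author's implementation.

```python
# Hypothetical sketch of the Specific-Input LIME idea from the abstract.
# Assumes `model` is a fitted scikit-learn-compatible classifier with
# predict_proba; the wiring below is illustrative, not the thesis code.
import numpy as np
from sklearn.inspection import permutation_importance, partial_dependence
from lime.lime_tabular import LimeTabularExplainer

def specific_input_lime(model, X_train, y_train, x, feature_names, top_k=5):
    # Step 1 ("WHAT"): rank features by permutation importance.
    imp = permutation_importance(model, X_train, y_train,
                                 n_repeats=10, random_state=0)
    top = np.argsort(imp.importances_mean)[::-1][:top_k]

    # Step 2 ("HOW"): partial dependence of each selected feature,
    # i.e. the direction/shape of its influence on the model output.
    pdp = {feature_names[i]: partial_dependence(model, X_train, [int(i)])
           for i in top}

    # Step 3: a LIME explanation of the instance, reporting as many
    # features as were selected above.
    explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                     mode="classification")
    exp = explainer.explain_instance(x, model.predict_proba,
                                     num_features=top_k)
    return top, pdp, exp
```

Note that the abstract says the method replaces part of LIME's internal procedure with feature importance and the PDP; passing `num_features=top_k` here only approximates that substitution, since `lime` does not expose a direct way to pin the explanation to a chosen feature subset.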
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.