
Specific Input-LIME Explanations for Tabular Data based on Deep Learning Models

Author
안준항
Advisor(s)
조인휘
Issue Date
2022. 8
Publisher
Hanyang University (한양대학교)
Degree
Master
Abstract
As deep learning models have advanced, more and more researchers believe they can perform well in tasks such as computer vision and anomaly detection. However, the complex parameters of deep learning models make it impossible for practitioners to grasp how the models arrive at their predictions, so most companies still lack trust in them. These issues have made explaining deep learning models vital. Current explainable-AI approaches, however, are limited in describing both the inference process and the model's prediction outputs: they typically display only the components of the input that are critical to the model's predictions. Because there is no way to interact with the explanation, it is difficult to validate and comprehend how the model works, which poses a significant risk when the model is deployed. The explanation also overlooks the semantics of the task at hand, exacerbating the problem. In this paper, we propose Specific-Input LIME, a novel XAI method for explaining deep learning models on tabular data that combines a Specific-Input process with the LIME method. The Specific-Input process uses feature importance and the partial dependence plot (PDP) to determine 'WHAT' features influence the deep learning model and 'HOW' they do so. To give a more detailed explanation, Specific-Input LIME replaces some of the procedures in the LIME method with feature importance and the PDP. In our experiments, we use deep learning models trained on tabular data as the black-box models. We first obtain a basic explanation of the data by simulating the model's behavior, and then apply our method to learn 'WHAT' features the deep learning model focuses on and 'HOW' they affect its predictions.
The analysis of the experimental results shows that our method yields a more detailed explanation than the LIME method and complements it in practice.
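Since the thesis implementation is not attached to this record, the following is only a minimal sketch of the idea described in the abstract, under assumed details: a toy linear function stands in for the trained deep model, permutation importance ranks 'WHAT' features matter, a one-dimensional partial-dependence curve probes 'HOW' a feature acts, and a LIME-style weighted local linear surrogate is then restricted to the pre-selected top features. All function names and parameters here are illustrative, not the author's code.

```python
import random

random.seed(0)

def black_box(x):
    # Stand-in for a trained deep model on tabular data (assumption).
    return 3.0 * x[0] - 2.0 * x[1] + 0.1 * x[2]

# Synthetic tabular dataset: 200 rows, 3 features in [-1, 1].
X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]

def permutation_importance(f, X):
    """'WHAT': score each feature by the rise in mean absolute
    prediction change when that column is shuffled."""
    base = [f(x) for x in X]
    scores = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        random.shuffle(col)
        perturbed = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        err = sum(abs(f(p) - b) for p, b in zip(perturbed, base)) / len(X)
        scores.append(err)
    return scores

def partial_dependence(f, X, j, grid):
    """'HOW': average prediction as feature j sweeps over a value grid,
    with the other features kept at their observed values."""
    return [sum(f(x[:j] + [v] + x[j + 1:]) for x in X) / len(X) for v in grid]

def local_surrogate(f, x0, top, n=500, width=0.3):
    """LIME-style step: proximity-weighted least squares on perturbed
    samples, but only over the pre-selected 'top' features
    (the Specific-Input idea, as we understand it from the abstract)."""
    coefs = {}
    for j in top:
        num = den = 0.0
        for _ in range(n):
            d = random.gauss(0, width)
            z = x0[:j] + [x0[j] + d] + x0[j + 1:]
            w = 2.718281828 ** (-(d * d) / (width * width))  # proximity kernel
            num += w * d * (f(z) - f(x0))
            den += w * d * d
        coefs[j] = num / den
    return coefs

imp = permutation_importance(black_box, X)
top2 = sorted(range(3), key=lambda j: imp[j], reverse=True)[:2]
pdp0 = partial_dependence(black_box, X, top2[0], [-1.0, 0.0, 1.0])
expl = local_surrogate(black_box, [0.0, 0.0, 0.0], top2)
print(top2, {j: round(c, 2) for j, c in expl.items()})
```

For this toy model the surrogate recovers the true local slopes of the two highest-ranked features, while the third (near-irrelevant) feature is filtered out before the LIME-style fitting step, which is the complementarity to plain LIME that the abstract claims.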
URI
http://hanyang.dcollection.net/common/orgView/200000626649
https://repository.hanyang.ac.kr/handle/20.500.11754/174214
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > COMPUTER SCIENCE(컴퓨터·소프트웨어학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.