Full metadata record

DC Field | Value | Language
dc.contributor.author | 김영훈 | -
dc.date.accessioned | 2024-05-02T04:52:46Z | -
dc.date.available | 2024-05-02T04:52:46Z | -
dc.date.issued | 2024-03-19 | -
dc.identifier.citation | APPLIED SCIENCES-BASEL, v. 14, NO 6, Page. 1-16 | en_US
dc.identifier.issn | 2076-3417 | en_US
dc.identifier.uri | https://information.hanyang.ac.kr/#/eds/detail?an=001191326500001&dbId=edswsc | en_US
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/190125 | -
dc.description.abstract | Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, these models are found to be vulnerable to adversarial attacks, a problem that equally affects automatic CT segmentation models. Conventional adversarial attacks typically rely on adding noise or perturbations, leading to a compromise between the success rate of the attack and its perceptibility. In this study, we challenge this paradigm and introduce a novel generation of adversarial attacks aimed at deceiving both the target segmentation model and medical practitioners. Our approach deceives the target model by altering the texture statistics of an organ while retaining its shape. We employ a real-time style transfer method, known as the texture reformer, which uses adaptive instance normalization (AdaIN) to change the statistics of an image's features. To induce the transformation, we modify AdaIN, which typically aligns the source and target image statistics. Through rigorous experiments, we demonstrate the effectiveness of our approach. Our adversarial samples successfully pass as realistic in blind tests conducted with physicians, surpassing the effectiveness of contemporary techniques. This methodology not only offers a robust tool for benchmarking and validating automated CT segmentation systems but also serves as a potent mechanism for data augmentation, thereby enhancing model generalization. This dual capability significantly bolsters advancements in the field of deep learning-based medical and healthcare segmentation models. | en_US
dc.description.sponsorship | This research was financially supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (Project No. P0025661). Additionally, this work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155885, Artificial Intelligence Convergence Innovation Human Resources Development (Hanyang University ERICA)). It was also supported by the BK21 FOUR (Fostering Outstanding Universities for Research) program funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF). | en_US
dc.language | en_US | en_US
dc.publisher | MDPI | en_US
dc.relation.ispartofseries | v. 14, NO 6; 1-16 | -
dc.subject | adversarial attacks | en_US
dc.subject | realistic adversarial samples | en_US
dc.subject | deep learning-based segmentation | en_US
dc.subject | data augmentation | en_US
dc.subject | computed tomography (CT) segmentation | en_US
dc.title | Adversarial Attacks on Medical Segmentation Model via Transformation of Feature Statistics | en_US
dc.type | Article | en_US
dc.relation.no | 6 | -
dc.relation.volume | 14 | -
dc.identifier.doi | 10.3390/app14062576 | en_US
dc.relation.page | 1-16 | -
dc.relation.journal | APPLIED SCIENCES-BASEL | -
dc.contributor.googleauthor | Lee, Woonghee | -
dc.contributor.googleauthor | Ju, Mingeon | -
dc.contributor.googleauthor | Sim, Yura | -
dc.contributor.googleauthor | Jung, Young Kul | -
dc.contributor.googleauthor | Kim, Tae Hyung | -
dc.contributor.googleauthor | Kim, Younghoon | -
dc.relation.code | 2024000222 | -
dc.sector.campus | E | -
dc.sector.daehak | COLLEGE OF COMPUTING[E] | -
dc.sector.department | DEPARTMENT OF ARTIFICIAL INTELLIGENCE | -
dc.identifier.pid | nongaussian | -
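
The abstract above describes modifying adaptive instance normalization (AdaIN), the statistics-alignment step used by the texture reformer style-transfer method, so that an organ's texture statistics change while its shape is preserved. The following minimal PyTorch sketch illustrates only the standard AdaIN operation and one hypothetical way such a modification could look; the function names, the `alpha` interpolation parameter, and the [N, C, H, W] tensor shapes are assumptions made for illustration and are not taken from the paper's released code.

```python
# Illustrative sketch of adaptive instance normalization (AdaIN).
# NOT the authors' implementation; shapes, epsilon, and `alpha` are assumptions.
import torch


def adain(content_feat: torch.Tensor,
          style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Align the per-channel mean/std of `content_feat` to those of `style_feat`.

    Both tensors are assumed to be encoder features of shape [N, C, H, W].
    """
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps

    normalized = (content_feat - c_mean) / c_std  # remove content statistics
    return normalized * s_std + s_mean            # impose style statistics


def perturbed_adain(content_feat: torch.Tensor,
                    style_feat: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical variant: interpolate between the original and the
    statistics-aligned features instead of fully aligning them, as one
    simple way to shift texture statistics while spatial structure
    (and hence organ shape) is left untouched."""
    stylized = adain(content_feat, style_feat)
    return alpha * stylized + (1.0 - alpha) * content_feat
```

In the standard formulation, AdaIN replaces the per-channel mean and standard deviation of the content features with those of the style features; interpolating the statistics instead, as in the hypothetical `perturbed_adain` above, is just one way such an alignment could be altered to change texture statistics gradually.
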
Appears in Collections:
ETC[S] > 연구정보 (Research Information)
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
