Full metadata record

DC Field: Value
dc.contributor.author: 이연준
dc.date.accessioned: 2023-12-21T07:32:14Z
dc.date.available: 2023-12-21T07:32:14Z
dc.date.issued: 2023-10
dc.identifier.citation: IEEE Access, v. 11, pp. 116721-116731
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://ieeexplore.ieee.org/document/10286540
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/187667
dc.description.abstract: Chronic otitis media is characterized by recurrent infections that can lead to serious complications such as meningitis, facial palsy, and skull base osteomyelitis; active treatment based on early diagnosis is therefore essential. This study developed a multi-modal multi-fusion (MMMF) model that automatically diagnoses ear diseases by applying endoscopic images of the tympanic membrane (TM) and pure-tone audiometry (PTA) data to a deep learning model. The primary aims of the proposed MMMF model are to add "normal with hearing loss" as a category and to improve diagnostic accuracy for the four conventional ear disease states: normal, TM perforation, retraction, and cholesteatoma. To this end, the MMMF model was trained on 1,480 endoscopic images of the TM together with PTA data to distinguish five ear disease states: normal, TM perforation, retraction, cholesteatoma, and normal (hearing loss). It employs a feature fusion strategy of cross-attention, concatenation, and gated multi-modal units in a multi-modal architecture comprising a convolutional neural network (CNN) and a multi-layer perceptron. Expanding the classification to include the additional normal (hearing loss) category enhances the diagnostic performance of existing ear disease classification. The MMMF model performed best when implemented with EfficientNet-B7, achieving 92.9% accuracy and 90.9% recall, outperforming existing feature fusion methods. In five-fold cross-validation experiments, the model consistently demonstrated robust performance across all folds when endoscopic images of the TM and PTA data were applied together. The proposed MMMF model is the first to include a normal ear disease state with hearing loss as a category, and it demonstrated superior performance compared to existing CNN models and feature fusion methods.
Consequently, this study substantiates the utility of simultaneously applying PTA data and endoscopic TM images for the automated diagnosis of ear diseases in clinical settings and validates the usefulness of the multi-fusion method.
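The abstract names three fusion strategies, among them the gated multi-modal unit (GMU), which learns a per-dimension gate that blends the image and audiometry representations. As a rough illustration only, the NumPy sketch below shows the standard GMU formulation applied to a CNN-style image embedding and a PTA vector; the weights are random placeholders (a trained model would learn them), and the dimensions (a 2560-d EfficientNet-B7 embedding, an 8-frequency PTA vector, a 16-d fused state) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmu_fuse(img_feat, pta_feat, dim=16):
    """Gated multi-modal unit: a sigmoid gate z decides, per fused
    dimension, how much of each modality's hidden state to keep.
    Weights are random stand-ins for trained parameters."""
    d_img, d_pta = img_feat.shape[-1], pta_feat.shape[-1]
    W_img = rng.standard_normal((d_img, dim)) * 0.1          # image projection
    W_pta = rng.standard_normal((d_pta, dim)) * 0.1          # audiometry projection
    W_z = rng.standard_normal((d_img + d_pta, dim)) * 0.1    # gate weights

    h_img = np.tanh(img_feat @ W_img)                        # image hidden state
    h_pta = np.tanh(pta_feat @ W_pta)                        # PTA hidden state
    both = np.concatenate([img_feat, pta_feat], axis=-1)
    z = 1.0 / (1.0 + np.exp(-(both @ W_z)))                  # gate in (0, 1)
    return z * h_img + (1.0 - z) * h_pta                     # per-dim convex blend

# Toy inputs: one image embedding and one audiogram vector (shapes assumed).
img_feat = rng.standard_normal((1, 2560))
pta_feat = rng.standard_normal((1, 8))
fused = gmu_fuse(img_feat, pta_feat)
print(fused.shape)  # (1, 16)
```

Because the fused vector is a convex combination of two tanh outputs, every entry stays in [-1, 1], which keeps the downstream classifier's input scale stable regardless of which modality the gate favors.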
dc.description.sponsorship: This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean Government (Ministry of Science and ICT, MSIT), South Korea, through the Artificial Intelligence Convergence Innovation Human Resources Development program at Hanyang University (ERICA) under Grant RS-2022-00155885; in part by the Artificial Intelligence Convergence Research Center, Hanyang University (ERICA), under Grant 2020-0-01343; in part by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MSIT) under Grant NRF-2022R1F1A1074999; in part by a Korea University Grant and the Medical Data-Driven Hospital Support Project through the Korea Health Information Service (KHIS), funded by the Ministry of Health and Welfare, Republic of Korea; and in part by MSIT under the ICT Challenge and Advanced Network of HRD (ICAN) Program supervised by IITP under Grant IITP-2022-RS-2022-00156439.
dc.language: en
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject: Deep learning
dc.subject: Artificial intelligence
dc.subject: Auditory system
dc.subject: Biomedical imaging
dc.subject: Bones
dc.subject: Classification algorithms
dc.subject: Computer aided diagnosis
dc.subject: Convolutional neural networks
dc.subject: Data models
dc.subject: Diseases
dc.subject: Ear
dc.subject: Electronic medical records
dc.subject: Media
dc.title: Toward better ear disease diagnosis: A multi-modal multi-fusion model using endoscopic images of the tympanic membrane and pure-tone audiometry
dc.type: Article
dc.relation.volume: 11
dc.identifier.doi: 10.1109/ACCESS.2023.3325346
dc.relation.page: 116721-116731
dc.relation.journal: IEEE Access
dc.contributor.googleauthor: Kim, Taewan
dc.contributor.googleauthor: Kim, Sangyeop
dc.contributor.googleauthor: Kim, Jaeyoung
dc.contributor.googleauthor: Lee, Yeonjoon
dc.contributor.googleauthor: Choi, June
dc.sector.campus: E
dc.sector.daehak: College of Software Convergence
dc.sector.department: School of Computer Science
dc.identifier.pid: yeonjoonlee
Appears in Collections:
ETC[S] > ETC
Files in This Item:
109581_이연준.pdf