OSA-CCNN: Obstructive Sleep Apnea Detection Based on a Composite Deep Convolution Neural Network Model using Single-Lead ECG signal

Author
Kyungtae Kang (강경태)
Keywords
Electrocardiogram; Obstructive sleep apnea; Continuous wavelet transform; Gramian Angular Field; Convolutional neural network; Automatic feature extraction
Issue Date
2022-12
Publisher
IEEE
Citation
2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1840-1845
Abstract
Obstructive sleep apnea (OSA) is a common sleep disorder that makes breathing difficult during sleep and is linked to a number of other disorders, including cardiovascular conditions such as hypertension and coronary heart disease. Nocturnal polysomnography (PSG) is one of the clinical diagnostic criteria for OSA, but it is a burdensome and expensive form of diagnosis, as it requires manual interpretation by experts and takes a lot of time. ECG-based techniques for diagnosing OSA have been introduced to alleviate these problems, but most of the solutions proposed thus far rely on feature engineering, which demands substantial specialist knowledge and expertise. In this study, we present a novel approach for classifying OSA based on a single-lead ECG signal conversion and a composite deep convolutional neural network model. The ECG signal is transformed into scalogram images carrying heart rate variability (HRV) characteristics and Gramian Angular Field (GAF) matrix images preserving the temporal properties of the ECG, which together form a hybrid image dataset. The composite model contains three sub-convolutional neural networks: two utilize fine-tuned AlexNet and ResNet models, and the third is a convolutional neural network with five residual blocks; their outputs are combined by a voting mechanism. The PhysioNet Apnea-ECG database was used to train and evaluate the proposed model. The results show that the proposed classifier achieved 90.93% accuracy, 83.86% sensitivity, 95.29% specificity, and 0.89 AUC on the hybrid image dataset.
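The abstract describes a two-branch signal-to-image conversion (CWT scalograms and GAF matrices). Below is a minimal Python sketch of what such a conversion could look like, assuming a 100 Hz single-lead ECG and one-minute segments; the wavelet choice, scale range, image size, and segment length are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the two signal-to-image conversions, assuming a 100 Hz
# single-lead ECG and one-minute segments (segment length, wavelet, and scale
# range are illustrative assumptions, not parameters from the paper).
import numpy as np
import pywt                                  # continuous wavelet transform
from pyts.image import GramianAngularField   # GAF encoding

FS = 100                 # assumed sampling rate (Hz)
SEG_LEN = 60 * FS        # one-minute segment, as is common for Apnea-ECG

def ecg_to_scalogram(segment: np.ndarray) -> np.ndarray:
    """CWT scalogram of one ECG segment (scale x time magnitude map)."""
    scales = np.arange(1, 129)                        # 128 scales (assumption)
    coeffs, _ = pywt.cwt(segment, scales, "morl",     # Morlet wavelet (assumption)
                         sampling_period=1.0 / FS)
    return np.abs(coeffs)                             # magnitude image

def ecg_to_gaf(segment: np.ndarray, size: int = 128) -> np.ndarray:
    """Gramian Angular Field image preserving temporal structure."""
    gaf = GramianAngularField(image_size=size, method="summation")
    return gaf.fit_transform(segment[np.newaxis, :])[0]

# Example: one synthetic segment -> the two image types of the hybrid dataset
segment = np.random.randn(SEG_LEN)
scalogram = ecg_to_scalogram(segment)   # shape (128, 6000)
gaf_image = ecg_to_gaf(segment)         # shape (128, 128)
```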
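Likewise, a hedged PyTorch sketch of the composite classifier: fine-tuned AlexNet and ResNet branches plus a small residual CNN, combined by voting. The specific ResNet variant (resnet18 here), the branch heads, the residual-block layout, and the hard-majority voting rule are all assumptions for illustration; the paper does not specify them in this record.

```python
# Hedged sketch of the three-branch composite model with majority voting.
# Branch heads, residual-block design, and the voting rule are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ResidualBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(ch), nn.BatchNorm2d(ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                 # skip connection

def small_residual_cnn(num_classes: int = 2) -> nn.Module:
    """Five-residual-block CNN branch (layout is an illustrative guess)."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.BatchNorm2d(32), nn.ReLU(),
        *[ResidualBlock(32) for _ in range(5)],
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
    )

def make_branches(num_classes: int = 2) -> list[nn.Module]:
    alexnet = models.alexnet(weights="IMAGENET1K_V1")      # pretrained, for fine-tuning
    alexnet.classifier[6] = nn.Linear(4096, num_classes)   # replace head
    resnet = models.resnet18(weights="IMAGENET1K_V1")      # variant is an assumption
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
    return [alexnet, resnet, small_residual_cnn(num_classes)]

@torch.no_grad()
def vote(branches: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Hard majority vote over the three branch predictions (assumption)."""
    preds = torch.stack([b(x).argmax(dim=1) for b in branches])  # (3, batch)
    return preds.mode(dim=0).values                              # (batch,)
```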
URI
https://ieeexplore.ieee.org/document/9995675
https://repository.hanyang.ac.kr/handle/20.500.11754/191035
ISBN
978-1-6654-6820-6; 978-1-6654-6819-0
DOI
10.1109/BIBM55620.2022.9995675
Appears in Collections:
ETC[S] > Research Information (연구정보)
Files in This Item:
There are no files associated with this item.