Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 임창환 | - |
dc.date.accessioned | 2022-12-06T06:39:37Z | - |
dc.date.available | 2022-12-06T06:39:37Z | - |
dc.date.issued | 2022-10 | - |
dc.identifier.citation | Expert Systems with Applications, v. 203, article no. 117574, pp. 1-13 | en_US |
dc.identifier.issn | 0957-4174;1873-6793 | en_US |
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S0957417422008880?via%3Dihub | en_US |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/178040 | - |
dc.description.abstract | Generative adversarial networks (GANs) have shown promising performance in image-to-image translation. Inspired by StarGAN v2, which was introduced to address multidomain image-to-image translation, the authors propose a novel multidomain signal-to-signal translation method to generate artificial steady-state visual evoked potential (SSVEP) signals from resting electroencephalograms (EEGs). The proposed StarGAN-based signal-to-signal translation model was trained using EEG data acquired from three subjects and could successfully generate sufficient numbers of artificial SSVEP signals for 15 test participants using resting EEG data acquired over an extremely short time period (∼16 s) from each test participant. The possibility of improving the performance of SSVEP-based brain–computer interfaces (BCIs) was investigated by incorporating the artificially generated SSVEP signals and Combined-ECCA, which is proposed in this study as an extended version of combined canonical correlation analysis (Combined-CCA). Fifteen healthy individuals participated in the SSVEP-based BCI study to distinguish four visual stimuli with different flickering frequencies. To evaluate the degree of performance improvement, the average classification accuracy and information transfer rate (ITR) obtained using the proposed methods were compared with those obtained using filter bank CCA (FBCCA), a state-of-the-art training-free method for SSVEP-based BCIs. The classification accuracy and ITR of SSVEP-based BCIs were significantly improved by the proposed methods (95.47% and 40.09 bit/min) compared with FBCCA (92.03% and 31.80 bit/min, p < 0.05). In addition, more than four individual training trials per class were necessary to achieve better classification accuracy than that of the proposed approach when the conventional individual training approach was employed. | en_US |
dc.description.sponsorship | This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea Government, Ministry of Science and ICT (MSIT) under Grant 2017-0-00432 and Grant 2020-0-01373. | en_US |
dc.language | en | en_US |
dc.publisher | Elsevier Ltd | en_US |
dc.subject | Brain-computer interface | en_US |
dc.subject | Steady-state visual evoked potential | en_US |
dc.subject | Generative adversarial networks | en_US |
dc.subject | Multidomain signal-to-signal translation | en_US |
dc.title | Novel Signal-to-Signal translation method based on StarGAN to generate artificial EEG for SSVEP-based brain-computer interfaces | en_US |
dc.type | Article | en_US |
dc.relation.volume | 203 | - |
dc.identifier.doi | 10.1016/j.eswa.2022.117574 | en_US |
dc.relation.page | 1-13 | - |
dc.relation.journal | Expert Systems with Applications | - |
dc.contributor.googleauthor | Kwon, Jinuk | - |
dc.contributor.googleauthor | Im, Chang-Hwan | - |
dc.sector.campus | S | - |
dc.sector.daehak | College of Engineering | - |
dc.sector.department | Major in Biomedical Engineering | - |
dc.identifier.pid | ich | - |
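The abstract above rests on two standard quantities from the SSVEP-BCI literature: the canonical correlation between an EEG segment and sinusoidal reference templates (the basis of CCA, Combined-CCA, and FBCCA), and the Wolpaw information transfer rate (ITR) used to report bit/min. A minimal sketch of the plain-CCA baseline and the ITR formula is given below; this is not the paper's Combined-ECCA or StarGAN pipeline, and all function names, the sampling rate, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    # Singular values of Qx^T Qy are the canonical correlations.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)]
    )

def classify_ssvep(eeg, stim_freqs, fs):
    """Assign the EEG segment to the stimulus with the highest correlation."""
    n = eeg.shape[0]
    scores = [cca_corr(eeg, ssvep_reference(f, fs, n)) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]

def itr_bits_per_min(p, n_classes, trial_sec):
    """Wolpaw ITR in bit/min for accuracy p over n_classes targets."""
    if p >= 1.0:
        bits = np.log2(n_classes)
    else:
        bits = (np.log2(n_classes) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / trial_sec

# Synthetic check: a noisy 10 Hz oscillation is assigned to the 10 Hz target.
fs, dur = 250, 2.0  # hypothetical sampling rate and trial length
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * rng.standard_normal((t.size, 1))
print(classify_ssvep(eeg, [8.57, 10.0, 12.0, 15.0], fs))  # prints 10.0
```

With four targets (as in the study), a perfectly accurate classifier carries log2(4) = 2 bits per trial, so the reported ITRs imply roughly 2.5-4 s of effective trial time; the exact trial duration is not stated in this record.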