
Full metadata record

DC Field    Value    Language
dc.contributor.author    임창환    -
dc.date.accessioned    2022-12-06T06:39:37Z    -
dc.date.available    2022-12-06T06:39:37Z    -
dc.date.issued    2022-10    -
dc.identifier.citation    Expert Systems with Applications, v. 203, article no. 117574, Page. 1-13    en_US
dc.identifier.issn    0957-4174; 1873-6793    en_US
dc.identifier.uri    https://www.sciencedirect.com/science/article/pii/S0957417422008880?via%3Dihub    en_US
dc.identifier.uri    https://repository.hanyang.ac.kr/handle/20.500.11754/178040    -
dc.description.abstract    Generative adversarial networks (GANs) have shown promising performance in image-to-image translation. Inspired by StarGAN v2, which was introduced for multidomain image-to-image translation, the authors propose a novel multidomain signal-to-signal translation method to generate artificial steady-state visual evoked potential (SSVEP) signals from resting electroencephalograms (EEGs). The proposed StarGAN-based signal-to-signal translation model was trained using EEG data acquired from three subjects and could successfully generate sufficient numbers of artificial SSVEP signals for 15 test participants using resting EEG data acquired over an extremely short time period (∼16 s) from each test participant. The possibility of improving the performance of SSVEP-based brain–computer interfaces (BCIs) was investigated by incorporating the artificially generated SSVEP signals and Combined-ECCA, which is proposed in this study as an extended version of combined canonical correlation analysis (Combined-CCA). Fifteen healthy individuals participated in the SSVEP-based BCI study to distinguish four visual stimuli with different flickering frequencies. To evaluate the degree of performance improvement, the average classification accuracy and information transfer rate (ITR) obtained using the proposed methods were compared with those obtained using filter bank CCA (FBCCA), a state-of-the-art training-free method for SSVEP-based BCIs. The classification accuracy and ITR of the SSVEP-based BCI were significantly improved by the proposed methods (95.47% and 40.09 bit/min) compared with FBCCA (92.03% and 31.80 bit/min, p < 0.05). In addition, when the conventional individual training approach was employed, more than four training trials per class were necessary to achieve a classification accuracy exceeding that of the proposed approach.    en_US
dc.description.sponsorship    This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea Government, Ministry of Science and ICT (MSIT), under Grant 2017-0-00432 and Grant 2020-0-01373.    en_US
dc.language    en    en_US
dc.publisher    Elsevier Ltd    en_US
dc.subject    Brain-computer interface    en_US
dc.subject    Steady-state visual evoked potential    en_US
dc.subject    Generative adversarial networks    en_US
dc.subject    Multidomain signal-to-signal translation    en_US
dc.title    Novel Signal-to-Signal translation method based on StarGAN to generate artificial EEG for SSVEP-based brain-computer interfaces    en_US
dc.type    Article    en_US
dc.relation.volume    203    -
dc.identifier.doi    10.1016/j.eswa.2022.117574    en_US
dc.relation.page    1-13    -
dc.relation.journal    Expert Systems with Applications    -
dc.contributor.googleauthor    Kwon, Jinuk    -
dc.contributor.googleauthor    Im, Chang-Hwan    -
dc.sector.campus    S    -
dc.sector.daehak    College of Engineering (공과대학)    -
dc.sector.department    Major in Biomedical Engineering (바이오메디컬공학전공)    -
dc.identifier.pid    ich    -
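
The abstract references CCA-based frequency detection (the foundation that Combined-CCA, Combined-ECCA, and FBCCA extend) and the information transfer rate (ITR) metric. Below is a minimal, illustrative Python sketch of standard CCA-based SSVEP classification and the standard Wolpaw ITR formula; it is not the paper's Combined-ECCA implementation, and the function names, sampling rate, and stimulus frequencies are assumptions for demonstration only.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy.
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.clip(s[0], 0.0, 1.0))

def ssvep_reference(freq, fs, n_samples, n_harmonics=3):
    """Sine/cosine reference templates at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.stack(cols, axis=1)

def classify_ssvep(eeg, fs, stim_freqs):
    """Pick the stimulus frequency whose reference templates best match the EEG.

    eeg: array of shape (n_samples, n_channels). Returns the index into stim_freqs.
    """
    scores = [cca_max_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
              for f in stim_freqs]
    return int(np.argmax(scores))

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Standard Wolpaw ITR in bit/min, as reported in the abstract's comparisons."""
    p, n = accuracy, n_classes
    bits = np.log2(n)
    if 0.0 < p < 1.0:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds
```

For example, a simulated two-channel recording dominated by a 10 Hz sinusoid is classified as the 10 Hz target among four candidate frequencies. The trial duration used in the paper's ITR figures is not stated in this record, so `trial_seconds` here is a free parameter.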
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRICAL AND BIOMEDICAL ENGINEERING(전기·생체공학부) > Articles
Files in This Item:
There are no files associated with this item.



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
