
Full metadata record

DC Field: Value (Language)
dc.contributor.advisor: 정재호
dc.contributor.author: 하지연
dc.date.accessioned: 2023-05-11T12:09:15Z
dc.date.available: 2023-05-11T12:09:15Z
dc.date.issued: 2023. 2
dc.identifier.uri: http://hanyang.dcollection.net/common/orgView/200000653434 (en_US)
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/180305
dc.description.abstract: In a multi-speaker environment, humans can focus on a desired speech stream while ignoring others, a phenomenon known as the cocktail party effect. Individuals with hearing impairment, however, have difficulty selectively attending to one speech stream in a multi-speaker environment. Auditory attention decoding (AAD) has been developed to determine the attended sound source from electroencephalography (EEG) signals by estimating the correlation between the speech envelope reconstructed from EEG and the envelopes of the original speech streams. This can lead to so-called neuro-steered hearing aids, which amplify the desired sound according to the user's attention. Despite significant advances in AAD technologies, EEG-based AAD studies still face practical limitations for real-life use. In this study, I address two such limitations. First, to increase the usability of AAD in real life, it is necessary to investigate whether AAD can be performed well in real time under simple, low-cost EEG settings. Accordingly, this work developed an AAD-capable system using low-cost, open-source hardware and then validated the system by conducting an AAD task in real time outside the laboratory. Second, since people listen to sounds of varying volumes simultaneously in everyday conversation, it is an open question whether AAD is possible when competing speech streams differ in sound level. The present work explored the effect of the sound-level difference between two competing speech streams in a dichotic listening paradigm by performing the AAD task under four sound-level conditions: Most Comfortable Level (MCL), MCL-20 dBA, and the sound levels yielding speech intelligibility of 90% and 50%. In the first experiment, this work showed an average online decoder accuracy of up to 78% across nine participants, suggesting that online AAD can be implemented well under simple, low-cost settings.
In the second experiment, there was no difference in AAD performance between the four sound-level conditions, indicating that the sound-level difference between competing speakers had no effect on AAD. These results should help expand the applicability of AAD in neuro-steered hearing aids.
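The correlation-based decision step described in the abstract (compare the EEG-reconstructed envelope against each candidate speech envelope and pick the best match) can be sketched as follows. This is a minimal illustration only, with synthetic data standing in for real EEG reconstructions; the function name `decode_attention` and all signal parameters are hypothetical, not from the thesis.

```python
import numpy as np

def decode_attention(reconstructed, candidate_envelopes):
    """Return the index of the attended stream: the candidate envelope
    whose Pearson correlation with the reconstructed envelope is highest."""
    corrs = [np.corrcoef(reconstructed, env)[0, 1] for env in candidate_envelopes]
    return int(np.argmax(corrs)), corrs

# Toy dichotic-listening example with synthetic envelopes (hypothetical data).
rng = np.random.default_rng(0)
attended = rng.random(1000)   # envelope of the attended speech stream
ignored = rng.random(1000)    # envelope of the competing speech stream
# Simulate a noisy EEG-based reconstruction that tracks the attended envelope.
reconstructed = attended + 0.5 * rng.standard_normal(1000)

idx, corrs = decode_attention(reconstructed, [attended, ignored])
```

In a real AAD system, `reconstructed` would come from a trained backward (stimulus-reconstruction) model applied to the EEG, and the comparison would be repeated over short decision windows.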
dc.publisher: 한양대학교
dc.title: Toward Realization of Neuro-steered Hearing Aids Using Auditory Attention Decoding
dc.title.alternative: 청각 주목 디코더를 이용한 신경 조종 보청기 구현을 위한 연구
dc.type: Theses
dc.contributor.googleauthor: 하지연
dc.sector.campus: S
dc.sector.daehak: 대학원
dc.sector.department: HY-KIST
dc.description.degree: Master
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > DEPARTMENT OF HY-KIST BIO-CONVERGENCE(HY-KIST 바이오융합학과) > Theses (Master)
Files in This Item:
There are no files associated with this item.