
Full metadata record

DC Field | Value | Language
dc.contributor.author | 장동표 | -
dc.date.accessioned | 2018-03-20T06:39:42Z | -
dc.date.available | 2018-03-20T06:39:42Z | -
dc.date.issued | 2013-03 | -
dc.identifier.citation | 의공학회지, 2013, 34(1), P.8-13 | en_US
dc.identifier.issn | 1225-505X | -
dc.identifier.issn | 1229-0807 | -
dc.identifier.uri | http://koreascience.or.kr/article/ArticleFullRecord.jsp?cn=OOSCB@_2013_v34n1_8 | -
dc.description.abstract | Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of their auditory system, and they therefore use hearing aids to compensate for their weakened hearing. Various algorithms for hearing-loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately to a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to the hearing aid is proposed. The proposed algorithm classifies sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. The algorithm consists of two sub-parts: a feature extractor and a speech-situation classifier. The former extracts seven characteristic features - short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel-frequency cepstral coefficients and mel-band power values - from the recent input signals of the two microphones, and the latter classifies the current speech situation. Experimental results showed that the proposed algorithm classified speech situations with an accuracy of over 94.4%. Based on these results, we believe that the proposed algorithm can be applied to the hearing aid to improve speech intelligibility in noisy environments. | en_US
dc.description.sponsorship | This work was supported by the Strategic Technology Development Program for Bio-Medical Devices (10031764) of the Ministry of Knowledge Economy in 2012 and by the Seoul City industry-academia-research cooperation program (SS100022). This work was also supported by the Basic Science Research Program of the National Research Foundation of Korea, funded by the government (Ministry of Education, Science and Technology) (2012R1A1A2041508). | en_US
dc.language.iso | ko_KR | en_US
dc.publisher | 대한의용생체공학회 / The Korea Society of Medical and Biological Engineering | en_US
dc.subject | hearing aids | en_US
dc.subject | classification | en_US
dc.subject | artificial neural network | en_US
dc.subject | hearing impaired | en_US
dc.title | 인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘 (Real-time Sound Environment Classification Algorithm for Hearing Aids Using an Artificial Neural Network) | en_US
dc.type | Article | en_US
dc.relation.no | 1 | -
dc.relation.volume | 34 | -
dc.identifier.doi | 10.9718/JBER.2013.34.1.8 | -
dc.relation.page | 8-13 | -
dc.relation.journal | 의공학회지 | -
dc.contributor.googleauthor | 서상완 | -
dc.contributor.googleauthor | 육순현 | -
dc.contributor.googleauthor | 남경원 | -
dc.contributor.googleauthor | 한종희 | -
dc.contributor.googleauthor | 권세윤 | -
dc.contributor.googleauthor | 홍성화 | -
dc.contributor.googleauthor | 김동욱 | -
dc.contributor.googleauthor | 이상민 | -
dc.contributor.googleauthor | 장동표 | -
dc.contributor.googleauthor | 김인영 | -
dc.contributor.googleauthor | Seo, Sangwan | -
dc.contributor.googleauthor | Yook, Sunhyun | -
dc.contributor.googleauthor | Nam, KyoungWon | -
dc.contributor.googleauthor | Han, Jonghee | -
dc.contributor.googleauthor | Kwon, SeeYoun | -
dc.contributor.googleauthor | Hong, SungHwa | -
dc.contributor.googleauthor | Kim, Dongwook | -
dc.contributor.googleauthor | Lee, Sangmin | -
dc.contributor.googleauthor | Jang, DongPyo | -
dc.contributor.googleauthor | Kim, InYoung | -
dc.relation.code | 2012214535 | -
dc.sector.campus | S | -
dc.sector.daehak | GRADUATE SCHOOL OF BIOMEDICAL SCIENCE AND ENGINEERING[S] | -
dc.identifier.pid | dongpjang | -
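The abstract describes a two-part pipeline: a feature extractor (seven time- and frequency-domain features) followed by a speech-situation classifier. The following is a minimal sketch of that idea, not the authors' implementation: it computes five of the seven named features (short-time energy, zero-crossing rate, spectral centroid, spectral flux, spectral roll-off) for a single frame, omitting the MFCC and mel-band features for brevity, and feeds them through a placeholder one-hidden-layer network. The sample rate, frame length, window, 85% roll-off point, and all network sizes/weights are illustrative assumptions.

```python
import numpy as np

def frame_features(frame, prev_spectrum, sr=16000):
    """Five of the paper's seven features for one analysis frame.
    MFCCs and mel-band powers are omitted; the window and the 85%
    roll-off threshold are assumptions, not taken from the paper."""
    energy = np.sum(frame ** 2) / len(frame)            # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2  # zero-crossing rate

    # Magnitude spectrum of the windowed frame
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    # Spectral centroid: magnitude-weighted mean frequency
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    # Spectral flux: frame-to-frame spectral change
    flux = np.sum((spectrum - prev_spectrum) ** 2)
    # Spectral roll-off: frequency below which 85% of the energy lies
    cum = np.cumsum(spectrum ** 2)
    rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]

    return np.array([energy, zcr, centroid, flux, rolloff]), spectrum

def classify(features, W1, b1, W2, b2):
    """One-hidden-layer network mapping a feature vector to softmax
    scores over the four situations (speech-only, noise-only,
    speech-in-noise, music-only). Weights and layer sizes are
    placeholders, not the trained network from the paper."""
    hidden = np.tanh(features @ W1 + b1)
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()
```

A 1 kHz test tone at 16 kHz sampling, for example, should yield a spectral centroid near 1000 Hz and a zero-crossing rate near 0.125 crossings per sample, which gives a quick sanity check of the feature code.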
Appears in Collections:
GRADUATE SCHOOL OF BIOMEDICAL SCIENCE AND ENGINEERING[S](의생명공학전문대학원) > ETC
Files in This Item:
There are no files associated with this item.



