
Full metadata record

DC Field | Value | Language
dc.contributor.author | 장준혁 | -
dc.date.accessioned | 2018-08-30T00:43:29Z | -
dc.date.available | 2018-08-30T00:43:29Z | -
dc.date.issued | 2016-07 | -
dc.identifier.citation | COMPUTER SPEECH AND LANGUAGE (2016), v. 38, Page. 1-12 | en_US
dc.identifier.issn | 0885-2308 | -
dc.identifier.issn | 1095-8363 | -
dc.identifier.uri | https://www.sciencedirect.com/science/article/pii/S0885230815001072?via%3Dihub | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/74576 | -
dc.description.abstract | In this paper, we investigate an ensemble of deep neural networks (DNNs) that uses an acoustic environment classification (AEC) technique for statistical model-based voice activity detection (VAD). In conventional statistical model-based VAD, the decision rule is based on the geometric mean of the likelihood ratio or on a support vector machine (SVM), i.e., a shallow model with at most one hidden layer. Since shallow models cannot take advantage of the diversity of the feature-space distribution, in the training step we build multiple DNNs, one per noise type, employing the parameters of the statistical model-based VAD algorithm. In addition, a separate DNN is designed for the AEC algorithm in order to choose the best DNN for each noise. In the on-line noise-aware VAD step, AEC is first performed on a frame-by-frame basis using the separate DNN to obtain the a posteriori probabilities of the noise types. Given these probabilities, the environmental knowledge allows us to combine the speech presence probabilities derived from the ensemble of DNNs trained for the individual noise types. Our VAD approach was evaluated in terms of objective measures and showed significant improvement over the conventional algorithm. (C) 2015 Elsevier Ltd. All rights reserved. | en_US
dc.description.sponsorship | This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2014R1A2A1A10049735). This work was also supported by the ICT R&D program of MSIP/IITP [R0126-15-1119, Development of a solution for situation-awareness based on the analysis of speech and environmental sounds]. | en_US
dc.language.iso | en | en_US
dc.publisher | ACADEMIC PRESS LTD- ELSEVIER SCIENCE LTD | en_US
dc.subject | Voice activity detection | en_US
dc.subject | Statistical model | en_US
dc.subject | Acoustic environment classification | en_US
dc.subject | Deep neural network | en_US
dc.subject | Ensemble | en_US
dc.title | Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection | en_US
dc.type | Article | en_US
dc.relation.volume | 38 | -
dc.identifier.doi | 10.1016/j.csl.2015.11.003 | -
dc.relation.page | 1-12 | -
dc.relation.journal | COMPUTER SPEECH AND LANGUAGE | -
dc.contributor.googleauthor | Hwang, Inyoung | -
dc.contributor.googleauthor | Park, Hyung-Min | -
dc.contributor.googleauthor | Chang, Joon-Hyuk | -
dc.relation.code | 2016011173 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF ELECTRONIC ENGINEERING | -
dc.identifier.pid | jchang | -
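The combination step summarized in the abstract (weighting each noise-specific DNN's speech presence probability by the AEC network's a posteriori noise-type probability for the current frame) can be sketched as follows. This is a minimal illustration, not the authors' implementation: all function names, variable names, and the decision threshold are assumptions.

```python
import numpy as np

def combine_vad_outputs(aec_posteriors, dnn_speech_probs):
    """Combine per-noise speech presence probabilities for one frame.

    aec_posteriors: a posteriori probabilities of the noise types from
        the (hypothetical) AEC network; assumed to sum to 1.
    dnn_speech_probs: speech presence probability from each
        noise-specific VAD DNN for the same frame.
    Returns the ensemble speech presence probability, i.e. the
    AEC-weighted average of the per-noise probabilities.
    """
    aec_posteriors = np.asarray(aec_posteriors, dtype=float)
    dnn_speech_probs = np.asarray(dnn_speech_probs, dtype=float)
    return float(aec_posteriors @ dnn_speech_probs)

def vad_decision(p_speech, threshold=0.5):
    # Declare the frame as speech when the combined probability
    # exceeds an illustrative threshold.
    return p_speech > threshold

# Example frame: the AEC believes noise type 0 is most likely (0.7),
# so that DNN's output dominates the combined probability.
p = combine_vad_outputs([0.7, 0.2, 0.1], [0.9, 0.4, 0.2])
print(p, vad_decision(p))  # 0.7*0.9 + 0.2*0.4 + 0.1*0.2 = 0.73 -> speech
```

Because the AEC posteriors are recomputed every frame, the weighting adapts on-line as the acoustic environment changes, which is the "noise-aware" behavior the abstract describes.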
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > ELECTRONIC ENGINEERING(융합전자공학부) > Articles
Files in This Item:
There are no files associated with this item.



