
Full metadata record

DC Field | Value | Language
dc.contributor.author | 장준혁 | -
dc.date.accessioned | 2022-09-05T01:19:48Z | -
dc.date.available | 2022-09-05T01:19:48Z | -
dc.date.issued | 2020-11 | -
dc.identifier.citation | SENSORS, v. 20, no. 22, article no. 6493, page. 1-17 | en_US
dc.identifier.issn | 1424-8220 | -
dc.identifier.uri | https://www.mdpi.com/1424-8220/20/22/6493 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/172755 | -
dc.description.abstract | In this paper, we propose a multi-channel cross-tower network with attention mechanisms in the latent domain (Multi-TALK) that suppresses both acoustic echo and background noise. The proposed approach consists of the cross-tower network, a parallel encoder with an auxiliary encoder, and a decoder. For multi-channel processing, the parallel encoder extracts the latent features of each microphone, and these latent features, including the spatial information, are compressed by a 1D convolution operation. In addition, the latent features of the far-end signal are extracted by the auxiliary encoder and are provided to the cross-tower network through an attention mechanism. The cross-tower network iteratively estimates the latent features of the acoustic echo and background noise in each tower. To improve performance at each iteration, the output of each tower is passed as input to the next iteration of the neighboring tower. Before decoding, to estimate the near-end speech, attention mechanisms are further applied to remove the estimated acoustic echo and background noise from the compressed mixture, which prevents the speech distortion caused by over-suppression. Compared to conventional algorithms, the proposed algorithm effectively suppresses acoustic echo and background noise while significantly lowering speech distortion. | en_US
dc.description.sponsorship | This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00474, Intelligent Signal Processing for AI Speaker Voice Guardian). | en_US
dc.language.iso | en | en_US
dc.publisher | MDPI | en_US
dc.subject | acoustic echo suppression | en_US
dc.subject | noise suppression | en_US
dc.subject | attention mechanism | en_US
dc.subject | temporal convolutional network | en_US
dc.subject | cross-tower | en_US
dc.title | Multi-TALK: Multi-Microphone Cross-Tower Network for Jointly Suppressing Acoustic Echo and Background Noise | en_US
dc.type | Article | en_US
dc.relation.no | 22 | -
dc.relation.volume | 20 | -
dc.identifier.doi | 10.3390/s20226493 | -
dc.relation.page | 1-17 | -
dc.relation.journal | SENSORS | -
dc.contributor.googleauthor | Park, Song-Kyu | -
dc.contributor.googleauthor | Chang, Joon-Hyuk | -
dc.relation.code | 2020053568 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | SCHOOL OF ELECTRONIC ENGINEERING | -
dc.identifier.pid | jchang | -
dc.identifier.orcid | https://orcid.org/0000-0003-2610-2323 | -
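The abstract describes a latent-domain pipeline: per-microphone latent features compressed by a 1D convolution, and far-end features attended against the compressed mixture before the cross-tower stage. The following is a minimal numpy sketch of those two steps only, not the authors' implementation; all dimensions, the random stand-in features, and the 1x1-conv-as-weighted-sum simplification are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): M microphones,
# T time frames, D latent feature dimensions.
M, T, D = 4, 50, 64

# Stand-in for the parallel encoder's output: latent features per microphone.
mic_latents = rng.standard_normal((M, T, D))

# A 1x1 conv over the microphone axis reduces to a learned weighted sum,
# compressing the spatial information into one latent sequence of shape (T, D).
w = rng.standard_normal(M) / M
mixture = np.tensordot(w, mic_latents, axes=(0, 0))

# Stand-in for the auxiliary encoder's far-end latent features.
far_end = rng.standard_normal((T, D))

def attention(query, key, value):
    """Scaled dot-product attention over time frames."""
    scores = query @ key.T / np.sqrt(query.shape[-1])          # (T, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax rows
    return weights @ value                                     # (T, D)

# Far-end features aligned to the mixture via attention, as a cue for the
# echo-estimating tower.
echo_cue = attention(mixture, far_end, far_end)
print(echo_cue.shape)  # (50, 64)
```

The same attention primitive would also serve the final masking step the abstract mentions, where estimated echo and noise features are removed from the compressed mixture before decoding.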



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
