Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 문영식 | - |
dc.date.accessioned | 2022-04-01T02:37:19Z | - |
dc.date.available | 2022-04-01T02:37:19Z | - |
dc.date.issued | 2021-11 | - |
dc.identifier.citation | 대한전자공학회 학술대회. 2021-11 2021(11):384-387 | en_US |
dc.identifier.uri | https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11027603 | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/169623 | - |
dc.description.abstract | To prevent violent crimes, surveillance cameras have been deployed in public places. However, manually monitoring the large amount of video produced by these cameras is time- and labor-consuming, so automatically detecting violent behavior in video is essential. Existing methods tend to misclassify moving objects as violence. To address this drawback, we propose to use spatial and channel features more effectively through attention modules. The proposed method is based on the Flow Gated Network with 3D convolution layers and the CBAM module. Experimental results show that the proposed method achieves a 1% improvement in accuracy compared to the existing method. | en_US |
dc.description.sponsorship | This research was carried out as part of the SW-Centered University Support Program of the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (2018-0-00192); we are grateful for the research support. | en_US |
dc.language.iso | ko_KR | en_US |
dc.publisher | 대한전자공학회 | en_US |
dc.title | 어텐션 모듈을 이용한 딥러닝 기반의 폭력 탐지 | en_US |
dc.title.alternative | Violence Detection Using Deep Neural Network with Attention Modules | en_US |
dc.type | Article | en_US |
dc.relation.page | 384-387 | - |
dc.contributor.googleauthor | 강, 경원 | - |
dc.contributor.googleauthor | 김, 지훈 | - |
dc.contributor.googleauthor | 김, 해문 | - |
dc.contributor.googleauthor | 박, 경리 | - |
dc.contributor.googleauthor | 서, 지원 | - |
dc.contributor.googleauthor | 문, 영식 | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF COMPUTING[E] | - |
dc.sector.department | SCHOOL OF COMPUTER SCIENCE | - |
dc.identifier.pid | ysmoon | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.