Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 문영식 | - |
dc.date.accessioned | 2020-04-13T08:14:28Z | - |
dc.date.available | 2020-04-13T08:14:28Z | - |
dc.date.issued | 2004-07 | - |
dc.identifier.citation | ITC-CSCC: International Technical Conference on Circuits, Systems, Computers and Communications, pp. 738-741 | en_US |
dc.identifier.uri | http://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE01738104&language=ko_KR | - |
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/149422 | - |
dc.description.abstract | In this paper, an algorithm for anchor frame detection in news video is proposed, which consists of four steps. First, the cumulative histogram method is used to detect shot boundaries in order to segment a news video into video shots. Second, skin color information is used to detect face regions in each video shot. Third, color information of upper body regions is used to extract the anchor object. Fourth, a graph-theoretic cluster analysis algorithm is utilized to classify the news video into anchor-person shots and non-anchor shots. Experimental results have shown the effectiveness of the proposed algorithm. | en_US |
dc.description.sponsorship | This work was supported by the Korea Science and Engineering Foundation. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | 대한전자공학회 (The Institute of Electronics Engineers of Korea) | en_US |
dc.subject | shot change detection | en_US |
dc.subject | anchor frame extraction | en_US |
dc.subject | anchor object extraction | en_US |
dc.subject | graph-theoretic clustering | en_US |
dc.title | Anchor Frame Detection in News Video Using Anchor Object Extraction | en_US |
dc.type | Article | en_US |
dc.contributor.googleauthor | Park, Ki Tae | - |
dc.contributor.googleauthor | Hwang, Doo Sun | - |
dc.contributor.googleauthor | Moon, Young Shik | - |
dc.sector.campus | E | - |
dc.sector.daehak | COLLEGE OF COMPUTING[E] | - |
dc.sector.department | DIVISION OF COMPUTER SCIENCE | - |
dc.identifier.pid | ysmoon | - |
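The abstract's first step, shot-boundary detection via the cumulative histogram method, could be sketched roughly as below. This is a minimal illustration, not the paper's exact method: the grayscale input, bin count, maximum cumulative-difference measure, and threshold value are all assumptions made for the example.

```python
import numpy as np

def cumulative_histogram(frame, bins=64):
    """Normalized cumulative grayscale histogram of a frame (H x W uint8 array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return np.cumsum(hist / hist.sum())

def detect_shot_boundaries(frames, threshold=0.5):
    """Flag frame i as a shot boundary when the maximum difference between
    the cumulative histograms of frames i-1 and i exceeds the threshold."""
    boundaries = []
    prev = cumulative_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = cumulative_histogram(frames[i])
        # Max absolute difference between the two cumulative distributions
        if np.abs(cur - prev).max() > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries
```

Comparing cumulative histograms rather than raw ones makes the measure less sensitive to small shifts in intensity between adjacent frames within the same shot; a hard-cut between visually different shots still produces a large jump.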
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.