
Full metadata record

DC Field: Value [Language]

dc.contributor.author: 윤석민 (Yun, Sukmin)
dc.date.accessioned: 2024-06-10T04:37:24Z
dc.date.available: 2024-06-10T04:37:24Z
dc.date.issued: 2023-06
dc.identifier.citation: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2967-2977 [en_US]
dc.identifier.uri: https://www.computer.org/csdl/proceedings-article/cvpr/2023/012900c967/1POU6mcXqZG [en_US]
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/190593
dc.description.abstract: Vision-language (VL) pre-training has recently gained much attention for its transferability and flexibility in novel concepts (e.g., cross-modality transfer) across various visual tasks. However, VL-driven segmentation has been under-explored, and the existing approaches still have the burden of acquiring additional training images or even segmentation annotations to adapt a VL model to downstream segmentation tasks. In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of the target semantic categories, but without any task-specific images and annotations. To tackle this challenging task, our proposed method, coined IFSeg, generates VL-driven artificial image-segmentation pairs and updates a pre-trained VL model to a segmentation task. We construct this artificial training data by creating a 2D map of random semantic categories and another map of their corresponding word tokens. Given that a pre-trained VL model projects visual and text tokens into a common space where tokens that share the semantics are located closely, this artificially generated word map can replace the real image inputs for such a VL model. Through an extensive set of experiments, our model not only establishes an effective baseline for this novel task but also demonstrates strong performance compared to existing methods that rely on stronger supervision, such as task-specific images and segmentation masks. [en_US]
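The artificial training data described in the abstract (a 2D map of random semantic categories paired with a map of the corresponding word tokens) can be sketched roughly as follows. This is an illustrative reconstruction based only on the abstract, not the authors' code: the function name, grid size, and upsampling factor are all assumed parameters.

```python
import numpy as np

def build_artificial_pair(num_categories, grid_size=8, upsample=4, seed=0):
    """Illustrative sketch of IFSeg-style artificial data construction.

    Draws a low-resolution 2D map of random category indices, then
    nearest-neighbor upsamples it to serve as the dense segmentation
    target. The coarse index map, once each cell is replaced by the word
    token of its category name, stands in for the image input of a
    pre-trained vision-language model.
    """
    rng = np.random.default_rng(seed)
    # 1) Random low-resolution map of semantic category indices.
    coarse = rng.integers(0, num_categories, size=(grid_size, grid_size))
    # 2) Nearest-neighbor upsample: plays the role of the segmentation mask.
    target = np.repeat(np.repeat(coarse, upsample, axis=0), upsample, axis=1)
    # 3) The "word map": same layout, but each cell is meant to be embedded
    #    via its category's word token rather than via image pixels.
    word_map = coarse.copy()
    return word_map, target

word_map, target = build_artificial_pair(num_categories=5)
print(word_map.shape, target.shape)  # (8, 8) (32, 32)
```

The key design point the abstract relies on is that a pre-trained VL model embeds word tokens and visual tokens in a shared space, so the word map can substitute for real images during adaptation.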
dc.description.sponsorship: This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); No.2021-0-02068, Artificial Intelligence Innovation Hub; No.2022-0-00959, Few-shot Learning of Causal Inference in Vision and Language for Decision Making). [en_US]
dc.language: en_US [en_US]
dc.publisher: IEEE Computer Society [en_US]
dc.relation.ispartofseries: 2967-2977
dc.subject: Computer Vision and Pattern Recognition (cs.CV) [en_US]
dc.subject: Artificial Intelligence (cs.AI) [en_US]
dc.subject: Machine Learning (cs.LG) [en_US]
dc.title: IFSeg: Image-free Semantic Segmentation via Vision-Language Model [en_US]
dc.type: Article [en_US]
dc.identifier.doi: 10.1109/CVPR52729.2023.00290 [en_US]
dc.relation.page: 2967-2977
dc.contributor.googleauthor: Yun, Sukmin
dc.contributor.googleauthor: Park, Seong Hyeon
dc.contributor.googleauthor: Seo, Paul Hongsuck
dc.contributor.googleauthor: Shin, Jinwoo
dc.relation.code: 20230020
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF COMPUTING[E]
dc.sector.department: DEPARTMENT OF ARTIFICIAL INTELLIGENCE
dc.identifier.pid: sukminyun
Appears in Collections:
ETC[S] > Research Information
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
