Title
Maximum entropy scaled super pixels segmentation for multi-object detection and scene recognition via deep belief network
Author
Kibum Kim (김기범)
Keywords
Bag of features; Deep belief network; Entropy-scaled segmentation; Super-pixels
Issue Date
2022-09-20
Publisher
Springer
Citation
MULTIMEDIA TOOLS AND APPLICATIONS, vol. 82, no. 9, pp. 13401-13430
Abstract
Recent advances in vision technologies have impacted multi-object recognition and scene understanding. Such scene-understanding tasks are a demanding part of several technologies, such as augmented-reality-based scene integration, robotic navigation, autonomous driving, and tourist-guide applications. By incorporating visual information into contextually unified segments, super-pixel-based approaches significantly mitigate the clutter that is common in pixel-wise frameworks during scene understanding. Super-pixels allow patches of connected components with customized shapes and variable sizes to be obtained. Furthermore, the computational time of these segmentation approaches can be significantly decreased due to the reduced number of super-pixel target clusters. Hence, super-pixel-based approaches are commonly used in robotics, computer vision, and other intelligent systems. In this paper, we propose a Maximum Entropy scaled Super-Pixels (MEsSP) segmentation method that encapsulates super-pixel segmentation based on an entropy model and utilizes local energy terms to label the pixels. After acquisition and pre-processing, the image is first segmented by two different methods: Fuzzy C-Means (FCM) and MEsSP. Then, to extract features from the segmented objects, dynamic geometrical features, fast Fourier transform (FFT), blob extraction, Maximally Stable Extremal Regions (MSER), and KAZE features are computed using the bag-of-features approach. Multiple kernel learning is then applied to categorize the objects. Finally, a deep belief network (DBN) assigns the relevant labels to the scenes based on the categorized objects, intersection-over-union scores, and the Dice similarity coefficient. Experimental results regarding multi-object recognition accuracy, precision, recall, and F1 scores over the PASCAL VOC, Caltech 101, and UIUC Sports datasets show remarkable performance.
In addition, on these benchmark datasets the proposed scene recognition method outperforms state-of-the-art (SOTA) methods.
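The abstract states that scene labels are assigned based in part on intersection-over-union (IoU) scores and the Dice similarity coefficient. As a point of reference, a minimal sketch of these two overlap metrics for binary segmentation masks is shown below; this is a generic illustration of the standard definitions, not the authors' implementation or evaluation pipeline.

```python
import numpy as np

def iou_and_dice(mask_a, mask_b):
    """Compute IoU and Dice similarity for two binary masks.

    Standard definitions: IoU = |A ∩ B| / |A ∪ B|,
    Dice = 2|A ∩ B| / (|A| + |B|). Empty masks are treated as
    a perfect match (score 1.0) to avoid division by zero.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    iou = inter / union if union else 1.0
    total = a.sum() + b.sum()
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Hypothetical 2x3 predicted and ground-truth masks for illustration.
pred = [[1, 1, 0], [0, 1, 0]]
gt   = [[1, 0, 0], [0, 1, 1]]
iou, dice = iou_and_dice(pred, gt)  # intersection 2, union 4
```

For the toy masks above, IoU is 2/4 = 0.5 and Dice is 2·2/(3+3) ≈ 0.667; Dice always scores at least as high as IoU on the same pair of masks.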
URI
https://information.hanyang.ac.kr/#/eds/detail?an=edssjs.86A3EB14&dbId=edssjs
https://repository.hanyang.ac.kr/handle/20.500.11754/190002
ISSN
1380-7501; 1573-7721
DOI
10.1007/s11042-022-13717-y
Appears in Collections:
COLLEGE OF COMPUTING[E](소프트웨어융합대학) > MEDIA, CULTURE, AND DESIGN TECHNOLOGY(ICT융합학부) > Articles
Files in This Item:
There are no files associated with this item.