Abstract
The main concern of user-guided segmentation (UGS) is to achieve high segmentation accuracy with minimal user interaction. A novel convolutional neural network (CNN)-based UGS method is proposed, which employs a single click as the user interaction. In the proposed method, the input image in the Cartesian coordinate system is first converted into a polar transformed image with the user-guided point (UGP) as the origin of the polar coordinate system. The transformed image not only effectively delivers the UGP to the CNN, but also enables a single-scale convolution kernel to act as a multi-scale kernel, whose receptive field in the Cartesian coordinate system is altered based on the UGP without any extra parameters. In addition, a feature selection module (FSM) is introduced to additionally extract radial and angular features from the polar transformed image. Experimental results demonstrate that the proposed CNN using the polar transformed image improves the segmentation accuracy (mean intersection over union) by 3.69% on the PASCAL VOC 2012 dataset compared with the CNN using the Cartesian coordinate image. The FSM achieves an additional performance improvement of 1.32%. Moreover, the proposed method outperforms conventional non-CNN-based UGS methods by 12.61% on average.
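The central operation described in the abstract is a polar resampling of the input image around the clicked point. The sketch below is not the authors' implementation; it is a minimal NumPy/SciPy illustration of such a click-centred polar transform, where the function name `polar_transform`, the output grid size, and the bilinear interpolation choice are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' code) of a polar transform whose origin is
# the user-guided point (UGP). Assumes a NumPy image in (H, W, C) layout and a
# click given as (row, col); output grid sizes are arbitrary illustrative choices.
import numpy as np
from scipy.ndimage import map_coordinates

def polar_transform(image, ugp, out_radii=256, out_angles=256):
    """Resample `image` onto a polar grid centred on the user-guided point."""
    h, w, c = image.shape
    cy, cx = ugp  # user click in (row, col)

    # Radius large enough to reach the farthest image corner from the click.
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]], dtype=float)
    max_radius = np.max(np.hypot(corners[:, 0] - cy, corners[:, 1] - cx))

    # Polar sampling grid: rows index radius, columns index angle.
    radii = np.linspace(0.0, max_radius, out_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, out_angles, endpoint=False)
    r, t = np.meshgrid(radii, angles, indexing="ij")

    # Cartesian coordinates (row, col) corresponding to each polar sample.
    rows = cy + r * np.sin(t)
    cols = cx + r * np.cos(t)

    # Bilinear resampling per channel; samples outside the image clamp to the edge.
    polar = np.stack(
        [map_coordinates(image[..., ch], [rows, cols], order=1, mode="nearest")
         for ch in range(c)],
        axis=-1,
    )
    return polar
```

Under this resampling, regions near the click are sampled densely and distant regions sparsely, so a fixed-size convolution kernel applied to the polar image covers a click-dependent, effectively multi-scale receptive field in the original Cartesian image, which is the behaviour the abstract attributes to the transformed input.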
| Original language | English |
| --- | --- |
| Pages (from-to) | 1321-1322 |
| Number of pages | 2 |
| Journal | Electronics Letters |
| Volume | 54 |
| Issue number | 23 |
| DOIs | |
| Publication status | Published - 2018 Nov 15 |
Bibliographical note
Publisher Copyright: © The Institution of Engineering and Technology 2018.
ASJC Scopus subject areas
- Electrical and Electronic Engineering