When grasping objects in a cluttered environment, a key challenge is finding poses that allow the object to be grasped effectively. Accordingly, several grasping algorithms based on artificial neural networks have been developed recently. However, these methods require large amounts of training data and incur high computational costs. We therefore propose a depth difference image-based bin-picking (DBP) algorithm that does not use a neural network. DBP predicts the grasp pose from the object and its surroundings, which are obtained through depth filtering and clustering. The object region is estimated with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, and a depth difference image (DDI), which represents the depth difference between adjacent areas, is defined. To validate the performance of the DBP scheme, bin-picking experiments were conducted on 45 different objects, along with bin-picking experiments in heavily cluttered scenes. DBP achieved success rates of 78.6% and 83.3%, respectively, and required a computational time of approximately 1.4 s per attempt.
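The abstract does not give the exact definition of the depth difference image, so the following is only a minimal sketch of one plausible interpretation: each pixel's depth minus the mean depth of its local neighborhood, which highlights boundaries between an object and its surroundings. The function name, the neighborhood size `k`, and the mean-based formulation are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def depth_difference_image(depth, k=3):
    """Illustrative depth difference image (DDI): each pixel's depth minus
    the mean depth of its k x k neighborhood. The paper's exact DDI
    definition may differ; this is an assumed variant for demonstration."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    h, w = depth.shape
    # Accumulate the k x k neighborhood sum by shifting the padded image.
    neigh_mean = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            neigh_mean += padded[dy:dy + h, dx:dx + w]
    neigh_mean /= k * k
    return depth - neigh_mean
```

On a flat depth map the DDI is near zero everywhere, while pixels next to a depth discontinuity (e.g. an object edge above the bin floor) produce large magnitudes, which is the kind of cue a grasp-pose predictor could exploit.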
Bibliographical note
Funding Information:
This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Ministry of Science and ICT (MSIT, Sejong City, Republic of Korea) (No. 2018-0-00622, Robot manipulation intelligence to learn methods and procedures for handling various objects with tactile robot hands).
© 2020 by the authors.
Keywords
- Machine learning

ASJC Scopus subject areas
- Materials Science (all)
- Process Chemistry and Technology
- Computer Science Applications
- Fluid Flow and Transfer Processes