In a cluttered environment in which objects lie very close to each other, an arranging motion is required before the robot attempts to grasp the target object. Thus, the robot must decide which motion to perform in a given situation. This study presents an approach to learning decision-making that enables the robot to grasp the target object after rearranging the surrounding objects obstructing it. Learning is performed in a virtual environment, and the image that serves as the input to the deep Q-network is preprocessed so that the learned policy can be applied directly to the real environment. That is, the difference between the two environments is minimized by making the states obtained from the virtual and real environments similar to each other. In addition, this image preprocessing generalizes the learned policy so that the robot can select appropriate actions even when given objects that were not used for learning. A hierarchical structure consisting of high-level and low-level motion selectors is used for learning: the former chooses between grasping and pushing actions, while the latter determines how to perform the selected action. The results of various experiments show that the proposed scheme effectively grasps the target object in a cluttered environment without any additional learning in the real world.
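The hierarchical structure described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the paper's implementation: the binarizing preprocessor, the heuristic in place of the learned high-level Q-network, and the candidate parameter sets for the low-level selector are all assumptions made for illustration.

```python
import random

def preprocess(image):
    # Hypothetical preprocessing: binarize pixel intensities so that
    # simulated and real camera images map to a similar state
    # representation, narrowing the sim-to-real gap.
    return [[1 if px > 127 else 0 for px in row] for row in image]

def q_high(state):
    # Stand-in for the high-level deep Q-network: one Q-value per
    # high-level action. A toy clutter heuristic replaces the learned
    # network -- more occupied cells favor "push", fewer favor "grasp".
    occupied = sum(sum(row) for row in state)
    return {"grasp": 1.0 - occupied / 10.0, "push": occupied / 10.0}

def q_low(state, high_action):
    # Stand-in for the low-level selector: decide how to execute the
    # chosen action (e.g. a push direction or a grasp angle).
    if high_action == "push":
        params = ["left", "right", "up", "down"]
    else:
        params = ["0deg", "45deg", "90deg"]
    return random.choice(params)

def select_action(image):
    # High-level selector picks grasp vs. push; low-level selector
    # picks how to perform the selected action.
    state = preprocess(image)
    q_values = q_high(state)
    high = max(q_values, key=q_values.get)
    low = q_low(state, high)
    return high, low
```

With this toy heuristic, a bright (cluttered) image yields a push, and an empty image yields a grasp; in the paper both levels would instead be learned Q-networks trained in simulation.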
International Journal of Control, Automation and Systems
Published - 2020 Sept 1
Bibliographical note
Funding Information:
This work was supported by an IITP grant funded by the Korea government (MSIT) (No. 2018-0-00622).
© 2020, ICROS, KIEE and Springer.
- AI-based application
- reinforcement learning
- sim-to-real transfer
ASJC Scopus subject areas
- Control and Systems Engineering
- Computer Science Applications