Advances in reinforcement learning algorithms allow robots to learn complex tasks such as object manipulation. However, most of these tasks have been demonstrated only in simulation. Applying reinforcement learning in the real world is difficult because the state information required for learning, such as the position of an object, is hard to obtain, and large amounts of data must be collected. Moreover, existing reinforcement learning algorithms are designed to learn a single task, which limits their ability to learn multiple tasks. To address these problems, this study proposes a novel system that learns multiple tasks in simulation and then applies them to the real world. First, a generative model that converts real-world images into simulation images is proposed, enabling simulation-to-real transfer in which the results learned in simulation are applied directly to real-world scenarios. Second, to learn multiple tasks from images, a reinforcement learning algorithm combining a variational auto-encoder and an asymmetric actor-critic is developed. To verify this system, experiments are conducted in which the policies learned in simulation are applied to the real world, achieving a success rate of 83.8%; this shows that the proposed system can perform multiple manipulation tasks successfully.
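The asymmetric actor-critic structure mentioned in the abstract (the critic receives the full simulator state, which is only available in simulation, while the actor sees only a low-dimensional latent produced by a VAE encoder from the image, which is available in both simulation and the real world) can be sketched roughly as follows. All dimensions, weight matrices, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8    # assumed VAE latent size (illustrative)
STATE_DIM = 6     # assumed full simulator state size, e.g. object/gripper poses (illustrative)
IMG_DIM = 64      # flattened image observation (illustrative)
ACTION_DIM = 4

# Stand-in for a trained VAE encoder: maps an image to a latent vector.
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, IMG_DIM))

def vae_encode(image):
    """Compress an image observation into a low-dimensional latent."""
    return W_enc @ image

# Actor: conditioned only on the VAE latent (usable in sim and real world alike).
W_actor = rng.normal(scale=0.01, size=(ACTION_DIM, LATENT_DIM))

def actor(latent):
    """Map a latent observation to a bounded action."""
    return np.tanh(W_actor @ latent)

# Critic: conditioned on the full simulator state (the "asymmetric" part -
# this privileged information exists only during training in simulation).
W_critic = rng.normal(scale=0.01, size=(1, STATE_DIM + ACTION_DIM))

def critic(state, action):
    """Estimate the action value from privileged state plus the action."""
    return float(W_critic @ np.concatenate([state, action]))

# One forward pass: image -> latent -> action; critic scores it with full state.
image = rng.normal(size=IMG_DIM)
full_state = rng.normal(size=STATE_DIM)
action = actor(vae_encode(image))
q_value = critic(full_state, action)
```

At deployment only `vae_encode` and `actor` are needed, which is why the generative model that maps real-world images into simulation-style images lets the simulation-trained policy run directly on the real robot.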
Bibliographical note
Funding Information:
This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00622).
© 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
Keywords
- Machine learning
- Object manipulation
- Reinforcement learning
ASJC Scopus subject areas
- Computational Mechanics
- Engineering (miscellaneous)
- Mechanical Engineering
- Artificial Intelligence