TY - GEN
T1 - Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills
AU - Park, Sangbeom
AU - Chai, Yoonbyung
AU - Park, Sunghyun
AU - Park, Jeongeun
AU - Lee, Kyungjae
AU - Choi, Sungjoon
N1 - Funding Information:
This work was supported by Samsung Electronics (IO201230-08278-01) and Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)).
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor. In particular, we assume that the target object is located in a cluttered environment where both prehensile grasping and non-prehensile manipulation are combined for efficient teleoperation. Trajectory-based reinforcement learning is utilized to learn the non-prehensile manipulation that rearranges the objects and enables direct grasping. From the depth image of the cluttered environment and the location of the goal object, the learned policy can provide multiple options of non-prehensile manipulation to the human operator. We carefully design a reward function for the rearranging task, and the policy is trained in a simulation environment. The trained policy is then transferred to the real world and evaluated in a number of real-world experiments with varying numbers of objects, where we show that the proposed method outperforms manual keyboard control in terms of the time required for grasping.
AB - In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor. In particular, we assume that the target object is located in a cluttered environment where both prehensile grasping and non-prehensile manipulation are combined for efficient teleoperation. Trajectory-based reinforcement learning is utilized to learn the non-prehensile manipulation that rearranges the objects and enables direct grasping. From the depth image of the cluttered environment and the location of the goal object, the learned policy can provide multiple options of non-prehensile manipulation to the human operator. We carefully design a reward function for the rearranging task, and the policy is trained in a simulation environment. The trained policy is then transferred to the real world and evaluated in a number of real-world experiments with varying numbers of objects, where we show that the proposed method outperforms manual keyboard control in terms of the time required for grasping.
UR - http://www.scopus.com/inward/record.url?scp=85136327695&partnerID=8YFLogxK
U2 - 10.1109/ICRA46639.2022.9811823
DO - 10.1109/ICRA46639.2022.9811823
M3 - Conference contribution
AN - SCOPUS:85136327695
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 9295
EP - 9301
BT - 2022 IEEE International Conference on Robotics and Automation, ICRA 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 39th IEEE International Conference on Robotics and Automation, ICRA 2022
Y2 - 23 May 2022 through 27 May 2022
ER -