Since assembly tasks are performed in a wide range of industries, there have been many efforts to develop robotic assembly strategies. However, robotic assembly is typically applicable only in structured environments where the target object is placed in a fixed position, because a large positioning error substantially degrades performance. Thus, there is still a need for a generalized assembly strategy that can cope with large position/orientation errors regardless of the shape of the assembly parts. To this end, this study presents an assembly strategy based on both force and visual information. Specifically, the trajectory of the robot is obtained by combining the outputs of two neural-network-based trajectory generators that receive force and image information, respectively, and a deep reinforcement learning algorithm is then applied to obtain the optimal strategy. In this process, imitation learning is applied to train the force-based network using demonstration data collected with the suggested hand-guiding method, and a probability distribution over image features is introduced in the image-based network to enable the robot to quickly adapt to assembly parts with different shapes. The performance of the proposed assembly strategy is experimentally verified using various peg-in-hole tasks, and the results confirm that the robot can successfully accomplish an assembly task regardless of the shapes of the assembly parts, even when the initial position/orientation error is large.
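The two-branch structure described above can be sketched as follows. This is a minimal illustrative example, not the authors' architecture: the branch functions, the placeholder weights, and the fixed-weight blending rule are all assumptions; in the paper the combination and the branch networks would be trained via imitation learning and deep reinforcement learning.

```python
import numpy as np

def force_branch(wrench, W):
    # Hypothetical force-based trajectory generator: maps a 6-D
    # force/torque reading to a bounded Cartesian motion increment.
    return np.tanh(W @ wrench)

def image_branch(feature_mean, feature_std, V, rng):
    # Hypothetical image-based generator: samples from a feature
    # distribution, mirroring the paper's idea of representing image
    # features probabilistically to adapt to differently shaped parts.
    z = rng.normal(feature_mean, feature_std)
    return np.tanh(V @ z)

def combined_action(wrench, feature_mean, feature_std, W, V, alpha, rng):
    # Blend the two branch outputs into one motion command; here alpha
    # is a fixed weight, whereas an RL algorithm would learn the policy.
    return (alpha * force_branch(wrench, W)
            + (1.0 - alpha) * image_branch(feature_mean, feature_std, V, rng))

rng = np.random.default_rng(0)
W = np.full((6, 6), 0.01)            # placeholder "trained" weights
V = np.full((6, 8), 0.005)           # placeholder "trained" weights
wrench = np.array([1.0, -0.5, 2.0, 0.0, 0.1, -0.1])
mu, sigma = np.zeros(8), 0.1 * np.ones(8)
delta = combined_action(wrench, mu, sigma, W, V, alpha=0.7, rng=rng)
print(delta.shape)
```

The `tanh` saturation keeps each increment bounded, which is a common safety choice for learned Cartesian motion commands.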
Bibliographical note
Funding Information:
This research was supported by the Ministry of Trade, Industry & Energy (Korea) under the Industrial Foundation Technology Development Program (No. 20008613).
© 2023 Elsevier B.V.
Keywords
- Learning-based method
- Robotic assembly
ASJC Scopus subject areas
- Control and Systems Engineering
- General Mathematics
- Computer Science Applications