TY - GEN
T1 - Bin Picking System using Object Recognition based on Automated Synthetic Dataset Generation
AU - Jo, Hyun Jun
AU - Min, Cheol Hui
AU - Song, Jae Bok
N1 - Funding Information:
* This research was supported by the IITP grant funded by the Korea Government (MSIT) (No. 2018-0-00622). Hyun-Jun Jo is with the School of Mechanical Eng., Korea University, Seoul, Korea (e-mail: jhj0630@korea.ac.kr). Cheol-Hui Min is with the School of Mechanical Eng., Korea University, Seoul, Korea (e-mail: mch5048@korea.ac.kr). Jae-Bok Song (corresponding author) is a Professor of the School of Mechanical Eng., Korea University, Seoul, Korea (Tel.: +82 2 3290 3363; fax: +82 2 3290 3757; e-mail: jbsong@korea.ac.kr).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/20
Y1 - 2018/8/20
N2 - Recently, deep learning has been increasingly used for robot-based object grasping. Since it is important to accurately recognize the positions of the objects to be grasped, deep learning schemes, which are known for their high object recognition performance, are often employed for grasping. However, object recognition using deep learning requires large datasets, and creating such a dataset means that all the data must be collected and manually annotated. Although this is a simple task, it takes considerable time and labor. Therefore, this study reduced the amount of required data and minimized the resources and human effort needed for dataset generation through an image synthesis method and automatic annotation. Faster R-CNN, an object recognition algorithm, was trained on the generated dataset and used to recognize the position of an object in the image. The position of an object in an image can be converted to its position in the robot coordinate system using the camera intrinsic parameters, which can be obtained by camera calibration. The robot can then move to the converted position in the robot coordinate system to grasp the object. Experiments show that bin picking can be conducted successfully in this way.
AB - Recently, deep learning has been increasingly used for robot-based object grasping. Since it is important to accurately recognize the positions of the objects to be grasped, deep learning schemes, which are known for their high object recognition performance, are often employed for grasping. However, object recognition using deep learning requires large datasets, and creating such a dataset means that all the data must be collected and manually annotated. Although this is a simple task, it takes considerable time and labor. Therefore, this study reduced the amount of required data and minimized the resources and human effort needed for dataset generation through an image synthesis method and automatic annotation. Faster R-CNN, an object recognition algorithm, was trained on the generated dataset and used to recognize the position of an object in the image. The position of an object in an image can be converted to its position in the robot coordinate system using the camera intrinsic parameters, which can be obtained by camera calibration. The robot can then move to the converted position in the robot coordinate system to grasp the object. Experiments show that bin picking can be conducted successfully in this way.
UR - http://www.scopus.com/inward/record.url?scp=85053543792&partnerID=8YFLogxK
U2 - 10.1109/URAI.2018.8441811
DO - 10.1109/URAI.2018.8441811
M3 - Conference contribution
AN - SCOPUS:85053543792
SN - 9781538663349
T3 - 2018 15th International Conference on Ubiquitous Robots, UR 2018
SP - 886
EP - 890
BT - 2018 15th International Conference on Ubiquitous Robots, UR 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th International Conference on Ubiquitous Robots, UR 2018
Y2 - 27 June 2018 through 30 June 2018
ER -