Abstract
As the need for efficient warehouse logistics has increased in manufacturing systems, the use of automated guided vehicles (AGVs) has also increased to reduce travel time. The AGVs are controlled by a system using laser sensors or floor-embedded wires to transport pallets and their loads. Because such control systems have only predefined palletizing strategies, AGVs may fail to engage incorrectly positioned pallets. In this study, we consider a vision sensor-based method to address this shortcoming by recognizing a pallet’s position. We propose a multi-task deep learning architecture that simultaneously predicts distance and rotation from images obtained by a vision sensor. These predictions complement each other during learning, allowing the multi-task model to learn and execute tasks that are impossible for single-task models. The proposed model can accurately predict the rotation and displacement of pallets, deriving the information required by the control system. This information can be used to optimize a palletizing strategy. The superiority of the proposed model was verified in an experiment on images of stored pallets collected from a vision sensor attached to an AGV.
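The abstract describes a multi-task network that regresses pallet displacement and rotation from a single vision-sensor image. The sketch below illustrates this general idea only: a shared backbone with two regression heads and a weighted sum of the per-task losses. The layer sizes, head dimensions, loss weighting, and the names `PalletPoseNet` and `multi_task_loss` are assumptions for illustration, not the architecture published in the paper.

```python
# Minimal sketch (assumed, not the authors' architecture): a shared CNN
# backbone feeding two heads, one for (x, y) displacement and one for
# the pallet's rotation angle.
import torch
import torch.nn as nn

class PalletPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor over the vision-sensor image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads (dimensions are illustrative assumptions).
        self.distance_head = nn.Linear(64, 2)   # lateral / longitudinal offset
        self.rotation_head = nn.Linear(64, 1)   # yaw angle of the pallet

    def forward(self, image):
        features = self.backbone(image)
        return self.distance_head(features), self.rotation_head(features)

def multi_task_loss(pred_dist, pred_rot, true_dist, true_rot, w_rot=1.0):
    # Weighted sum of the two regression losses; w_rot is an assumed weight.
    return nn.functional.mse_loss(pred_dist, true_dist) + \
           w_rot * nn.functional.mse_loss(pred_rot, true_rot)

# Usage: one forward pass on a dummy image batch.
model = PalletPoseNet()
dist, rot = model(torch.randn(4, 3, 224, 224))
print(dist.shape, rot.shape)  # torch.Size([4, 2]) torch.Size([4, 1])
```

Sharing the backbone is what lets the two predictions "complement each other in learning": gradients from both heads shape the same features, which is the usual motivation for multi-task training in this setting.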
Original language | English
---|---
Article number | 11808
Journal | Applied Sciences (Switzerland)
Volume | 11
Issue number | 24
DOIs | 
Publication status | Published - 2021 Dec 2
Keywords
- AGV
- Deep learning
- Forklift
- Multi-task learning
ASJC Scopus subject areas
- Materials Science (all)
- Instrumentation
- Engineering (all)
- Process Chemistry and Technology
- Computer Science Applications
- Fluid Flow and Transfer Processes