Enabling Large Batch Size Training for DNN Models Beyond the Memory Limit While Maintaining Performance

Xinyu Piao, Doangjoo Synn, Jooyoung Park, Jong Kook Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Recent deep learning models are difficult to train using a large batch size because commodity machines may not have enough memory to hold both the model and a large data batch. The batch size is one of the hyper-parameters of the training process, and it is limited by the target machine's memory capacity because the batch can only occupy the memory that remains after the model is loaded. Moreover, the size of each data item is also an important factor: the larger each item is, the smaller the batch that fits into the remaining memory. This paper proposes a method called Micro-Batch Processing (MBP) to address this problem. MBP splits a batch into smaller batches that fit in the remaining memory and processes them sequentially. After the small batches are processed individually, a loss normalization algorithm based on gradient accumulation is applied to maintain performance. The purpose of our method is to allow deep learning models to train with batch sizes that exceed a system's memory capacity, without increasing the memory size or using multiple devices (GPUs).
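The loss-normalized gradient accumulation described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch example, not the paper's actual implementation: the model, the tensor sizes, and the `num_micro_batches` parameter are illustrative assumptions. It shows one large logical batch being split into micro-batches that are processed sequentially, with each micro-batch loss scaled so that the accumulated gradient matches a single full-batch update.

```python
# Minimal sketch of micro-batch processing via gradient accumulation.
# Assumptions: a PyTorch model; `model`, `train_step`, and the sizes
# below are illustrative, not taken from the paper's code.
import torch
import torch.nn as nn

model = nn.Linear(784, 10)  # stand-in model (assumption)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()  # default mean reduction

def train_step(inputs, targets, num_micro_batches):
    """Split one large logical batch into micro-batches that fit in
    memory, run them sequentially, and accumulate gradients before a
    single optimizer update."""
    optimizer.zero_grad()
    for x, y in zip(inputs.chunk(num_micro_batches),
                    targets.chunk(num_micro_batches)):
        loss = criterion(model(x), y)
        # Normalize the loss so the accumulated gradient matches what
        # a single pass over the full batch would produce.
        (loss / num_micro_batches).backward()
    optimizer.step()

# Usage: a logical batch of 512 processed as 8 micro-batches of 64.
inputs = torch.randn(512, 784)
targets = torch.randint(0, 10, (512,))
train_step(inputs, targets, num_micro_batches=8)
```

Scaling each loss by the micro-batch count is exact when the micro-batches are equal in size and the loss uses mean reduction; the benefit is that peak activation memory is bounded by the micro-batch size rather than by the logical batch size.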

Original language: English
Pages (from-to): 102981-102990
Number of pages: 10
Journal: IEEE Access
Volume: 11
DOIs:
Publication status: Published - 2023

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Deep neural networks
  • batch size
  • memory usage
  • micro-batch
  • mini-batch

ASJC Scopus subject areas

  • General Engineering
  • General Computer Science
  • General Materials Science
