Abstract
Significant progress has recently been made in the cost-effective and timely processing of large data sets with Hadoop and the emerging MapReduce framework. Building on these developments, we propose a Hadoop-based Distributed Video Transcoding System that transcodes large video data sets into specific video formats according to user-requested options. To reduce the transcoding time substantially, our system is built on the Hadoop Distributed File System (HDFS) and the MapReduce framework. Hadoop and MapReduce were designed to process petabyte-scale text data in a parallel and distributed manner, whereas our system processes multimedia data. In this study, we measure the total transcoding time for various values of five MapReduce tuning parameters: the block replication factor, the HDFS block size, the Java Virtual Machine (JVM) reuse option, the maximum number of map slots, and the input/output buffer size. Based on the experimental results, we determine the values of these parameters that best improve the transcoding performance of our Hadoop-based system for large amounts of video data. The results clearly show that transcoding performance varies notably with the values of the MapReduce tuning parameters.
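For readers who want to see how the five studied parameters surface in practice, the sketch below maps each one onto a standard Hadoop 1.x configuration key (the "map slots" terminology implies the pre-YARN MapReduce model; several keys were renamed in Hadoop 2.x, e.g. `dfs.blocksize`). This is an illustrative sketch only, not the authors' code, and the values shown are placeholders rather than the optimal values reported in the paper.

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative mapping of the paper's five tuning parameters onto
// Hadoop 1.x configuration keys. Values are placeholders, not the
// optima determined in the study.
public class TranscodingJobConfig {
    public static Configuration build() {
        Configuration conf = new Configuration();

        // Block replication factor: number of copies of each HDFS block.
        conf.setInt("dfs.replication", 3);

        // HDFS block size in bytes (here 64 MB); this also determines
        // the input split size and hence the number of map tasks.
        conf.setLong("dfs.block.size", 64L * 1024 * 1024);

        // JVM reuse option: number of tasks a single task JVM may run
        // (-1 means reuse the JVM for an unlimited number of tasks).
        conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);

        // Maximum number of map slots per TaskTracker node.
        conf.setInt("mapred.tasktracker.map.tasks.maximum", 4);

        // I/O buffer size in bytes used for file reads and writes.
        conf.setInt("io.file.buffer.size", 131072);

        return conf;
    }
}
```

These keys can also be supplied per job on the command line, e.g. `hadoop jar transcoder.jar -D mapred.job.reuse.jvm.num.tasks=-1 ...`, assuming the job driver uses Hadoop's GenericOptionsParser (the jar name here is hypothetical).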
| Original language | English |
| --- | --- |
| Pages (from-to) | 2099-2109 |
| Number of pages | 11 |
| Journal | Information (Japan) |
| Volume | 18 |
| Issue number | 5 |
| Publication status | Published - 2015 May 1 |
Bibliographical note
Publisher Copyright: © 2015 International Information Institute.
Keywords
- Cloud computing
- Hadoop optimization
- MapReduce
- Performance tuning
- Video transcoding system
ASJC Scopus subject areas
- Information Systems