Task failure resilience technique for improving the performance of MapReduce in Hadoop
C. Kavitha
Published in ETRI Journal (John Wiley and Sons Inc)
2020
Volume: 42
Issue: 5
Pages: 751-763
Abstract
MapReduce is a framework that can process huge datasets in parallel and distributed computing environments. However, a single machine failure during the runtime of MapReduce tasks can increase completion time by 50%. MapReduce handles task failures by restarting the failed task and re-computing all of its input data from scratch, regardless of how much of that data has already been processed. To solve this issue, the computed key-value pairs need to persist in a storage system so that they are not re-computed when the task is restarted. In this paper, the task failure resilience (TFR) technique is proposed, which allows the execution of a failed task to continue from the point at which it was interrupted, without having to redo all the work. Amazon ElastiCache for Redis is used as a non-volatile cache for the key-value pairs. TFR was implemented using the Hadoop software framework, and its performance was measured by running different Hadoop benchmarking suites. The experimental results showed significant performance improvements compared with the default Hadoop implementation. © 2020 ETRI
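The abstract only outlines the idea of persisting intermediate key-value pairs in Redis so that a restarted task attempt can skip work a failed attempt already completed. As an illustration of that idea only, the following is a minimal Java sketch of a word-count mapper that checkpoints its output to Redis via the Jedis client. The mapper class, the "redis.endpoint" job property, and the key layout are assumptions for the example; this is not the paper's actual TFR implementation.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import redis.clients.jedis.Jedis;

// Hypothetical word-count mapper that persists its intermediate key-value
// pairs in Redis so that a restarted attempt can resume instead of
// re-computing every input record from scratch.
public class ResilientWordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Jedis redis;    // connection to the ElastiCache/Redis endpoint
    private String taskKey; // bookkeeping key shared by all attempts of this task

    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        // "redis.endpoint" is an assumed job property naming the cache host.
        String endpoint = context.getConfiguration().get("redis.endpoint", "localhost");
        redis = new Jedis(endpoint, 6379);
        // Key is derived from the logical TaskID (not the attempt ID) so a
        // re-executed attempt sees the state left behind by the failed one.
        taskKey = "tfr:" + context.getTaskAttemptID().getTaskID().toString();
        // Replay pairs persisted by a failed attempt so they are not lost
        // from this attempt's output.
        for (String pair : redis.lrange(taskKey + ":pairs", 0, -1)) {
            String[] kv = pair.split("\t");
            context.write(new Text(kv[0]), new IntWritable(Integer.parseInt(kv[1])));
        }
    }

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String offsetId = Long.toString(offset.get());
        // Skip records a previous attempt already finished processing.
        if (redis.sismember(taskKey + ":done", offsetId)) {
            return;
        }
        for (String word : line.toString().split("\\s+")) {
            if (!word.isEmpty()) {
                // Persist each pair before emitting so it survives a task failure.
                redis.rpush(taskKey + ":pairs", word + "\t1");
                context.write(new Text(word), new IntWritable(1));
            }
        }
        // Mark the record as done only after all of its pairs are stored.
        // (For simplicity this sketch ignores a record interrupted mid-way,
        // whose pairs could be emitted twice on replay.)
        redis.sadd(taskKey + ":done", offsetId);
    }

    @Override
    protected void cleanup(Context context) {
        redis.close();
    }
}
```

In this sketch, keying the checkpoint data by the logical TaskID is what lets a new attempt pick up from roughly where the failed one stopped, which mirrors the resume-from-interruption behavior the abstract attributes to TFR.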
About the journal
Journal: ETRI Journal
Publisher: John Wiley and Sons Inc
ISSN: 1225-6463