Enhancing performance of map reduce workflow through H2HADOOP: CJBT
V. Lella, S.M. Avula
Published in Blue Eyes Intelligence Engineering and Sciences Publication
2019
Volume: 7
Issue: 6
Pages: 652 - 656
Abstract
Distributed computing uses the Hadoop framework to process Big Data in parallel. Hadoop has limitations that can be exploited to execute tasks more efficiently; these limitations stem largely from data locality in the cluster, the scheduling of tasks and jobs, and resource allocation in Hadoop. Efficient resource allocation remains a challenge on Cloud Computing MapReduce platforms. We therefore propose H2Hadoop, an enhanced architecture that reduces the computation cost associated with Big Data analysis and also addresses the resource-allocation problem of native Hadoop. H2Hadoop provides a reliable, accurate, and considerably faster solution for “text data” workloads, such as finding DNA sequences and the motif of a DNA sequence, and it offers an effective data-mining technique for Cloud Computing environments. The H2Hadoop design leverages the NameNode’s ability to assign jobs to the TaskTrackers (DataNodes) within the cluster. It builds a metadata table that records the locations of the DataNodes holding the features required by earlier jobs; when a similar job is later submitted to the JobTracker, it compares the request against the metadata and the CJBT and assigns the previously used DataNodes instead of storing and reading through the whole cluster again. Compared with native Hadoop, H2Hadoop reduces CPU time, the number of read operations, and other Hadoop metrics. © Blue Eyes Intelligence Engineering & Sciences Publication.
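To illustrate the metadata-table idea described in the abstract, the sketch below models the CJBT (Common Job Blocks Table in the H2Hadoop literature) as a simple in-memory map kept on the NameNode side. This is a minimal, hypothetical illustration, not the authors' implementation: the class name, method names, and block identifiers are assumptions, and it only shows how a repeated "text data" query could be routed to the previously identified DataNodes instead of triggering a full cluster scan.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the CJBT idea: map a previously searched feature
// (e.g. a DNA subsequence) to the block/DataNode locations that matched it,
// so a similar future job can be scheduled only on those nodes.
public class CommonJobBlockTable {

    // Feature (search term) -> block locations recorded from an earlier job.
    private final Map<String, List<String>> featureToBlocks = new HashMap<>();

    // Record the blocks touched by a completed job for later reuse.
    public void recordJob(String feature, List<String> blockLocations) {
        featureToBlocks.put(feature, blockLocations);
    }

    // Consulted before dispatching a new job; a hit means the job is assigned
    // only to the previously identified DataNodes.
    public List<String> lookup(String feature) {
        return featureToBlocks.get(feature); // null => no prior job, fall back to a full scan
    }

    public static void main(String[] args) {
        CommonJobBlockTable cjbt = new CommonJobBlockTable();
        cjbt.recordJob("GATTACA", List.of("datanode-3:blk_1001", "datanode-7:blk_2045"));

        List<String> hit = cjbt.lookup("GATTACA");
        System.out.println(hit != null
                ? "Reuse blocks: " + hit                 // H2Hadoop path: read only these blocks
                : "No metadata; scan whole cluster");    // native Hadoop path
    }
}

In this sketch the table trades a small amount of NameNode memory for fewer read operations on repeated jobs, which is the trade-off the abstract attributes to H2Hadoop versus native Hadoop.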
About the journal
Journal: International Journal of Recent Technology and Engineering
Publisher: Blue Eyes Intelligence Engineering and Sciences Publication
ISSN: 2277-3878