A Detailed Review on the Prominent Compression Methods Used for Reducing the Data Volume of Big Data
, S. Bhuvaneswari
Published in: Annals of Data Science (Springer Science and Business Media Deutschland GmbH)
Year: 2016
Volume: 3
Issue: 1
Pages: 47-62
Abstract
The volume of Big Data is the primary challenge faced by today's electronic world. Compressing this huge volume of data is an important means of improving the overall performance of Big Data management systems and Big Data analytics. Quite a few compression methods can reduce the cost of data management and data transfer and improve the efficiency of data analysis. The adaptive data compression approach determines the suitable compression technique and the location where compression should be applied. De-duplication removes duplicate data from the Big Data store. The resemblance detection and elimination algorithm uses two techniques, namely Dup-Adj and an improved super-feature approach, to separate similar data chunks from non-similar ones. Delta compression is also used to compress the data before storage. General-purpose compression algorithms are computationally complex and degrade application response time; to address this, an application-specific ZIP-IO framework for FPGA-accelerated compression is studied, in which a simple instruction-trace entropy compression algorithm is implemented in an FPGA substrate. The Record-aware Compression (RaC) technique, implemented in Hadoop MapReduce, guarantees that splitting compressed data blocks never produces blocks containing partial records. © 2016, Springer-Verlag Berlin Heidelberg.
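Among the techniques the review covers, de-duplication is the most readily illustrated. The Python sketch below is a minimal example of the general idea only, not the specific algorithms (Dup-Adj, super-feature) reviewed in the paper: it splits a byte stream into fixed-size chunks, fingerprints each chunk with SHA-256, and stores each unique chunk once, keeping an ordered recipe of digests for reconstruction. The chunk size, hash choice, and function names are assumptions made for illustration.

```python
import hashlib


def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep only the unique ones.

    Returns (chunk_store, recipe): chunk_store maps a SHA-256 digest to the
    chunk bytes, and recipe is the ordered list of digests needed to rebuild
    the original stream.
    """
    chunk_store = {}   # digest -> chunk bytes (unique chunks only)
    recipe = []        # ordered digests to reconstruct the input
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # store first occurrence only
        recipe.append(digest)
    return chunk_store, recipe


def reconstruct(chunk_store, recipe) -> bytes:
    """Rebuild the original byte stream from the unique chunks."""
    return b"".join(chunk_store[d] for d in recipe)


if __name__ == "__main__":
    payload = b"abc" * 10000   # highly repetitive sample input
    store, recipe = deduplicate(payload)
    assert reconstruct(store, recipe) == payload
    print(f"{len(recipe)} chunks referenced, {len(store)} stored uniquely")
```

Production de-duplication systems typically use content-defined (variable-size) chunking rather than fixed offsets so that small edits do not shift every subsequent chunk boundary; the fixed-size variant above is used only to keep the sketch short.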
About the journal
Journal: Annals of Data Science
Publisher: Springer Science and Business Media Deutschland GmbH
ISSN: 2198-5804
Open Access: No