<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="chrome=1">
<title>Pcompress by moinakg</title>
<link rel="stylesheet" href="stylesheets/styles.css">
<link rel="stylesheet" href="stylesheets/pygment_trac.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script src="javascripts/main.js"></script>
<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
</head>
<body>
<header>
<h1>Pcompress</h1>
<p>A Parallel Compression and Deduplication utility</p>
</header>
<div id="banner">
<span id="logo"></span>
<a href="https://github.com/moinakg/pcompress" class="button fork"><strong>View On GitHub</strong></a>
<div class="downloads">
<span>Downloads:</span>
<ul>
<li><a href="https://github.com/moinakg/pcompress/zipball/master" class="button">ZIP</a></li>
<li><a href="https://github.com/moinakg/pcompress/tarball/master" class="button">TAR</a></li>
</ul>
</div>
</div><!-- end banner -->
<div class="wrapper">
<nav>
<ul></ul>
</nav>
<section>
<h1>Introduction</h1>
<p>Pcompress is an attempt to revisit <strong>Data Compression</strong> using unique combinations of existing and some new techniques. Both a high compression ratio and high performance are key goals, along with the ability to leverage all the cores on a multi-core CPU. It also aims to bring to the table scalable, high-throughput Global <strong>Deduplication</strong> for archival storage. The deduplication capability is also available in single-file compression modes. Other projects providing some of these features include <a href="http://ck.kolivas.org/apps/lrzip/">Lrzip</a> and <a href="http://www.exdupe.com/">eXdupe</a>. Full archivers providing some similar features include the excellent <a href="http://freearc.org/">FreeArc</a> and <a href="http://peazip.sourceforge.net/">PeaZIP</a>. Pcompress is not an archiver, but it provides a unique combination of features to both maximize compression ratio and provide high speed.</p>
<p>Pcompress can do both compression and decompression in parallel by splitting input data into chunks. It has a modular structure and includes support for multiple algorithms such as LZMA, Bzip2 and PPMd, with SKEIN/SHA checksums for data integrity. It can also apply Lempel-Ziv Prediction (LZP) pre-compression (derived from libbsc) to improve compression ratios across the board. SSE optimizations for the bundled LZMA are included. It also implements chunk-level Content-Aware Deduplication and Delta Compression based on a rolling-hash algorithm derived from the Rabin fingerprinting approach. Other open-source deduplication software like <a href="http://opendedup.org/">OpenDedup</a> and <a href="http://www.lessfs.com/wordpress/">LessFS</a> use fixed-block dedupe, while <a href="http://backuppc.sourceforge.net/">BackupPC</a> does file-level dedupe only (single-instance storage). Of course, OpenDedup and LessFS are FUSE-based filesystems doing inline dedupe of primary storage, while Pcompress is only meant for archival storage as of today.</p>
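<p>To illustrate the idea behind content-defined chunking, here is a minimal sketch using a simple polynomial rolling hash. The window size, multiplier and boundary mask below are illustrative assumptions, not the parameters Pcompress actually uses:</p>
<pre><code>/* Minimal content-defined chunking sketch: a byte-wise rolling hash
 * declares a block boundary wherever the low bits of the hash are zero,
 * so boundaries shift with the content rather than sitting at fixed
 * offsets. WINDOW, PRIME and MASK are illustrative values only. */
#include &lt;stdint.h&gt;
#include &lt;stddef.h&gt;

#define WINDOW  48           /* rolling window size in bytes */
#define PRIME   153191u      /* polynomial hash multiplier */
#define MASK    0x1FFFu      /* average block size ~8KB (2^13) */

size_t next_boundary(const uint8_t *buf, size_t len)
{
    uint32_t hash = 0, pw = 1;
    size_t i;

    /* Precompute PRIME^(WINDOW-1) so the oldest byte can be removed. */
    for (i = 1; i &lt; WINDOW; i++)
        pw *= PRIME;

    for (i = 0; i &lt; len; i++) {
        if (i &gt;= WINDOW)
            hash -= pw * buf[i - WINDOW];  /* slide: drop oldest byte */
        hash = hash * PRIME + buf[i];      /* add newest byte */
        if (i &gt;= WINDOW &amp;&amp; (hash &amp; MASK) == 0)
            return i + 1;                  /* content-defined cut point */
    }
    return len;                            /* no boundary: rest is one block */
}
</code></pre>
<p>Because boundaries depend only on local content, inserting a few bytes near the start of a stream shifts the data but leaves most downstream block boundaries, and therefore most block hashes, unchanged. That is what makes this form of deduplication insertion-tolerant where fixed-block dedupe is not.</p>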
<p>Delta Compression is implemented via the widely used bsdiff algorithm. Chunk similarity is detected using an adaptation of <a href="http://en.wikipedia.org/wiki/MinHash">MinHashing</a>. Pcompress has low metadata overhead and overlaps I/O and compression to achieve maximum parallelism. It also bundles a simple mempool allocator to speed up repeated allocations of similarly sized chunks. It can work in pipe mode, reading from stdin and writing to stdout. It also provides adaptive compression modes in which simple data heuristics are applied in an attempt to select a near-optimal algorithm per chunk.</p>
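<p>The sketch below shows the general MinHash technique for estimating block similarity. The signature size, shingle width and hash mixer are illustrative assumptions, not Pcompress's actual implementation:</p>
<pre><code>/* Illustrative MinHash: each block is reduced to K minima, one per
 * seeded hash function, taken over all of its byte shingles. The
 * fraction of positions where two signatures agree estimates the
 * Jaccard similarity of the two blocks. */
#include &lt;stdint.h&gt;
#include &lt;string.h&gt;

#define K       64        /* signature size (illustrative) */
#define SHINGLE 8         /* bytes per shingle (illustrative) */

/* 64-bit mixer based on the splitmix64 finalizer. */
static uint64_t mix64(uint64_t x)
{
    x ^= x &gt;&gt; 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x &gt;&gt; 27; x *= 0x94d049bb133111ebULL;
    return x ^ (x &gt;&gt; 31);
}

void minhash_signature(const uint8_t *buf, size_t len, uint64_t sig[K])
{
    size_t i;
    int k;

    for (k = 0; k &lt; K; k++)
        sig[k] = UINT64_MAX;
    for (i = 0; i + SHINGLE &lt;= len; i++) {
        uint64_t h;
        memcpy(&amp;h, buf + i, SHINGLE);            /* raw 8-byte shingle */
        for (k = 0; k &lt; K; k++) {
            uint64_t v = mix64(h ^ (uint64_t)k); /* k-th hash function */
            if (v &lt; sig[k])
                sig[k] = v;
        }
    }
}

/* Estimated similarity in percent: matching minima out of K. */
int minhash_similarity(const uint64_t a[K], const uint64_t b[K])
{
    int k, match = 0;

    for (k = 0; k &lt; K; k++)
        if (a[k] == b[k])
            match++;
    return match * 100 / K;
}
</code></pre>
<p>Two blocks whose estimated similarity crosses the configured threshold (40% to 60% in Pcompress, depending on mode and block size) become candidates for bsdiff delta encoding against each other.</p>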
<p>Pcompress also supports encryption via AES and uses Scrypt from <a href="http://www.tarsnap.com/">Tarsnap</a> for secure Password Based Key generation.</p>
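<p>As a rough sketch of how password-based key generation with Scrypt works (the cost parameters, salt size and key size below are illustrative assumptions, not necessarily what Pcompress uses):</p>
<pre><code>/* Sketch of password-based key derivation with Tarsnap's scrypt.
 * A fresh random salt per run makes the derived key unique even for a
 * reused password; the salt is stored in the compressed file so the key
 * can be re-created for decryption. Parameters are illustrative. */
#include &lt;stdint.h&gt;
#include &lt;string.h&gt;
#include "crypto_scrypt.h"   /* Tarsnap scrypt reference implementation */

int derive_key(const char *password, const uint8_t salt[32],
               uint8_t key[16])          /* 16 bytes = 128-bit AES key */
{
    /* N = 2^15, r = 8, p = 1: illustrative CPU/memory cost settings.
     * Returns 0 on success, -1 on failure. */
    return crypto_scrypt((const uint8_t *)password, strlen(password),
                         salt, 32, (uint64_t)1 &lt;&lt; 15, 8, 1, key, 16);
}
</code></pre>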
<p>NOTE: This utility is not an archiver. It compresses only single files or data streams. To archive multiple files, first bundle them with a tool like tar, cpio or pax, then compress the result.</p>
<h1>NEWS</h1>
<p>Blog: <a href="https://moinakg.wordpress.com/tag/pcompress/">https://moinakg.wordpress.com/tag/pcompress/</a>.</p>
<p>Releases: <a href="http://freecode.com/projects/pcompress">http://freecode.com/projects/pcompress</a></p>
<h1>Usage</h1>
<pre><code>To compress a file:
   pcompress -c &lt;algorithm&gt; [-l &lt;compress level&gt;] [-s &lt;chunk size&gt;] &lt;file&gt;
   Where &lt;algorithm&gt; can be the following:
   lzfx   - Very fast and small algorithm based on LZF.
   lz4    - Ultra fast, high-throughput algorithm reaching RAM B/W at level 1.
   zlib   - The base Zlib format compression (not Gzip).
   lzma   - The LZMA (Lempel-Ziv Markov) algorithm from 7Zip.
   lzmaMt - Multithreaded version of LZMA. This is a faster version but
            uses more memory for the dictionary. Thread count is balanced
            between chunk processing threads and algorithm threads.
   bzip2  - Bzip2 Algorithm from libbzip2.
   ppmd   - The PPMd algorithm, excellent for textual data. PPMd requires
            at least 64MB x &lt;number of CPUs&gt; more memory than the other
            modes.
   libbsc - A Block Sorting Compressor using the Burrows Wheeler Transform
            like Bzip2 but runs faster and gives better compression than
            Bzip2 (See: libbsc.com).
   adapt  - Adaptive mode where ppmd or bzip2 will be used per chunk,
            depending on heuristics. If at least 50% of the input data is
            7-bit text then PPMd will be used, otherwise Bzip2.
   adapt2 - Adaptive mode which includes ppmd and lzma. If at least 80% of
            the input data is 7-bit text then PPMd will be used, otherwise
            LZMA. It has significantly higher memory usage than adapt. (An
            illustrative sketch of this text heuristic appears below.)
   none   - No compression. This is only meaningful with -D and -E so Dedupe
            can be done for post-processing with an external utility.
   &lt;chunk_size&gt; - This can be in bytes or can use the following suffixes:
            g - Gigabyte, m - Megabyte, k - Kilobyte.
            Larger chunks produce better compression at the cost of memory.
   &lt;compress_level&gt; - Can be a number from 0 (minimum compression) to
            14 (maximum compression).
</code></pre>
<p>NOTE: The option "libbsc" uses Ilya Grebnov's block sorting compression library
from <a href="http://libbsc.com/">http://libbsc.com/</a> . It is only available if pcompress in built with
that library. See INSTALL file for details.</p>
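<p>The text detection used by the adaptive modes can be pictured with the sketch below. This is a minimal interpretation of the documented "7-bit text" heuristic, not Pcompress's actual code:</p>
<pre><code>/* Report what percentage of a chunk consists of 7-bit bytes. The
 * adaptive modes use a heuristic like this to choose an algorithm:
 * adapt picks PPMd at &gt;= 50% text, adapt2 picks PPMd at &gt;= 80%,
 * falling back to Bzip2 or LZMA respectively otherwise. */
#include &lt;stdint.h&gt;
#include &lt;stddef.h&gt;

int text_percentage(const uint8_t *buf, size_t len)
{
    size_t i, text = 0;

    if (len == 0)
        return 0;
    for (i = 0; i &lt; len; i++)
        if (buf[i] &lt; 128)    /* 7-bit (ASCII range) byte */
            text++;
    return (int)(text * 100 / len);
}
</code></pre>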
<pre><code>To decompress a file compressed using the above command:
   pcompress -d &lt;compressed file&gt; &lt;target file&gt;
To operate as a pipe, read from stdin and write to stdout:
   pcompress -p ...
Attempt Rabin fingerprinting based deduplication on chunks:
   pcompress -D ...
   pcompress -D -r ... - Do NOT split chunks at a Rabin boundary. Default
                         is to split.
Perform Delta Encoding in addition to Identical Dedup:
   pcompress -E ...  - This also implies '-D'. It performs Delta Compression
                       between 2 blocks if they are 40% to 60% similar. The
                       similarity percentage is selected based on the dedupe
                       block size to balance performance and effectiveness.
   pcompress -EE ... - This causes Delta Compression to happen if 2 blocks
                       are at least 40% similar regardless of block size.
                       This can yield a greater final compression ratio at
                       the cost of higher processing overhead.
Number of threads can optionally be specified: -t &lt;1 - 256 count&gt;
Other flags:
   '-L' - Enable LZP pre-compression. This improves the compression ratio of
          all algorithms with some extra CPU and very low RAM overhead. Using
          delta encoding in conjunction with this may not always be
          beneficial.
   '-S' &lt;cksum&gt;
        - Specify the chunk checksum to use: CRC64, SKEIN256, SKEIN512,
          SHA256 or SHA512. The default is SKEIN256. The implementation
          actually uses SKEIN 512-256. This is 25% slower than simple CRC64
          but is many times more robust than CRC64 in detecting data
          integrity errors. SKEIN was a finalist in the NIST SHA-3 standard
          selection process and is one of the fastest in the group,
          especially on x86 platforms. BLAKE is faster than SKEIN on a few
          platforms. SKEIN 512-256 is about 60% faster than SHA 512-256 on
          x64 platforms.
   '-F' - Perform Fixed Block Deduplication. This is faster than
          fingerprinting based content-aware deduplication in some cases,
          but it is mostly useful for disk dumps, especially virtual machine
          images. It generally gives a lower dedupe ratio than content-aware
          dedupe (-D) and does not support delta compression. (An
          illustrative sketch appears below.)
   '-M' - Display memory allocator statistics.
   '-C' - Display compression statistics.
</code></pre>
<p>NOTE: It is recommended not to use '-L' with libbsc compression since libbsc uses
LZP internally as well.</p>
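<p>For contrast with content-aware deduplication, here is an illustrative sketch of the fixed-block approach used by '-F'. The block size, hash function and table scheme are assumptions for illustration only:</p>
<pre><code>/* Illustrative fixed-block dedupe: hash every fixed-size block and
 * count blocks whose hash has already been seen. A real implementation
 * verifies block contents on a hash match and records references. */
#include &lt;stdint.h&gt;
#include &lt;stdlib.h&gt;

#define BLK   4096u          /* fixed block size (illustrative) */
#define TABSZ (1u &lt;&lt; 20)     /* hash table slots, power of two */

static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;       /* FNV-1a, 64-bit */
    while (n--) { h ^= *p++; h *= 0x100000001b3ULL; }
    return h;
}

size_t count_duplicate_blocks(const uint8_t *buf, size_t len)
{
    uint64_t *seen = calloc(TABSZ, sizeof (*seen));
    size_t off, dups = 0;

    if (seen == NULL)
        return 0;
    for (off = 0; off + BLK &lt;= len; off += BLK) {
        uint64_t h = fnv1a(buf + off, BLK) | 1;   /* 0 marks empty slots */
        uint32_t slot = (uint32_t)(h &amp; (TABSZ - 1));

        while (seen[slot] != 0 &amp;&amp; seen[slot] != h) /* linear probing */
            slot = (slot + 1) &amp; (TABSZ - 1);
        if (seen[slot] == h)
            dups++;               /* same hash seen before: duplicate */
        else
            seen[slot] = h;
    }
    free(seen);
    return dups;
}
</code></pre>
<p>With content-aware deduplication the block boundaries move with the data instead, which is why '-D' usually finds more duplicates in general data, while fixed blocks work well for aligned data such as virtual machine disk images.</p>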
<pre><code>Encryption flags:
   '-e' - Encrypt chunks with AES. The password can be prompted from the
          user or read from a file. Whether 128-bit or 256-bit keys are used
          depends on how the pcompress binary was built; the default build
          uses 128-bit keys. Unique keys are generated every time pcompress
          is run, even when given the same password. Enough information is
          stored in the compressed file so that the key used for the file
          can be re-created given the correct password.
          The Scrypt algorithm from Tarsnap
          (See: http://www.tarsnap.com/scrypt.html) is used for generating
          keys from passwords. The CTR mode AES mechanism from Tarsnap is
          also utilized.
   '-w' &lt;pathname&gt;
        - Provide a file which contains the encryption password. This file
          must be readable and writable since it is zeroed out after the
          password is read.
</code></pre>
<p>NOTE: When using pipe-mode via -p the only way to provide a password is to use '-w'.</p>
<h1>Environment Variables</h1>
<p>Set ALLOCATOR_BYPASS=1 in the environment to avoid using the built-in allocator. Because it rounds an allocation request up to the nearest slab size, the built-in allocator can allocate extra, unused memory. In addition, you may want to use a different allocator in your environment.</p>
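<p>The slab rounding works roughly as follows; the power-of-two slab policy shown here is an assumption for illustration, not necessarily the allocator's actual size classes:</p>
<pre><code>/* Illustrative slab rounding: a request is served from the smallest
 * slab that fits it, so the difference between the slab size and the
 * requested size is allocated but goes unused. */
#include &lt;stddef.h&gt;

size_t slab_size(size_t request)
{
    size_t s = 64;           /* smallest slab (illustrative) */

    while (s &lt; request)
        s &lt;&lt;= 1;
    return s;                /* e.g. a 70KB request gets a 128KB slab */
}
</code></pre>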
<h1>Examples</h1>
<p>Compress "file.tar" using bzip2 level 6, 64MB chunk size and use 4 threads. In addition perform identity deduplication and delta compression prior to compression.</p>
<pre><code>pcompress -D -E -c bzip2 -l6 -s64m -t4 file.tar
</code></pre>
<p>Compress "file.tar" using extreme compression mode of LZMA and a chunk size of of 1GB. Allow pcompress to detect the number of CPU cores and use as many threads.</p>
<pre><code>pcompress -c lzma -l14 -s1g file.tar
</code></pre>
<p>Compress "file.tar" using lz4 at max compression with LZ-Prediction pre-processing and encryption enabled. Chunksize is 100M:</p>
<pre><code>pcompress -c lz4 -l3 -e -L -s100m file.tar
</code></pre>
<h1>Compression Algorithms</h1>
<pre><code>LZFX   - Ultra fast, average compression. This algorithm is the fastest
         overall.
         Levels: 1 - 5
LZ4    - Very fast, better compression than LZFX.
         Levels: 1 - 3
Zlib   - Fast, better compression.
         Levels: 1 - 9
Bzip2  - Slow, much better compression than Zlib.
         Levels: 1 - 9
LZMA   - Very slow. Extreme compression.
         Levels: 1 - 14
         Up to level 9, standard LZMA parameters are used. Levels 10 - 12
         use more memory and higher match iterations, so they are slower.
         Levels 13 and 14 use larger dictionaries of up to 256MB and consume
         large amounts of RAM. Use these levels only if you have at least
         4GB of RAM on your system.
PPMD   - Slow. Extreme compression for text, average compression for binary
         data. PPMd decompression time is also high for large chunks. It
         requires lots of RAM, similar to LZMA.
         Levels: 1 - 14
Adapt  - Very slow synthetic mode. Both Bzip2 and PPMd are tried per chunk
         and the better result is selected.
         Levels: 1 - 14
Adapt2 - Ultra slow synthetic mode. Both LZMA and PPMd are tried per chunk
         and the better result is selected. Can give the best compression
         ratio when the file is split into multiple chunks.
         Levels: 1 - 14
         Since both LZMA and PPMd are used together, memory requirements
         are quite extensive, especially at the extreme levels above 10.
         For example, with a 64MB chunk size, level 14 and 2 threads, with
         or without dedupe, it uses up to 3.5GB of physical RAM and requires
         6GB of virtual memory space.
</code></pre>
<p>It is possible for a single chunk to span the entire file if enough RAM is available. However, for the adaptive modes to be effective on large files, especially multi-file archives, splitting into chunks is required so that the best compression algorithm can be selected for the textual and binary portions.</p>
<h1>Caveats</h1>
<p>This utility is not meant for resource constrained environments. Minimum memory usage (RES/RSS) with barely meaningful settings is around 10MB. This occurs when using the minimal LZFX compression algorithm at level 2 with a 1MB chunk size and running 2 threads.</p>
<p>Normally this utility requires lots of RAM, depending on the compression algorithm, the compression level, and whether dedupe is enabled. Larger chunk sizes can give a better compression ratio but also use more RAM.</p>
</section>
<footer>
<p>Project maintained by <a href="https://github.com/moinakg">moinakg</a></p>
<p><small>Hosted on GitHub Pages &mdash; Theme by <a href="http://twitter.com/#!/michigangraham">mattgraham</a></small></p>
</footer>
</div>
<!--[if !IE]><script>fixScale(document);</script><![endif]-->
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try {
var pageTracker = _gat._getTracker("UA-36422648-1");
pageTracker._trackPageview();
} catch(err) {}
</script>
</body>
</html>