<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="chrome=1">
<title>Pcompress by moinakg</title>

<link rel="stylesheet" href="stylesheets/styles.css">
<link rel="stylesheet" href="stylesheets/pygment_trac.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script src="javascripts/main.js"></script>
<!--[if lt IE 9]>
<script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
<![endif]-->
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">

</head>
<body>

<header>
<h1>Pcompress</h1>
<p>A Parallel Compression and Deduplication utility</p>
</header>

<div id="banner">
<span id="logo"></span>

<a href="https://github.com/moinakg/pcompress" class="button fork"><strong>View On GitHub</strong></a>
<div class="downloads">
<span>Downloads:</span>
<ul>
<li><a href="https://github.com/moinakg/pcompress/zipball/master" class="button">ZIP</a></li>
<li><a href="https://github.com/moinakg/pcompress/tarball/master" class="button">TAR</a></li>
</ul>
</div>
</div><!-- end banner -->

<div class="wrapper">
<nav>
<ul></ul>
</nav>
<section>
<h1>Pcompress</h1>

<p>Copyright (C) 2012 Moinak Ghosh. All rights reserved.
Use is subject to license terms.
moinakg (_at) gma1l _dot com.
Comments, suggestions, code, rants etc are welcome.</p>

<p>Pcompress is a utility that performs compression and decompression in parallel by
splitting input data into chunks. It has a modular structure and includes
support for multiple algorithms like LZMA, Bzip2, PPMD, etc, with SKEIN
checksums for data integrity. It can also do Lempel-Ziv pre-compression
(derived from libbsc) to improve compression ratios across the board. SSE
optimizations for the bundled LZMA are included. It also implements
chunk-level Content-Aware Deduplication and Delta Compression features
based on a Semi-Rabin Fingerprinting scheme. Delta Compression is done
via the widely popular bsdiff algorithm. Similarity is detected using a
custom hashing of maximal features of a block. When doing chunk-level
dedupe it attempts to merge adjacent non-duplicate block index entries
into a single larger entry to reduce metadata. In addition, it can
internally split chunks at Rabin boundaries to help dedupe and
compression.</p>

<p>It has low metadata overhead and overlaps I/O and compression to achieve
maximum parallelism. It also bundles a simple slab allocator to speed up
repeated allocation of similar chunks. It can work in pipe mode, reading
from stdin and writing to stdout. It also provides adaptive compression
modes in which multiple algorithms are tried per chunk to determine the best
one for the given chunk. Finally, it supports 14 compression levels to allow
for ultra compression modes in some algorithms.</p>

<p>Pcompress also supports encryption via AES and uses Scrypt from Tarsnap
for Password Based Key generation.</p>

<p>NOTE: This utility is not an archiver. It compresses only single files or
data streams. To archive, use something else like tar, cpio or pax.</p>
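
<p>For example, a tar stream can be piped straight through pcompress in pipe
mode. This is an illustrative sketch only; the flag combination and the .pz
output name are assumptions, not fixed conventions:</p>

<pre><code>tar cf - mydir | pcompress -p -c lzma -l6 -s64m > mydir.tar.pz
</code></pre>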
<h1>Usage</h1>

<pre><code>To compress a file:
   pcompress -c <algorithm> [-l <compress level>] [-s <chunk size>] <file>
   Where <algorithm> can be the following:
   lzfx   - Very fast and small algorithm based on LZF.
   lz4    - Ultra fast, high-throughput algorithm reaching RAM B/W at level 1.
   zlib   - The base Zlib format compression (not Gzip).
   lzma   - The LZMA (Lempel-Ziv Markov) algorithm from 7Zip.
   lzmaMt - Multithreaded version of LZMA. This is a faster version but
            uses more memory for the dictionary. Thread count is balanced
            between chunk processing threads and algorithm threads.
   bzip2  - Bzip2 Algorithm from libbzip2.
   ppmd   - The PPMd algorithm, excellent for textual data. PPMd requires
            at least 64MB X CPUs more memory than the other modes.

   libbsc - A Block Sorting Compressor using the Burrows Wheeler Transform
            like Bzip2 but runs faster and gives better compression than
            Bzip2 (See: libbsc.com).

   adapt  - Adaptive mode where ppmd or bzip2 will be used per chunk,
            depending on heuristics. If at least 50% of the input data is
            7-bit text then PPMd will be used, otherwise Bzip2.
   adapt2 - Adaptive mode which includes ppmd and lzma. If at least 80% of
            the input data is 7-bit text then PPMd will be used, otherwise
            LZMA. It has significantly more memory usage than adapt.
   none   - No compression. This is only meaningful with -D and -E so Dedupe
            can be done for post-processing with an external utility.
   <chunk_size> - This can be in bytes or can use the following suffixes:
            g - Gigabyte, m - Megabyte, k - Kilobyte.
            Larger chunks produce better compression at the cost of memory.
   <compress_level> - Can be a number from 0 meaning minimum and 14 meaning
            maximum compression.
</code></pre>

<p>NOTE: The option "libbsc" uses Ilya Grebnov's block sorting compression library
from <a href="http://libbsc.com/">http://libbsc.com/</a>. It is only available if pcompress is built with
that library. See the INSTALL file for details.</p>
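
<p>Assuming pcompress was built with libbsc support, an invocation might look
like this (the level and chunk size are arbitrary illustrative values):</p>

<pre><code>pcompress -c libbsc -l6 -s64m file.tar
</code></pre>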

<pre><code>To decompress a file compressed using the above command:
   pcompress -d <compressed file> <target file>

To operate as a pipe, read from stdin and write to stdout:
   pcompress -p ...

Attempt Rabin fingerprinting based deduplication on chunks:
   pcompress -D ...
   pcompress -D -r ... - Do NOT split chunks at a Rabin boundary. Default
                         is to split.

Perform Delta Encoding in addition to Identical Dedup:
   pcompress -E ...  - This also implies '-D'. This performs Delta Compression
                       between 2 blocks if they are 40% to 60% similar. The
                       similarity percentage is selected based on the dedupe
                       block size to balance performance and effectiveness.
   pcompress -EE ... - This causes Delta Compression to happen if 2 blocks are
                       at least 40% similar regardless of block size. This can
                       yield a greater final compression ratio at the cost of
                       higher processing overhead.

Number of threads can optionally be specified: -t <1 - 256 count>
Other flags:
   '-L' - Enable LZP pre-compression. This improves the compression ratio of all
          algorithms with some extra CPU and very low RAM overhead. Using
          delta encoding in conjunction with this may not always be beneficial.
   '-S' <cksum>
        - Specify the chunk checksum to use: CRC64, SKEIN256, SKEIN512, SHA256 and
          SHA512. The default is SKEIN256. The implementation actually uses SKEIN
          512-256. This is 25% slower than simple CRC64 but is many times more
          robust than CRC64 in detecting data integrity errors. SKEIN is a
          finalist in the NIST SHA-3 standard selection process and is one of
          the fastest in the group, especially on x86 platforms. BLAKE is faster
          than SKEIN on a few platforms.
          SKEIN 512-256 is about 60% faster than SHA 512-256 on x64 platforms.

   '-F' - Perform Fixed Block Deduplication. This is faster than fingerprinting
          based content-aware deduplication in some cases. However it is mostly
          useful for disk dumps, especially virtual machine images. It generally
          gives a lower dedupe ratio than content-aware dedupe (-D) and does not
          support delta compression.
   '-M' - Display memory allocator statistics.
   '-C' - Display compression statistics.
</code></pre>

<p>NOTE: It is recommended not to use '-L' with libbsc compression since libbsc uses
LZP internally as well.</p>
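
<p>Putting several of these flags together, the following illustrative run
combines content-aware dedupe with an explicit SHA256 chunk checksum and 4
threads (all values are arbitrary choices for the example):</p>

<pre><code>pcompress -D -c lzma -l8 -S SHA256 -s32m -t4 file.tar
</code></pre>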
<pre><code>Encryption flags:
   '-e'  Encrypt chunks with AES. The password can be prompted from the user
         or read from a file. Whether 128-Bit or 256-Bit keys are used depends
         on how the pcompress binary was built. The default build uses 128-Bit
         keys. Unique keys are generated every time pcompress is run even when
         given the same password. Of course, enough info is stored in the
         compressed file so that the key used for the file can be re-created
         given the correct password.

         The Scrypt algorithm from Tarsnap
         (See: http://www.tarsnap.com/scrypt.html) is used for generating keys
         from passwords. The CTR mode AES mechanism from Tarsnap is also
         utilized.

   '-w <pathname>'
         Provide a file which contains the encryption password. This file must
         be readable and writable since it is zeroed out after the password is
         read.
</code></pre>

<p>NOTE: When using pipe mode via -p the only way to provide a password is to use '-w'.</p>
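
<p>For instance, to encrypt in pipe mode the password must come from a file
via '-w'. A sketch under these assumptions (the path below is hypothetical,
and the file is zeroed out after the password is read):</p>

<pre><code>tar cf - mydir | pcompress -p -c lzma -l6 -e -w /tmp/passfile -s64m > mydir.tar.pz
</code></pre>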
<h1>Environment Variables</h1>

<p>Set ALLOCATOR_BYPASS=1 in the environment to avoid using the built-in
allocator. Due to the way it rounds up an allocation request to the nearest
slab, the built-in allocator can allocate extra unused memory. In addition, you
may want to use a different allocator in your environment.</p>
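
<p>For example, to bypass the built-in allocator for a single run (the
compression flags here are illustrative):</p>

<pre><code>ALLOCATOR_BYPASS=1 pcompress -c lz4 -l2 -s8m file.tar
</code></pre>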
<h1>Examples</h1>

<p>Compress "file.tar" using bzip2 level 6, a 64MB chunk size and 4 threads. In
addition, perform identity deduplication and delta compression prior to compression.</p>

<pre><code>pcompress -D -E -c bzip2 -l6 -s64m -t4 file.tar
</code></pre>

<p>Compress "file.tar" using the extreme compression mode of LZMA and a chunk size
of 1GB. Allow pcompress to detect the number of CPU cores and use that many threads.</p>

<pre><code>pcompress -c lzma -l14 -s1g file.tar
</code></pre>

<p>Compress "file.tar" using lz4 at max compression with LZ-Prediction pre-processing
and encryption enabled. The chunk size is 100MB:</p>

<pre><code>pcompress -c lz4 -l3 -e -L -s100m file.tar
</code></pre>
<h1>Compression Algorithms</h1>

<p>LZFX  - Ultra Fast, average compression. This algorithm is the fastest overall.
        Levels: 1 - 5
LZ4   - Very Fast, better compression than LZFX.
        Levels: 1 - 3
Zlib  - Fast, better compression.
        Levels: 1 - 9
Bzip2 - Slow, much better compression than Zlib.
        Levels: 1 - 9</p>

<p>LZMA - Very slow. Extreme compression.
       Levels: 1 - 14
       Up to level 9 standard LZMA parameters are used. Levels 10 - 12 use
       more memory and higher match iterations so are slower. Levels
       13 and 14 use larger dictionaries of up to 256MB and really suck up
       RAM. Use these levels only if you have at minimum 4GB of RAM on
       your system.</p>

<p>PPMD - Slow. Extreme compression for text, average compression for binary.
       In addition, PPMD decompression time is also high for large chunks.
       This requires lots of RAM, similar to LZMA.
       Levels: 1 - 14.</p>

<p>Adapt  - Very slow synthetic mode. Both Bzip2 and PPMD are tried per chunk and
         the better result is selected.
         Levels: 1 - 14
Adapt2 - Ultra slow synthetic mode. Both LZMA and PPMD are tried per chunk and
         the better result is selected. Can give the best compression ratio when
         splitting a file into multiple chunks.
         Levels: 1 - 14
         Since both LZMA and PPMD are used together, memory requirements are
         quite extensive, especially if you are also using extreme levels above
         10. For example with a 64MB chunk, level 14, 2 threads and with or
         without dedupe, it uses up to 3.5GB of physical RAM and requires 6GB
         of virtual memory space.</p>
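
<p>As a sketch, the 64MB chunk, level 14, 2-thread scenario above would
correspond to an invocation along these lines (the filename is hypothetical):</p>

<pre><code>pcompress -c adapt2 -l14 -s64m -t2 archive.tar
</code></pre>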

<p>It is possible for a single chunk to span the entire file if enough RAM is
available. However, for adaptive modes to be effective on large files, especially
multi-file archives, splitting into chunks is required so that the best compression
algorithm can be selected for the textual and binary portions.</p>
<h1>Caveats</h1>

<p>This utility is not meant for resource-constrained environments. Minimum memory
usage (RES/RSS) with barely meaningful settings is around 10MB. This occurs when
using the minimal LZFX compression algorithm at level 2 with a 1MB chunk size and
running 2 threads.
Normally this utility requires lots of RAM depending on the compression algorithm,
the compression level, and whether dedupe is enabled. Larger chunk sizes can give
a better compression ratio but at the same time use more RAM.</p>
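
<p>That minimal configuration corresponds to an invocation like this (the
filename is hypothetical):</p>

<pre><code>pcompress -c lzfx -l2 -s1m -t2 file.tar
</code></pre>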
</section>
<footer>
<p>Project maintained by <a href="https://github.com/moinakg">moinakg</a></p>
<p><small>Hosted on GitHub Pages — Theme by <a href="http://twitter.com/#!/michigangraham">mattgraham</a></small></p>
</footer>
</div>
<!--[if !IE]><script>fixScale(document);</script><![endif]-->

</body>
</html>