Pcompress
=========

Copyright (C) 2012-2013 Moinak Ghosh. All rights reserved.
Use is subject to license terms.
moinakg (_at) gma1l _dot com.

Comments, suggestions, code, rants etc. are welcome.

Pcompress is a utility to do compression and decompression in parallel by
splitting input data into chunks. It has a modular structure and includes
support for multiple algorithms like LZMA, Bzip2, PPMd, etc., with SKEIN/
SHA checksums for data integrity. It can also do Lempel-Ziv pre-compression
(LZP, derived from libbsc) to improve compression ratios across the board.
SSE optimizations for the bundled LZMA are included. It also implements
Variable Block Deduplication and Delta Compression features based on a
Semi-Rabin Fingerprinting scheme. Delta Compression is done via the widely
popular bsdiff algorithm. Similarity is detected using a technique based on
MinHashing. When doing Dedupe it attempts to merge adjacent non-duplicate
block index entries into a single larger entry to reduce metadata. In
addition it can internally split chunks at Rabin boundaries to help Dedupe
and compression.

It has low metadata overhead and overlaps I/O and compression to achieve
maximum parallelism. It also bundles a simple slab allocator to speed up
repeated allocation of similar-sized chunks. It can work in pipe mode,
reading from stdin and writing to stdout. It also provides adaptive
compression modes in which data analysis heuristics are used to identify a
near-optimal algorithm per chunk. Finally, it supports 14 compression
levels to allow for ultra compression modes in some algorithms.

Pcompress also supports encryption via AES and uses Scrypt from Tarsnap
for password-based key generation. A unique key is generated per session
even if the same password is used, and HMAC is used for authentication.

NOTE: This utility is NOT an archiver. It compresses only single files or
datastreams. To archive, use something else like tar, cpio or pax.

Links of Interest
=================

Project Home Page: http://moinakg.github.io/pcompress/

http://moinakg.github.io/pcompress/#deduplication-chunking-analysis

http://moinakg.github.io/pcompress/#compression-benchmarks

http://moinakg.wordpress.com/2013/04/26/pcompress-2-0-with-global-deduplication/

http://moinakg.wordpress.com/2013/03/26/coordinated-parallelism-using-semaphores/

http://moinakg.wordpress.com/2013/06/11/architecture-for-a-deduplicated-archival-store-part-1/

http://moinakg.wordpress.com/2013/06/15/architecture-for-a-deduplicated-archival-store-part-2/

Usage
=====

To compress a file:
    pcompress -c <algorithm> [-l <compress level>] [-s <chunk size>] <file> [-]

Where <algorithm> can be the following:

lzfx   - Very fast and small algorithm based on LZF.
lz4    - Ultra fast, high-throughput algorithm reaching RAM B/W at level 1.
zlib   - The base Zlib format compression (not Gzip).
lzma   - The LZMA (Lempel-Ziv Markov) algorithm from 7Zip.
lzmaMt - Multithreaded version of LZMA. This is a faster version but
         uses more memory for the dictionary. Thread count is balanced
         between chunk processing threads and algorithm threads.
bzip2  - Bzip2 algorithm from libbzip2.
ppmd   - The PPMd algorithm, excellent for textual data. PPMd requires
         at least 64MB X CPUs more memory than the other modes.

libbsc - A Block Sorting Compressor using the Burrows-Wheeler Transform
         like Bzip2, but it runs faster and gives better compression than
         Bzip2 (see: libbsc.com).

adapt  - Adaptive mode where ppmd or bzip2 will be used per chunk,
         depending on heuristics. If at least 50% of the input data is
         7-bit text then PPMd will be used, otherwise Bzip2.
adapt2 - Adaptive mode which includes ppmd and lzma. If at least 80% of
         the input data is 7-bit text then PPMd will be used, otherwise
         LZMA. It has significantly higher memory usage than adapt.
none   - No compression. This is only meaningful with -D and -E so Dedupe
         can be done for post-processing with an external utility.

<chunk_size>     - This can be in bytes or can use the following suffixes:
                   g - Gigabyte, m - Megabyte, k - Kilobyte.
                   Larger chunks produce better compression at the cost of
                   memory. In case of Global Deduplication (see below) this
                   chunk size is just a hint and may get automatically
                   adjusted.
<compress_level> - Can be a number from 0 meaning minimum to 14 meaning
                   maximum compression.
'-'              - If '-' is given as the final argument then the
                   compressed output goes to stdout.
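
For example, a sketch of streaming compressed output over the network using
the trailing '-' (file and host names are illustrative):

    pcompress -c zlib -l6 -s16m file.tar - | ssh host 'cat > file.tar.pz'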

NOTE: The option "libbsc" uses Ilya Grebnov's block sorting compression
      library from http://libbsc.com/ . It is only available if pcompress
      is built with that library. See the INSTALL file for details.

To decompress a file compressed using the above command:
    pcompress -d <compressed file> <target file>

<compressed file> can be '-' to indicate reading from stdin while the
output goes to <target file>.
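
For example, a sketch of decompressing a stream received over the network
(names are illustrative):

    ssh host 'cat file.tar.pz' | pcompress -d - file.tar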

To operate as a full pipe, read from stdin and write to stdout:
    pcompress -p ...
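
A minimal pipe-mode sketch, assuming '-p' accepts the same algorithm and
chunk size options as regular compression (names are illustrative):

    tar -cf - /some/dir | pcompress -p -c lzma -l6 -s64m > dir.tar.pz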

Attempt Rabin fingerprinting based deduplication on chunks:
    pcompress -D ...
    pcompress -D -r ... - Do NOT split chunks at a rabin boundary. Default
                          is to split.

Perform Delta Encoding in addition to Identical Dedup:
    pcompress -E ...  - This also implies '-D'. This performs Delta
                        Compression between 2 blocks if they are 40% to 60%
                        similar. The similarity percentage is selected
                        based on the dedupe block size to balance
                        performance and effectiveness.
    pcompress -EE ... - This causes Delta Compression to happen if 2 blocks
                        are at least 40% similar regardless of block size.
                        This can achieve a greater final compression ratio
                        at the cost of higher processing overhead.
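
For example, a sketch of aggressive delta compression using the flags
documented above (file name illustrative):

    pcompress -EE -c lzma -l6 -s64m file.tar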

Number of threads can optionally be specified: -t <1 - 256 count>

Other flags:

'-L' - Enable LZP pre-compression. This improves the compression ratio of
       all algorithms with some extra CPU and very low RAM overhead. Using
       Delta Encoding in conjunction with this may not always be
       beneficial. However, Adaptive Delta Encoding is beneficial along
       with this.

'-P' - Enable Adaptive Delta Encoding. It can improve the compression
       ratio further for data containing tables of numerical values,
       especially if those are in an arithmetic series. In this
       implementation basic Delta Encoding is combined with Run-Length
       Encoding and a Matrix Transpose.

NOTE: Both -L and -P can be used together to give maximum benefit on most
      datasets.
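
For example, a sketch combining both pre-processors (file name
illustrative):

    pcompress -c lzma -l10 -L -P -s64m numbers.csv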

'-S' <cksum>
     - Specify the chunk checksum to use:

       CRC64     - Extremely fast 64-bit CRC from the LZMA SDK.
       SHA256    - SHA512/256 version of Intel's optimized (SSE,AVX) SHA2
                   for x86.
       SHA512    - SHA512 version of Intel's optimized (SSE,AVX) SHA2 for
                   x86.
       KECCAK256 - Official 256-bit NIST SHA3 optimized implementation.
       KECCAK512 - Official 512-bit NIST SHA3 optimized implementation.
       BLAKE256  - Very fast 256-bit BLAKE2, derived from the NIST SHA3
                   runner-up BLAKE.
       BLAKE512  - Very fast 512-bit BLAKE2, derived from the NIST SHA3
                   runner-up BLAKE.
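
For example, a sketch selecting one of the listed chunk checksums (file
name illustrative):

    pcompress -c lz4 -l3 -S BLAKE256 -s16m file.tar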

'-F' - Perform Fixed-Block Deduplication. This is faster than
       fingerprinting based content-aware deduplication in some cases.
       However it is mostly usable for disk dumps, especially virtual
       machine images. It generally gives a lower dedupe ratio than
       content-aware dedupe (-D) and does not support delta compression.

'-B' <1..5>
     - Specify an average Dedupe block size: 1 - 4K, 2 - 8K ... 5 - 64K.

'-M' - Display memory allocator statistics.

'-C' - Display compression statistics.
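
For example, a sketch of fixed-block dedupe with an 8K average block size
and both statistics displays enabled (file name illustrative):

    pcompress -F -B 2 -M -C -c zlib -l6 -s32m disk.img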

Global Deduplication:

'-G' - This flag enables Global Deduplication. It makes pcompress maintain
       an in-memory index of cryptographic block hashes to look up
       duplicates. Once a duplicate is found it is replaced with a
       reference to the original block. This allows detecting and
       eliminating duplicate blocks across the entire dataset. In
       contrast, using only the '-D' or '-F' flag does deduplication only
       within a chunk, but uses less memory and is much faster than Global
       Dedupe.

       The '-G' flag can be combined with either the '-D' or '-F' flag to
       indicate Rabin chunking or fixed chunking respectively. If neither
       is specified then the default is to assume Rabin chunking via '-D'.
       All other Dedupe flags have the same meanings in this context.

       Delta Encoding is not supported with Global Deduplication at this
       time. The in-memory hashtable index can use up to 75% of free RAM
       depending on the size of the dataset. In pipe mode the index will
       always use 75% of free RAM since the dataset size is not known.
       This is the simple full block index mode. If the available RAM is
       not enough to hold all block checksums then older block entries
       are discarded automatically from the matching hash slots.

       If pipe mode is not used and the given dataset is a file then
       pcompress checks whether the index size will exceed three times
       75% of the available free RAM. In that case it automatically
       switches to a Segmented Deduplication mode. Here data is first
       split into blocks as above. Then up to 2048 blocks are grouped
       together to form a larger segment. The individual block hashes for
       a segment are stored in a tempfile on disk. A few min-value hashes
       are then computed from the block hashes of the segment and loaded
       into the index. These hashes are used to detect segments that are
       approximately similar to each other. Once found, the block hashes
       of the matching segments are loaded from the tempfile and the
       actual deduplication is performed. This allows the in-memory index
       size to be approximately 0.0025% of the total dataset size and
       requires very few disk reads for every 2048 blocks processed.

       In pipe mode Global Deduplication always uses a segmented
       similarity based index. It allows efficient network transfer of
       large data.
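
A hedged example of Global Deduplication with Rabin chunking, using the
flags documented above (file name illustrative):

    pcompress -G -D -c lzma -l6 -s64m backup.tar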

Encryption flags:

'-e <ALGO>'
       Encrypt chunks using the given encryption algorithm. The ALGO
       parameter can be one of AES or SALSA20. Both are used in CTR
       stream encryption mode.

       The password can be prompted from the user or read from a file.
       Unique keys are generated every time pcompress is run even when
       given the same password. Of course enough info is stored in the
       compressed file so that the key used for the file can be
       re-created given the correct password.

       The default key length is 256 bits but can be reduced to 128 bits
       using the '-k' option.

       The Scrypt algorithm from Tarsnap
       (see: http://www.tarsnap.com/scrypt.html) is used for generating
       keys from passwords. The CTR mode AES mechanism from Tarsnap is
       also utilized.

'-w <pathname>'
       Provide a file which contains the encryption password. This file
       must be readable and writable since it is zeroed out after the
       password is read.

'-k <key length>'
       Specify the key length. Can be 16 for 128-bit keys or 32 for
       256-bit keys. The default value is 32, for 256-bit keys.

NOTE: When using pipe mode via -p the only way to provide a password is to
      use '-w'.
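
A hedged encryption example, assuming the password is stored in a file
named 'pwfile' (which pcompress zeroes out after reading):

    pcompress -c lzma -l8 -e AES -k 16 -w pwfile -s32m file.tar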

Environment Variables
=====================

Set ALLOCATOR_BYPASS=1 in the environment to avoid using the built-in
allocator. Due to the way it rounds up an allocation request to the
nearest slab, the built-in allocator can allocate extra unused memory. In
addition you may want to use a different allocator in your environment.

The variable PCOMPRESS_INDEX_MEM can be set to limit the memory used by
the Global Deduplication Index. The number specified is in multiples of a
megabyte.

The variable PCOMPRESS_CACHE_DIR can point to a directory where some
temporary files relating to the Global Deduplication process can be
stored. This can, for example, be a directory on a Solid State Drive to
speed up Global Deduplication. The space used in this directory is
proportional to the size of the dataset being processed and is slightly
more than 8KB for every 1MB of data.
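
For example, a sketch limiting the index to 1GB and placing the dedupe
tempfiles on an SSD (paths and values are illustrative):

    export PCOMPRESS_INDEX_MEM=1024
    export PCOMPRESS_CACHE_DIR=/mnt/ssd/tmp
    pcompress -G -D -c lzma -l6 -s64m backup.tar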

The default checksum used for block hashes during Global Deduplication is
SHA256. However this can be changed by setting the
PCOMPRESS_CHUNK_HASH_GLOBAL environment variable. The list of allowed
checksums for this is:

    SHA256   , SHA512
    KECCAK256, KECCAK512
    BLAKE256 , BLAKE512
    SKEIN256 , SKEIN512
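
For example, a sketch selecting one of the listed block hashes (file name
illustrative):

    PCOMPRESS_CHUNK_HASH_GLOBAL=BLAKE256 pcompress -G -c lz4 -l3 -s64m file.tar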

Even though SKEIN is not supported as a chunk checksum (not deemed
necessary because BLAKE2 is available) it can be used as a dedupe block
checksum. One may ask why. The reasoning is that we depend on hashes to
find duplicate blocks. SHA256 is the default because it is known to be
robust, unbroken to date and proven in the field. However one may want a
faster alternative, so we have choices from the NIST SHA3 finalists in
the form of SKEIN and BLAKE, which are neck and neck with SKEIN getting
an edge. SKEIN and BLAKE have seen extensive cryptanalysis in the
intervening years and remain unbroken, with only marginal theoretical
issues found. BLAKE2 is a derivative of BLAKE and is tremendously fast,
but has not seen much specific cryptanalysis as yet, even though it is
not new but just a performance-optimized derivative. So cryptanalysis
that applies to BLAKE should also apply to and justify BLAKE2. However
the paranoid may well trust SKEIN a bit more than BLAKE2, and SKEIN,
while not as fast as BLAKE2, is still a lot faster than SHA2.

Examples
========

Compress "file.tar" using bzip2 level 6, a 64MB chunk size and 4 threads.
In addition, perform identity deduplication and delta compression prior
to compression:

    pcompress -D -E -c bzip2 -l6 -s64m -t4 file.tar

Compress "file.tar" using the extreme compression mode of LZMA and a
chunk size of 1GB. Allow pcompress to detect the number of CPU cores and
use that many threads:

    pcompress -c lzma -l14 -s1g file.tar

Compress "file.tar" using lz4 at max compression with LZ-Prediction
pre-processing and encryption enabled. Chunk size is 100M:

    pcompress -c lz4 -l3 -e AES -L -s100m file.tar

Compression Algorithms
======================

LZFX  - Ultra Fast, average compression. This algorithm is the fastest
        overall.
        Levels: 1 - 5
LZ4   - Very Fast, better compression than LZFX.
        Levels: 1 - 3
Zlib  - Fast, better compression.
        Levels: 1 - 9
Bzip2 - Slow, much better compression than Zlib.
        Levels: 1 - 9

LZMA  - Very slow. Extreme compression.
        Levels: 1 - 14
        Up to level 9 standard LZMA parameters are used. Levels 10 - 12
        use more memory and higher match iterations so they are slower.
        Levels 13 and 14 use larger dictionaries of up to 256MB and
        really suck up RAM. Use these levels only if you have at the
        minimum 4GB of RAM on your system.

PPMD  - Slow. Extreme compression for text, average compression for
        binary. In addition, PPMd decompression time is also high for
        large chunks. This requires lots of RAM, similar to LZMA.
        Levels: 1 - 14

Adapt  - Synthetic mode with text/binary detection. For pure text data
         PPMd is used, otherwise Bzip2 is selected per chunk.
         Levels: 1 - 14
Adapt2 - Slower synthetic mode. For pure text data PPMd is used,
         otherwise LZMA is applied. Can give a very good compression
         ratio when splitting a file into multiple chunks.
         Levels: 1 - 14
         Since both LZMA and PPMd are used together, memory requirements
         are large, especially if you are also using extreme levels above
         10. For example with 100MB chunks, level 14, 2 threads and with
         or without dedupe, it uses up to 2.5GB of physical RAM (RSS).

It is possible for a single chunk to span the entire file if enough RAM
is available. However, for the adaptive modes to be effective on large
files, especially multi-file archives, splitting into chunks is required
so that the best compression algorithm can be selected for textual and
binary portions.

Pre-Processing Algorithms
=========================
As can be seen above, a multitude of pre-processing algorithms are
available that provide further compression effectiveness beyond what the
usual compression algorithms can achieve by themselves. These are
summarized below:

1) Deduplication     : Per-chunk (or per-segment) deduplication based on
                       Rabin fingerprinting.

2) Delta Compression : A similarity based (MinHash) comparison of Rabin
                       blocks. Two blocks at least 60% similar to each
                       other are diffed using bsdiff.

3) LZP               : LZ Prediction is a variant of LZ77 that replaces
                       repeating runs of text with shorter codes.

4) Adaptive Delta    : This is a simple form of Delta Encoding where
                       arithmetic progressions are detected in the data
                       stream and collapsed via Run-Length Encoding.

5) Matrix Transpose  : This is used automatically in Delta Encoding and
                       Deduplication. It attempts to transpose columnar
                       repeating sequences of bytes into row-wise
                       sequences so that compression algorithms can work
                       better.

Memory Usage
============
As can be seen from the above, memory usage can vary greatly based on the
compression/pre-processing algorithms and the chunk size. A variety of
configurations are possible depending on resource availability in the
system.

The minimum possible meaningful settings, while still giving about 50%
compression ratio and very high speed, use the LZFX algorithm with a 1MB
chunk size and 2 threads:

    pcompress -c lzfx -l2 -s1m -t2 <file>

This uses about 6MB of physical RAM (RSS). Earlier versions of the
utility before the 0.9 release consumed much more memory. This was
improved in later versions. When using Linux the virtual memory
consumption may appear to be very high, but that is just address space
usage rather than actual RAM and should be ignored. Only the RSS matters.
This is a result of the memory arena mechanism in Glibc that improves
malloc() performance for multi-threaded applications.