Update README to reflect current features.
This commit is contained in:
parent 05a010a9dd
commit 117382c141

1 changed file with 12 additions and 10 deletions

 README.md | 22 ++++++++++++----------
@@ -6,16 +6,18 @@ Use is subject to license terms.
 
 Pcompress is a utility to do compression and decompression in parallel by
 splitting input data into chunks. It has a modular structure and includes
-support for multiple algorithms like LZMA, Bzip2, PPMD, etc., with CRC64
-chunk checksums. SSE optimizations for the bundled LZMA are included. It
-also implements chunk-level Content-Aware Deduplication and Delta
-Compression features based on a Semi-Rabin Fingerprinting scheme. Delta
-Compression is implemented via the widely popular bsdiff algorithm.
-Similarity is detected using a custom hashing of maximal features of a
-block. When doing chunk-level dedupe it attempts to merge adjacent
-non-duplicate blocks index entries into a single larger entry to reduce
-metadata. In addition to all these it can internally split chunks at
-rabin boundaries to help dedupe and compression.
+support for multiple algorithms like LZMA, Bzip2, PPMD, etc, with SKEIN
+checksums for data integrity. It can also do Lempel-Ziv pre-compression
+(derived from libbsc) to improve compression ratios across the board. SSE
+optimizations for the bundled LZMA are included. It also implements
+chunk-level Content-Aware Deduplication and Delta Compression features
+based on a Semi-Rabin Fingerprinting scheme. Delta Compression is done
+via the widely popular bsdiff algorithm. Similarity is detected using a
+custom hashing of maximal features of a block. When doing chunk-level
+dedupe it attempts to merge adjacent non-duplicate blocks index entries
+into a single larger entry to reduce metadata. In addition to all these it
+can internally split chunks at rabin boundaries to help dedupe and
+compression.
 
 It has low metadata overhead and overlaps I/O and compression to achieve
 maximum parallelism. It also bundles a simple slab allocator to speed