bzip2(1) bzip2(1)
NAME
bzip2, bunzip2 - a block-sorting file compressor, v0.9.0
bzcat - decompresses files to stdout
bzip2recover - recovers data from damaged bzip2 files
SYNOPSIS
bzip2 [ -cdfkstvzVL123456789 ] [ filenames ... ]
bunzip2 [ -fkvsVL ] [ filenames ... ]
bzcat [ -s ] [ filenames ... ]
bzip2recover filename
DESCRIPTION
bzip2 compresses files using the Burrows-Wheeler block-
sorting text compression algorithm, and Huffman coding.
Compression is generally considerably better than that
achieved by more conventional LZ77/LZ78-based compressors,
and approaches the performance of the PPM family of sta-
tistical compressors.
The command-line options are deliberately very similar to
those of GNU Gzip, but they are not identical.
bzip2 expects a list of file names to accompany the com-
mand-line flags. Each file is replaced by a compressed
version of itself, with the name "original_name.bz2".
Each compressed file has the same modification date and
permissions as the corresponding original, so that these
properties can be correctly restored at decompression
time. File name handling is naive in the sense that there
is no mechanism for preserving original file names, per-
missions and dates in filesystems which lack these con-
cepts, or have serious file name length restrictions, such
as MS-DOS.
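For example (the file name here is illustrative), the command

     bzip2 data.txt

replaces data.txt with a compressed file named data.txt.bz2;
adding the -k flag keeps the original file as well.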
bzip2 and bunzip2 will by default not overwrite existing
files; if you want this to happen, specify the -f flag.
If no file names are specified, bzip2 compresses from
standard input to standard output. In this case, bzip2
will decline to write compressed output to a terminal, as
this would be entirely incomprehensible and therefore
pointless.
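For instance, a sketch of compressing a stream using
redirection alone (the file names are illustrative):

     bzip2 < notes.txt > notes.txt.bz2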
bunzip2 (or bzip2 -d) decompresses and restores all spec-
ified files whose names end in ".bz2". Files without this
suffix are ignored. Again, supplying no filenames causes
decompression from standard input to standard output.
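For example (the file name is illustrative),

     bunzip2 data.txt.bz2

restores data.txt and, unless -k is given, removes the
compressed file.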
bunzip2 will correctly decompress a file which is the con-
catenation of two or more compressed files. The result is
the concatenation of the corresponding uncompressed files.
Integrity testing (-t) of concatenated compressed files is
also supported.
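As an illustrative sketch, if a.bz2 and b.bz2 were created
separately, then

     cat a.bz2 b.bz2 > both.bz2
     bunzip2 both.bz2

produces a file named "both" holding the uncompressed
contents of a followed by those of b.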
You can also compress or decompress files to the standard
output by giving the -c flag. Multiple files may be com-
pressed and decompressed like this. The resulting outputs
are fed sequentially to stdout. Compression of multiple
files in this manner generates a stream containing multi-
ple compressed file representations. Such a stream can be
decompressed correctly only by bzip2 version 0.9.0 or
later. Earlier versions of bzip2 will stop after decom-
pressing the first file in the stream.
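A minimal sketch (file names illustrative): build such a
stream by appending with -c, then decompress it in one go:

     bzip2 -c part1 >  all.bz2
     bzip2 -c part2 >> all.bz2
     bzip2 -dc all.bz2 > all_parts

With version 0.9.0 or later, all_parts contains part1
followed by part2.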
bzcat (or bzip2 -dc) decompresses all specified files to
the standard output.
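For example (the file name is illustrative), to read a
compressed file without leaving a decompressed copy on disk:

     bzcat notes.bz2 | less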
Compression is always performed, even if the compressed
file is slightly larger than the original. Files of less
than about one hundred bytes tend to get larger, since the
compression mechanism has a constant overhead in the
region of 50 bytes. Random data (including the output of
most file compressors) is coded at about 8.05 bits per
byte, giving an expansion of around 0.5%.
As a self-check for your protection, bzip2 uses 32-bit
CRCs to make sure that the decompressed version of a file
is identical to the original. This guards against corrup-
tion of the compressed data, and against undetected bugs
in bzip2 (hopefully very unlikely). The chance of data
corruption going undetected is microscopic, about one
chance in four billion for each file processed. Be aware,
though, that the check occurs upon decompression, so it
can only tell you that something is wrong. It can't
help you recover the original uncompressed data. You can
use bzip2recover to try to recover data from damaged
files.
Return values: 0 for a normal exit, 1 for environmental
problems (file not found, invalid flags, I/O errors, &c),
2 to indicate a corrupt compressed file, 3 for an internal
consistency error (eg, bug) which caused bzip2 to panic.
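For example, a small shell fragment (the file name is
illustrative) that uses the exit status to detect a corrupt
archive:

     bzip2 -t archive.bz2
     if [ $? -eq 2 ]; then
          echo "archive.bz2 is corrupt"
     fi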
MEMORY MANAGEMENT
Bzip2 compresses large files in blocks. The block size
affects both the compression ratio achieved, and the
amount of memory needed both for compression and decom-
pression. The flags -1 through -9 specify the block size
to be 100,000 bytes through 900,000 bytes (the default)
respectively. At decompression-time, the block size used
for compression is read from the header of the compressed
file, and bunzip2 then allocates itself just enough memory
to decompress the file. Since block sizes are stored in
compressed files, it follows that the flags -1 to -9 are
irrelevant to, and so ignored during, decompression.
Compression and decompression requirements, in bytes, can
be estimated as:
Compression: 400k + ( 7 x block size )
Decompression: 100k + ( 4 x block size ), or
100k + ( 2.5 x block size )
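For example, with the default 900k block size (flag -9)
these formulas give approximately

     Compression:     400k + 7 x 900k   = 6700 kbytes
     Decompression:   100k + 4 x 900k   = 3700 kbytes, or
                      100k + 2.5 x 900k = 2350 kbytes

which agrees with the -9 row of the table below.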
Larger block sizes give rapidly diminishing marginal
returns; most of the compression comes from the first two
or three hundred k of block size, a fact worth bearing in
mind when using bzip2 on small machines. It is also
important to appreciate that the decompression memory
requirement is set at compression-time by the choice of
block size.
For files compressed with the default 900k block size,
bunzip2 will require about 3700 kbytes to decompress. To
support decompression of any file on a 4 megabyte machine,
bunzip2 has an option to decompress using approximately
half this amount of memory, about 2300 kbytes. Decompres-
sion speed is also halved, so you should use this option
only where necessary. The relevant flag is -s.
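For example (the file name is illustrative),

     bunzip2 -s big.bz2

decompresses big.bz2 in roughly 2300 kbytes of memory,
whatever block size was used to create it.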
In general, try and use the largest block size memory con-
straints allow, since that maximises the compression
achieved. Compression and decompression speed are virtu-
ally unaffected by block size.
Another significant point applies to files which fit in a
single block -- that means most files you'd encounter
using a large block size. The amount of real memory
touched is proportional to the size of the file, since the
file is smaller than a block. For example, compressing a
file 20,000 bytes long with the flag -9 will cause the
compressor to allocate around 6700k of memory, but only
touch 400k + 20000 * 7 = 540 kbytes of it. Similarly, the
decompressor will allocate 3700k but only touch 100k +
20000 * 4 = 180 kbytes.
Here is a table which summarises the maximum memory usage
for different block sizes. Also recorded is the total
compressed size for 14 files of the Calgary Text Compres-
sion Corpus totalling 3,141,622 bytes. This column gives
some feel for how compression varies with block size.
These figures tend to understate the advantage of larger
block sizes for larger files, since the Corpus is domi-
nated by smaller files.
         Compress   Decompress   Decompress   Corpus
Flag        usage        usage     -s usage     Size

 -1         1100k         500k         350k   914704
 -2         1800k         900k         600k   877703
 -3         2500k        1300k         850k   860338
 -4         3200k        1700k        1100k   846899
 -5         3900k        2100k        1350k   845160
 -6         4600k        2500k        1600k   838626
 -7         5400k        2900k        1850k   834096
 -8         6000k        3300k        2100k   828642
 -9         6700k        3700k        2350k   828642
OPTIONS
-c --stdout
Compress or decompress to standard output. -c will
decompress multiple files to stdout, but will only
compress a single file to stdout.
-d --decompress
Force decompression. bzip2, bunzip2 and bzcat are
really the same program, and the decision about
what actions to take is done on the basis of which
name is used. This flag overrides that mechanism,
and forces bzip2 to decompress.
-z --compress
The complement to -d: forces compression, regard-
less of the invocation name.
-t --test
Check integrity of the specified file(s), but don't
decompress them. This really performs a trial
decompression and throws away the result.
-f --force
Force overwrite of output files. Normally, _b_z_i_p_2
will not overwrite existing output files.
-k --keep
Keep (don't delete) input files during compression
or decompression.
-s --small
Reduce memory usage, for compression, decompression
and testing. Files are decompressed and tested
using a modified algorithm which only requires 2.5
bytes per block byte. This means any file can be
decompressed in 2300k of memory, albeit at about
half the normal speed.
During compression, -s selects a block size of
200k, which limits memory use to around the same
figure, at the expense of your compression ratio.
In short, if your machine is low on memory (8
megabytes or less), use -s for everything. See
MEMORY MANAGEMENT above.
-v --verbose
Verbose mode -- show the compression ratio for each
file processed. Further -v's increase the ver-
bosity level, spewing out lots of information which
is primarily of interest for diagnostic purposes.
-L --license -V --version
Display the software version, license terms and
conditions.
-1 to -9
Set the block size to 100 k, 200 k .. 900 k when
compressing. Has no effect when decompressing.
See MEMORY MANAGEMENT above.
--repetitive-fast
bzip2 injects some small pseudo-random variations
into very repetitive blocks to limit worst-case
performance during compression. If sorting runs
into difficulties, the block is randomised, and
sorting is restarted. Very roughly, bzip2 persists
for three times as long as a well-behaved input
would take before resorting to randomisation. This
flag makes it give up much sooner.
--repetitive-best
Opposite of --repetitive-fast; try a lot harder
before resorting to randomisation.
RECOVERING DATA FROM DAMAGED FILES
bzip2 compresses files in blocks, usually 900 kbytes long.
Each block is handled independently. If a media or trans-
mission error causes a multi-block .bz2 file to become
damaged, it may be possible to recover data from the
undamaged blocks in the file.
The compressed representation of each block is delimited
by a 48-bit pattern, which makes it possible to find the
block boundaries with reasonable certainty. Each block
also carries its own 32-bit CRC, so damaged blocks can be
distinguished from undamaged ones.
bzip2recover is a simple program whose purpose is to
search for blocks in .bz2 files, and write each block out
into its own .bz2 file. You can then use bzip2 -t to test
the integrity of the resulting files, and decompress those
which are undamaged.
bzip2recover takes a single argument, the name of the dam-
aged file, and writes a number of files "rec0001file.bz2",
"rec0002file.bz2", etc, containing the extracted blocks.
The output filenames are designed so that the use of
wildcards in subsequent processing -- for example, "bzip2
-dc rec*file.bz2 > recovered_data" -- lists the files in
the "right" order.
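A typical sequence, reusing the file names from the example
above, might be:

     bzip2recover file.bz2
     bzip2 -t rec*file.bz2
     bzip2 -dc rec*file.bz2 > recovered_data

Any piece which fails the integrity test can be removed
before the final step.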
bzip2recover should be of most use dealing with large .bz2
files, as these will contain many blocks. It is clearly
futile to use it on damaged single-block files, since a
damaged block cannot be recovered. If you wish to min-
imise any potential data loss through media or transmis-
sion errors, you might consider compressing with a smaller
block size.
PERFORMANCE NOTES
The sorting phase of compression gathers together similar
strings in the file. Because of this, files containing
very long runs of repeated symbols, like "aabaabaabaab
..." (repeated several hundred times) may compress
extraordinarily slowly. You can use the -vvvvv option to
monitor progress in great detail, if you want. Decompres-
sion speed is unaffected.
Such pathological cases seem rare in practice, appearing
mostly in artificially-constructed test files, and in low-
level disk images. It may be inadvisable to use bzip2 to
compress the latter. If you do get a file which causes
severe slowness in compression, try making the block size
as small as possible, with flag -1.
bzip2 usually allocates several megabytes of memory to
operate in, and then charges all over it in a fairly ran-
dom fashion. This means that performance, both for com-
pressing and decompressing, is largely determined by the
speed at which your machine can service cache misses.
Because of this, small changes to the code to reduce the
miss rate have been observed to give disproportionately
large performance improvements. I imagine bzip2 will per-
form best on machines with very large caches.
CAVEATS
I/O error messages are not as helpful as they could be.
Bzip2 tries hard to detect I/O errors and exit cleanly,
but the details of what the problem is sometimes seem
rather misleading.
This manual page pertains to version 0.9.0 of bzip2. Com-
pressed data created by this version is entirely forwards
and backwards compatible with the previous public release,
version 0.1pl2, but with the following exception: 0.9.0
can correctly decompress multiple concatenated compressed
files. 0.1pl2 cannot do this; it will stop after decom-
pressing just the first file in the stream.
Wildcard expansion for Windows 95 and NT is flaky.
bzip2recover uses 32-bit integers to represent bit posi-
tions in compressed files, so it cannot handle compressed
files more than 512 megabytes long. This could easily be
fixed.
AUTHOR
Julian Seward, jseward@acm.org.
http://www.muraroa.demon.co.uk
The ideas embodied in bzip2 are due to (at least) the fol-
lowing people: Michael Burrows and David Wheeler (for the
block sorting transformation), David Wheeler (again, for
the Huffman coder), Peter Fenwick (for the structured cod-
ing model in the original bzip, and many refinements), and
Alistair Moffat, Radford Neal and Ian Witten (for the
arithmetic coder in the original bzip). I am much
indebted for their help, support and advice. See the man-
ual in the source distribution for pointers to sources of
documentation. Christian von Roques encouraged me to look
for faster sorting algorithms, so as to speed up compres-
sion. Bela Lubkin encouraged me to improve the worst-case
compression performance. Many people sent patches, helped
with portability problems, lent machines, gave advice and
were generally helpful.